The Top 3 Things Holding Back Your Kubernetes Strategy and How to Fix Them
Check out some of the challenges that Kubernetes presents to enterprises and what you can do about them.
A reported 69% of organizations surveyed by the Cloud Native Computing Foundation (CNCF) use Kubernetes to manage containers. As Kubernetes becomes the new standard for container orchestration, a new set of challenges emerges, and enterprises often spend significant time managing their Kubernetes deployments rather than innovating. The most common barriers are security vulnerabilities and a lack of trust, a scarcity of skills and expertise, and navigating storage needs.
Why Kubernetes?
We can all agree that our industry is prone to hype, and sometimes we feel pressure to adopt a new technology simply because our peers and competitors do. Before diving into the challenges of adopting Kubernetes (K8s), let’s remind ourselves why someone should (or shouldn’t) bother.
The primary benefit of K8s is increased infrastructure utilization through the efficient sharing of computing resources across multiple processes. As your organization adopts more workloads with varying performance envelopes, the art of bin packing hundreds of microservices onto the available computing resources becomes more and more critical. Kubernetes excels at dynamically allocating computing resources to meet demand, which allows organizations to avoid paying for computing resources they are not using.
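To make this concrete, here is a minimal sketch of the mechanism behind that bin packing: resource requests and limits on a pod spec. The scheduler packs pods onto nodes based on their requests, while limits cap what a running container may consume. The workload name, image, and numbers below are purely illustrative.

```yaml
# A minimal Pod spec; the scheduler uses "requests" to find a node
# with enough spare capacity, which is what enables bin packing.
apiVersion: v1
kind: Pod
metadata:
  name: billing-worker                       # hypothetical workload
spec:
  containers:
    - name: worker
      image: example.com/billing-worker:1.4  # hypothetical image
      resources:
        requests:         # reserved capacity used for scheduling
          cpu: "250m"     # a quarter of a CPU core
          memory: "256Mi"
        limits:           # hard ceiling enforced at runtime
          cpu: "500m"
          memory: "512Mi"
```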
One commonly cited reason to adopt Kubernetes is to increase developer productivity. We do not believe you should cave to that pressure: Kubernetes is a complex beast to learn and use effectively, and it is yet another set of APIs to master. We’ll get back to this point later in the article.
So, if the benefits above sound worth it to you, let’s dive into the top three challenges typically faced by organizations going all-in on Kubernetes: security, storage, and expertise!
Challenge #1: Securing Your Kubernetes Deployment
Simply adding Kubernetes to your arsenal of data center tooling does not automatically make anything more or less secure. If anything, Kubernetes brings an additional set of knobs to tweak and offers a new and quite elegant set of features for addressing security and compliance concerns.
The challenge, in this case, is having to maintain two separate control planes for infrastructure and application security. Let me illustrate this with an example. Traditionally, Linux servers are managed via the secure shell (SSH) protocol. SSH underpins commonly accepted configuration management tools like Chef, Ansible, and Puppet, and nearly all CI/CD tools deploy applications via SSH. Most organizations have adopted robust policies and invested in modern tooling to implement security and enforce compliance for SSH, but the presence of Kubernetes creates another “door” into your infrastructure.
While the early-adopter crowd of K8s enthusiasts will happily proclaim the death of SSH, in reality all of us will have to wait many years before the existing tooling that depends on SSH can be retired. Meanwhile, the challenge is to keep RBAC policies synchronized between SSH access and Kubernetes access. How do you ensure that developers never touch or see production data via either the Kubernetes API or SSH? A new wave of open-source solutions, like Teleport, allows security professionals to do just that.
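As a rough sketch of what the Kubernetes half of such a policy can look like, the RBAC manifest below grants a hypothetical “developers” group read-only access to pods in a staging namespace and nothing anywhere else. The group name is an assumption; it has to match whatever your identity provider asserts, and keeping it in lockstep with the equivalent SSH access rules is precisely the synchronization problem described above.

```yaml
# Hypothetical RBAC policy: developers may view pods and logs in
# "staging" only; no Role or RoleBinding exists for "production".
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: staging
rules:
  - apiGroups: [""]                  # the core API group
    resources: ["pods", "pods/log"]
    verbs: ["get", "list", "watch"]  # read-only verbs
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: developers-read-pods
  namespace: staging
subjects:
  - kind: Group
    name: developers                 # assumed identity-provider group
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```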
Another challenge with Kubernetes security is that it’s simply another layer to consider, a layer that must be “plugged in” on top of the layers you already have. We have had operating system (OS)-level security; then we adopted the security layers offered by public and private clouds; and now Kubernetes offers security controls that operate at the microservice and SDN levels across your entire deployment. This adds complexity.
The good news is that these tools do not automatically make anything less secure, and they map quite well onto existing security practices. The challenge is to learn them and use them efficiently, especially in combination with the “lower-level” security controls offered by cloud providers, e.g., combining cloud network security groups with Kubernetes network policies.
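For instance, a cloud security group can fence off the cluster perimeter while a Kubernetes NetworkPolicy restricts traffic between pods inside it. The sketch below, with hypothetical labels and namespace, admits only frontend pods to the backend on one port and implicitly denies all other ingress:

```yaml
# Hypothetical NetworkPolicy: only pods labeled app=frontend may
# reach backend pods on TCP 8080; other ingress traffic is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
  namespace: production
spec:
  podSelector:            # the pods this policy protects
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:    # the only permitted callers
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```

Note that a NetworkPolicy is only enforced if the cluster’s network plugin supports it.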
Challenge #2: Navigating Kubernetes Storage Needs
Kubernetes's initial design favored applications that can be described as stateless network services: processes that consume only CPU, network, and memory and do not need any storage. Unfortunately, the world we live in is very much stateful, and most applications deliver value by manipulating data that must be stored somewhere.
Why has Kubernetes developed a reputation for being problematic for stateful applications like databases? Because in order to deliver the highest possible infrastructure utilization (its primary benefit), Kubernetes needs to be able to move applications from one server to another. If a database is “chained” to a local storage array, it can’t be moved away from it.
Consider several strategies to address this problem.
- Do not run databases under Kubernetes. In fact, most distributed databases like Cassandra or MongoDB predate Kubernetes and have similar capabilities built in: they’re perfectly capable of managing their own cluster state, replication, load balancing, and auto-scaling. Moreover, most cloud providers offer fully managed databases, so there is simply no need to host another one inside your Kubernetes cluster. The concept of external services can be used to make external databases visible to applications running inside K8s (a minimal sketch follows this list).
- Use network-attached storage. Generally speaking, not relying on locally attached storage is preferable because it allows you to scale storage resources independently from compute. This strategy does not work universally for all access patterns, but Kubernetes is now well equipped to handle the cases where it does: features like stateful sets and database-aware operators make it possible to run stateful applications inside Kubernetes (also sketched below).
- Choose storage solutions that align with microservices architecture principles and adhere to the requirements of container-native data services. This new generation of storage products is closely aligned with the scaling model of Kubernetes, and such solutions can be integrated directly with the application layer for portability, scaling, and data protection. In other words, consider adopting container-optimized storage and data management systems such as Portworx.
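To illustrate the first two strategies, here is a minimal sketch; the hostname, image, and sizes are hypothetical. The ExternalName service aliases a managed database under a stable in-cluster DNS name, and the stateful set’s volume claim template gives each replica a network-attached volume that follows it across reschedules.

```yaml
# Strategy 1: expose an external, managed database to in-cluster
# apps as a regular Service via DNS aliasing (hypothetical host).
apiVersion: v1
kind: Service
metadata:
  name: orders-db
spec:
  type: ExternalName
  externalName: orders.example-cluster.us-east-1.rds.amazonaws.com
---
# Strategy 2: a StatefulSet whose volumeClaimTemplate provisions a
# network-attached volume per replica, decoupling storage from nodes.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: queue
spec:
  serviceName: queue
  replicas: 3
  selector:
    matchLabels:
      app: queue
  template:
    metadata:
      labels:
        app: queue
    spec:
      containers:
        - name: queue
          image: example.com/queue:2.0   # hypothetical image
          volumeMounts:
            - name: data
              mountPath: /var/lib/queue
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```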
Challenge #3: Ensuring Your Enterprise Has the Right Expertise
Finally, I want to come back to the myth of “increased developer productivity” and talk a bit about the complaints we have heard while helping companies adopt Kubernetes in production.
Just consider for a moment the list of technologies a typical cloud-native application developer is exposed to on a daily basis. Let’s break it down into several sets:
- The core set, that is, the skills required to build and maintain an application. This obviously includes problem-domain knowledge, the basics of computer engineering, proficiency in programming languages, libraries, and frameworks, and the modern tooling for working on large code bases as part of a team. These are what most of us consider the “core skills” of a programmer.
- The systems APIs, i.e., dealing with the intricacies of the Linux distributions and network protocols an application runs on. Numerous high-profile languages and frameworks such as Java have promised “write once, run anywhere” nirvana, but it hasn’t fully materialized.
- Cloud APIs. Prior to the cloud era, infrastructure engineering was traditionally separated from application development, but in today’s world it is a programmer’s job to provision a software-defined network with load balancers and SSL certificates for their application. In fact, this fusion of infrastructure and software is celebrated: it is a defining characteristic of a cloud-native application!
- Packaging APIs, i.e., mastering tools like Docker in combination with the specialized package managers of different programming languages. Committing code to a repository is never enough; a developer is expected to provide instructions for a CI/CD solution to move her code from git to production.
Arguably, anything outside of the first set is a direct hit against the programmer’s productivity. The AWS APIs alone can take years to master, and nothing ever becomes obsolete or gets replaced; the tech world only keeps building new layers.
The challenge here is that Kubernetes isn’t powerful or high-level enough to make any of these obsolete. For example, it is not quite a replacement for virtualization (although it greatly reduces virtualization’s usefulness, and running K8s on bare metal can be quite attractive in some cases), and it does not free you from having to know what Docker is. There is no cure: to employ Kubernetes effectively, your development teams must allocate yet another slice of their brains to learning yet another set of APIs.
But knowing how to use Kubernetes is only half the battle; keeping it alive is the other half. The dreaded “management” becomes quite a burden. Various “single pane of glass” management solutions will make this job easier for your teams, but the need for management, and for acquiring the knowledge behind it, will not go away.
The Last Word
There are promising developments for addressing each of these three challenges. Cloud providers offer fully hosted Kubernetes solutions. There are also solutions for packaging and securing Kubernetes applications for cloud or even air-gapped environments to meet compliance requirements. And, finally, Kubernetes has its own package management tool, Helm, which makes it easy for developers to package an app, its related components, and its dependencies (a minimal chart manifest is sketched below). However you choose to tackle these challenges, you are in good company in turning to Kubernetes as a strategy for deploying your applications.
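As a closing illustration, a minimal Helm chart manifest might look like the following; the chart name, versions, and dependency are hypothetical, but declaring dependencies in Chart.yaml is the standard Helm 3 pattern.

```yaml
# Chart.yaml for a hypothetical application chart that pulls in a
# database chart as a dependency at packaging time.
apiVersion: v2
name: my-app
description: A hypothetical application packaged as a Helm chart
type: application
version: 0.1.0        # version of the chart itself
appVersion: "1.4.0"   # version of the packaged application
dependencies:
  - name: postgresql
    version: "12.x.x"
    repository: https://charts.bitnami.com/bitnami
```

Running `helm dependency update` followed by `helm install my-app ./my-app` would then fetch the declared dependency and deploy the application along with it.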