The Gorilla Guide to Kubernetes in the Enterprise - Chapter 1: The Changing Development Landscape
Containers, serverless computing, and cloud-native technology have greatly changed the software industry.
Software Delivery Has Evolved
The way we build and run applications has changed dramatically over the years. Traditionally, apps ran on top of physical machines. Those machines eventually became virtual. In both cases, the application and all its dependencies were installed on top of an OS.
This relationship between OS and applications created a tightly-coupled bundle of everything needed to run that application. Each virtual machine (VM) ran a complete OS, no matter how big or small the VM was, or how demanding the application on top.
Each OS provided a complete execution environment for applications: this included binaries, libraries and services, as well as compute, storage, and networking resources.
The main drawbacks of this approach are the inherent size and overhead of VMs. Each OS is many gigabytes in size, which not only requires storage space, but also increases the memory footprint.
This size and tight coupling result in a number of complexities in the VM lifecycle and the applications running on top. Without a good way of separating the different layers in a VM (OS, libraries, services, application binaries, configuration, and data), swapping out different parts of this layer cake is nearly impossible. For this reason, once a VM is built, configured, and running, it usually lives on for months or years. This leads to pollution of the VM and an irreversible entangling of OS, data, and configuration.
New versions of the OS, its components, and other software inside the VM are layered on top of the older versions. Because of this, each in-place upgrade creates potential version conflicts, stability problems, and a buildup of obsolete versions on disk. Maintaining this ever-increasing complexity is a major operational pain point, and often leads to downtime.
This places an unbalanced operational focus on the OS and underlying layers, instead of the place it should be: the application.
Operational friction, an unnecessarily large and long-lived operating environment, and a lack of decoupling between layers are all in sharp contrast with how lean and agile software development works. It's no surprise, then, that the traditional approach doesn't work for modern software development.
In the new paradigm, developers actively break down work into smaller chunks, create (single-piece) flow, and take control and ownership over the pipeline that brings code from local testing all the way to production. Containers, microservices, and cloud-native application design are facilitating this.
The Benefits of Creating Cloud-Native Applications
Let's break down how these technologies enable modern software development methodologies.
Containers
First and foremost, containers package up only the parts of the application unique to that container, like the business logic. Containers share the underlying OS and often common libraries, frameworks, or other pieces of middleware. This results in much lighter packages (containers are usually megabytes, instead of the gigabytes that are typical with VMs), and are clearly decoupled from the layer cake underneath.
This decoupling creates a one-to-one relationship between the container image and the application, which unlocks the full benefits of containers.
A container can be spun up on different hosts, clusters or clouds without any change to the container or its definition. Decoupling from the OS underneath makes it simpler to maintain those underlying layers. The OS becomes a commodity to developers: a black box layer that just works. Developers no longer have to think about that layer.
This allows easier and automated updating and changing of the layers underneath. Because the layers are decoupled, production systems are rarely patched or updated in place: a fresh instance with the new OS version is deployed, and the old instance with the old version is discarded.
The same goes for a new version of the application inside the container: instead of updating the container, a new container with the new version is deployed, and traffic is diverted to that new container. The old one is killed as soon as the new container is operating correctly.
This approach is called "immutable infrastructure": running components are replaced rather than changed, creating a clear separation between the application, the operating system, and the underlying infrastructure. This allows easier and more independent changes in each layer. Operationally, this makes a world of difference, as different teams can take more ownership of, and responsibility for, each layer.
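To make this concrete, here's a minimal sketch of what that replace-instead-of-patch pattern looks like in a Kubernetes Deployment. The names and image are hypothetical; the rolling update settings tell Kubernetes to stand up new-version pods before retiring old ones.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                # hypothetical application name
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0       # never drop below full capacity
      maxSurge: 1             # add one new-version pod at a time
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: example.com/my-app:2.0.0   # new immutable image tag
```

Changing the image tag and re-applying the manifest replaces the old containers with new ones rather than updating them in place, and `kubectl rollout undo deployment/my-app` reverts to the previous version.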
With this decoupling comes a new interface between the OS and container, giving developers freedom to deploy new versions of their applications without intervention from the teams managing the layers underneath. This gives developers more control over when to deploy what to production. Rolling back a bad release or redirecting more traffic to a new version is a simple task, without friction or dependency on the infrastructure or operations teams.
In turn, the infrastructure and operations teams can take more control over their parts of the layer cake, enabling paradigms like Infrastructure-as-Code that allow treating infrastructure as a software development problem. This enables solutions like creating declarative code that instructs a pipeline of infrastructure automation software how to create and configure infrastructure.
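As a small, hedged illustration of that declarative style (the namespace and values are made up for this example), the following manifest describes a desired state that an automation pipeline can apply repeatedly with the same result:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-payments         # hypothetical team namespace
---
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
  namespace: team-payments
spec:
  limits:
    - type: Container
      defaultRequest:         # defaults the scheduler uses for placement
        cpu: 100m
        memory: 128Mi
      default:                # default hard limits for containers
        cpu: 500m
        memory: 256Mi
```

A command like `kubectl apply -f` run from a pipeline converges the cluster toward this description, which is the essence of treating infrastructure as code.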
Cloud-Native Services
While containers are a great fit for custom business logic and code, many of the moving parts of an application stack are standard and common components. Instead of re-inventing the wheel, using commercially available and/or open source software for those components makes sense. Outside of a few niche and extreme use cases, why build your own database engine, caching layer, or web server?
That's why many public cloud providers offer those components and middleware as a service; the goal is to make consumption as frictionless as possible. Developers can simply configure the entire software stack with a few clicks, using databases, proxies, web servers, message queues and much more.
But cloud-native means more than simply consuming existing technology as a service. The Cloud Native Computing Foundation, or CNCF for short, defines "cloud-native" as follows:
Cloud-native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach. These techniques enable loosely coupled systems that are resilient, manageable, and observable. Combined with robust automation, they allow engineers to make high-impact changes frequently and predictably with minimal toil.
This definition puts the focus on more than just a set of technological tools. It encompasses business outcomes like scalability, dynamic behavior, and resiliency; standards regarding certain patterns of methodology and design like immutability and frequent changes; and a focus on operational excellence with abilities like decoupling, observability, and automation.
It's this comprehensive approach that makes cloud-native so appealing: it's not just about technology, but about how tech is used within organizations, and what outcomes are achieved.
This creates an integrated ecosystem of products that checks all the boxes of CNCF's definition, and which organizations can use to hit the ground running. As such, it eliminates much of the groundwork of design, integration, and implementation that otherwise takes a lot of time.
CNCF's biggest and highest-velocity projects are integrated and broad, including:
Kubernetes is a container orchestration platform that helps users build, scale and manage modern applications and their dynamic lifecycles. The cluster scheduler capability lets developers focus on code rather than ops. Kubernetes future-proofs infrastructure management on-premises or in the cloud, without vendor or cloud provider lock-in.
Prometheus delivers real-time monitoring, alerting, and time series database capabilities (including powerful queries and visualizations) for cloud-native applications. It's the de facto standard for monitoring container-based infrastructure. Prometheus provides needed visibility into, and troubleshooting for, cloud-native architectures.
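As an example of what that looks like in practice, here's a minimal sketch of a Prometheus alerting rule. It assumes kube-state-metrics is installed (it exposes the `kube_pod_container_status_restarts_total` metric used here); the thresholds are illustrative only:

```yaml
groups:
  - name: container-health            # hypothetical rule group
    rules:
      - alert: PodRestartingFrequently
        # Fire when a container keeps restarting over a 15-minute window
        expr: rate(kube_pod_container_status_restarts_total[15m]) > 0
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Pod {{ $labels.pod }} is restarting frequently"
```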
Envoy is a distributed proxy designed for single services and applications, as well as a universal data plane designed for large microservice service mesh architectures. Envoy runs alongside every application, and abstracts the network by providing common features in a platform-agnostic manner. It's easy to visualize problem areas via consistent observability, tune overall performance, and add substrate features in a single place.
CoreDNS is a DNS server, written in Go. It can be used in a multitude of environments because of its flexibility.
Besides these four, there are many additional projects that are relevant to Kubernetes in 2019. The most notable include:
- Fluentd. This is a unified logging tool that helps users better understand what's happening in their environments by providing a unified layer for collecting, filtering, and routing log data.
- NATS. This is a simple, high-performance open source message queueing and publish/subscribe system for cloud-native applications.
- gRPC. This is a high-performance, open source universal RPC framework.
- Containerd. This is an industry-standard container runtime with an emphasis on simplicity, robustness and portability.
- Linkerd. An ultralight service mesh for Kubernetes and beyond, Linkerd provides observability, reliability, and security for microservices, with no code change required.
- CNI. The Container Network Interface provides networking for Linux containers.
- CSI. This stands for Container Storage Interface. It provides storage for Linux containers. See more on Kubernetes Storage and CSI.
- Helm. This is the package manager for Kubernetes. Helm is the best way to find, share, and use software built for Kubernetes.
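To give a flavor of the Helm entry above, here's a minimal, hypothetical chart definition (this assumes Helm 3, where charts declare `apiVersion: v2`):

```yaml
# Chart.yaml: metadata for a chart that packages an app's manifests
apiVersion: v2                # Helm 3 chart format
name: my-app                  # hypothetical chart name
description: Packages the my-app Deployment and Service
version: 0.1.0                # version of the chart itself
appVersion: "2.0.0"           # version of the application it deploys
```

With the chart's templates in place, `helm install my-app ./my-app` renders and applies the whole stack in one step.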
Of course, there are numerous software projects not part of the CNCF that fit into the ecosystem very well. Examples include Istio, the popular service mesh, and Terraform, the composable infrastructure automation tool.
Why Kubernetes
Let's look at the CNCF's most popular project, Kubernetes. Kubernetes is the orchestration layer that manages containers across a group of physical servers or VMs. Kubernetes is specifically designed to manage the ephemeral nature of thousands of containers spinning up, scaling up, and winding down.
Kubernetes manages the versioning of containers, figures out how containers can talk to each other over the network, exposes services running inside containers, and handles storage considerations. It also deals with failed hardware and maintaining container availability.
Kubernetes makes it easy to quickly ramp up container instances to match spikes in demand. New versions can be put into production in small increments (these are known as canary deployments).
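One common, if simplistic, way to do this with plain Kubernetes objects is to run two Deployments behind one Service, so the canary's share of traffic roughly follows its share of replicas. All names and images below are hypothetical:

```yaml
# Stable track: nine replicas serve ~90% of requests
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-stable
spec:
  replicas: 9
  selector:
    matchLabels: { app: web, track: stable }
  template:
    metadata:
      labels: { app: web, track: stable }
    spec:
      containers:
        - name: web
          image: example.com/web:1.0.0
---
# Canary track: one replica receives ~10% of requests
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-canary
spec:
  replicas: 1
  selector:
    matchLabels: { app: web, track: canary }
  template:
    metadata:
      labels: { app: web, track: canary }
    spec:
      containers:
        - name: web
          image: example.com/web:1.1.0
---
# The Service selects on `app` only, so it spreads traffic across both tracks
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector: { app: web }
  ports:
    - port: 80
      targetPort: 8080
```

Scaling the canary Deployment up and the stable one down gradually shifts traffic; service meshes offer finer-grained, percentage-based splits.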
Kubernetes can be thought of as a container-centric computing platform. It has much of the flexibility of Infrastructure-as-a-Service (in terms of managing compute, storage and networking resources), with the developer-friendly workflows and constructs found in Platforms-as-a-Service on top. These include deployment, scaling, load balancing, logging, monitoring, and composition of application containers across clusters of container hosts.
Kubernetes is more than just a container orchestrator or resource scheduler. On the infrastructure side, it aims to remove the toil of orchestrating compute, network, and storage resources. It also abstracts those constructs so application developers and operators can focus entirely on container-centric workflows and self-service operation.
On the container side, Kubernetes provides a platform for building customized workflows and higher-level automation. It integrates into the continuous integration/continuous delivery (CI/CD) pipelines developers use to bring code into production in a controlled, tested and automated fashion.
The platform brings together infrastructure operations and software development by design. It uses declarative, infrastructure-agnostic constructs to describe applications and how they interact, without the traditional close ties into the underlying infrastructure.
Kubernetes runs just as well on traditional on-premises infrastructure stacks as it does on third-party service provider and public cloud environments.
Developer Agility
We've seen that containers unlock the full benefits of Agile software development and operations. Creating smaller, portable container images that contain only the application increases developer velocity and the speed through the pipeline into production, which massively reduces the inertia of each release.
Creating "flow" is one of the core principles of Agile software development, and reducing the size of the piece of code moving through the developer's delivery pipeline without being blocked is critical.
Containers represent a major reduction in size compared to VMs, and help developers push code to production in smaller increments, and more often. This limits the impact of mistakes, as any change causing a mistake will be small; this makes changes quick and easy to roll back, thanks to image immutability. Developers can simply roll back to a previous version, without having to worry much about data consistency or data loss.
A major cause of mistakes in production is the lack of environmental consistency across development and production environments. With containers, the image is identical and immutable, no matter where it runs; this is true even if the underlying resources differ massively. So, if it runs on the developer's laptop, it will run in production.
A common blocker of the pipeline is the separation of concerns between development and operations. This typically leads to a dependency of the developer on the Ops team to install the new application version during deployment, often by using configuration management tooling like Chef or a package manager.
With containers, images are built automatically at build/release time and deployed as an atomic unit. This allows Ops to influence how images are built asynchronously from deployment, while developers retain full control during deployment. In a container configuration, dependencies are added as lines of code that either specify a specific version of that dependency, or pull in the latest version at build time. This helps with managing security vulnerabilities and keeping code secure (and lean), as dependencies are updated automatically and often.
While Kubernetes and the common underlying container runtime themselves don't deploy source code or build your application, they're easily integrated into CI/CD workflows and pipelines.
Cost Management
Similar to the move from physical to virtual servers, moving to containers optimizes resource usage. This lowers the cost of each application, as it runs more efficiently. As discussed before, a major difference between a VM and a container is its relative size: a container is orders of magnitude smaller than a VM. This makes it nimbler and more flexible, especially from a cost perspective: a container can run wherever it's cheapest, an important consideration with ephemeral compute instances, where the application is non-production or resilient enough to tolerate interruptions.
Secondly, many small containers are more easily scheduled across multiple hosts than fewer, bigger VMs: a classic instance of the "bin-packing problem."
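The scheduler packs containers onto nodes based on their declared resource requests, so telling Kubernetes how small your workloads really are is what makes tight packing possible. A hedged sketch, with made-up names and numbers:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: small-worker          # hypothetical workload
spec:
  containers:
    - name: worker
      image: example.com/worker:1.0.0
      resources:
        requests:             # what the scheduler uses to bin-pack
          cpu: 250m           # a quarter of a CPU core
          memory: 128Mi
        limits:               # hard caps enforced at runtime
          cpu: 500m
          memory: 256Mi
```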
The dynamic nature of containers in a Kubernetes cluster, using the Horizontal Pod Autoscaler, means that application cost goes hand-in-hand with application demand. While this is fantastic for scalability, it can have unintended consequences for the budget. The plethora of options muddies the waters quickly: even with the relatively simple cost model of physical servers, assigning a fraction of cost to a certain team, department, or application is difficult. Add in the complex offering of public cloud instance types, and it becomes nearly impossible to assign cost.
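For reference, here's a minimal sketch of the Horizontal Pod Autoscaler mentioned above, scaling a hypothetical Deployment on CPU utilization (older clusters use the `autoscaling/v2beta2` API version instead):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:             # the workload being scaled
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10             # also caps the cost of a demand spike
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # target average CPU across pods
```

Note how `minReplicas` and `maxReplicas` double as a crude budget guardrail.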
There are some solutions for cost control, like CloudHealth, the CoreOS Operator Framework, and Platform9's Arbitrage, that help assign cost across the multitude of layers in Kubernetes and the underlying public cloud or on-premises platform. These solutions figure out the charges for consumed infrastructure and assign them to clusters, namespaces, and pods inside Kubernetes. Besides the pods that run the actual applications, they also break out the cost of administrative, monitoring, logging, and idle resources.
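The building block these tools lean on is that resource consumption can be declared per namespace, so a namespace-per-team layout plus quotas gives cost a measurable boundary. A hypothetical example:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota
  namespace: team-payments    # hypothetical team namespace
spec:
  hard:
    requests.cpu: "10"        # total CPU the team's pods may request
    requests.memory: 20Gi
    limits.cpu: "20"
    limits.memory: 40Gi
    pods: "50"                # cap on the number of pods
```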
But in reality, many people apply the "guesstimate" method, especially in the early phases of containerization projects. And however unscientific it is, this method fits the reasoning behind the move toward containers and developer agility: create flow, increase velocity, and remove hurdles in the pipeline to production.
Cost control typically starts to matter only after implementation, once the tangible benefits of the system have manifested in day-to-day operations. Only after it's proven successful do the downsides, including cost sprawl, need to be reined in.
And here lies the true cost/benefit analysis: it's not just about controlling infrastructure costs, but developer costs, too: how much quicker can they move to production or roll back a faulty release, for instance, and what financial consequences, good or bad, does that have?
This brings us to the fundamental value of agility: smaller iterations of work. This means going through the "discover-plan-build-review" cycle much, much more often. Optimizing the developer's flow makes them more efficient, which in turn makes them less expensive. As more and more companies invest in software development, the cost balance is shifting from infrastructure to developer; given this, it makes more sense to optimize the higher-cost items.
Accelerate Project Timelines
For many developers, Kubernetes means "less friction." A production-grade Kubernetes platform usually includes monitoring, logging, tracing, release management for blue/green or canary deployments, automated testing in the pipeline, and automated deployment.
All of these reduce friction, making it cheap and easy to deploy software to production. This means less management overhead and associated processes, including approvals, change advisory boards, and release/deployment managers.
This is especially true for development in microservices environments, where boundaries between teams are carried forward in the services and products they deliver. These microservices are loosely-coupled, small, and independent pieces of a larger network of services that make up an application. All these services can be deployed and managed independently and dynamically, making it easier for a team to put a new piece of code into production without dependence on another team.
This gives teams the freedom to decide if they want to bring in an existing (paid-for) solution, or if they'll build it themselves. While existing solutions may be more expensive up front, the delivery timeframe is usually compressed significantly.
A Note on Kubernetes for Stateful Applications
Yes, containers can be stateful, too.
In the earlier stages of Kubernetes and container maturity, it was often believed that containers were only suitable for stateless workloads, and that storing any data or state in a container was impossible. This belief is wrong: both the underlying container runtime (which is often Docker) and Kubernetes fully support a wide variety of workloads, including stateful applications. Containers themselves are ephemeral and immutable, meaning that any file system changes are lost after the container shuts down.

But there are plenty of options for adding stateful storage to a container, ranging from NFS network shares to S3 object stores and full-fledged data center storage options like a SAN. Many organizations deploying Kubernetes actually use existing storage assets for stateful storage. Another popular option is a hyper-converged storage deployment pattern, like the open source Ceph or VMware's vSAN.
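As a brief illustration, here's a hedged sketch of how a container gets durable storage in Kubernetes: a PersistentVolumeClaim requests storage from whatever backend the cluster offers, and a pod mounts it. Names and sizes are hypothetical:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce           # mounted read-write by one node at a time
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: stateful-app          # hypothetical stateful workload
spec:
  containers:
    - name: db
      image: postgres:11
      volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: app-data
```

Because the volume lives outside the container, the data survives container restarts and rescheduling.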
Learn more about the key concepts and components in the Kubernetes storage architecture, and also check out this webinar on Kubernetes for stateful applications.
There's More
In the next posts, we'll dive deeper into the Kubernetes architecture, how to deploy Kubernetes on different types of infrastructure, Kubernetes use cases, and best practices for operating Kubernetes in production, at scale.