
Best Practices for Multi-Cloud Kubernetes

From our recently released Containers Guide, the author examines best practices for deploying and operating Kubernetes clusters across multiple clouds.


The 2018 State of the Cloud Survey shows that 81% of enterprises use multiple clouds. Public cloud computing services and modern infrastructure platforms enable agility at scale. As businesses seek to deliver value faster to their customers, it's no surprise that both public and private cloud adoption continue to grow at a healthy pace. In fact, according to the latest figures from IDC, worldwide server shipments increased 20.7% year-over-year to 2.7 million units in Q1 2018, and revenue rose 38.6%, marking the third consecutive quarter of double-digit growth.

Another exciting mega-trend is the emergence of containers as the best way to package and manage application components. Kubernetes, in turn, has been widely accepted as the standard way to deploy and operate containerized applications. And one of the key value propositions of Kubernetes is that it can help normalize capabilities across cloud providers.

But with these advances come new complexities. Containers address several DevOps challenges, but they also introduce a new layer of abstraction that needs to be managed. Kubernetes addresses some of the operational challenges, but not all. And Kubernetes is itself a distributed application that needs to be managed.

In this article, we will discuss best practices and guidelines for addressing the key operational challenges in deploying and operating Kubernetes clusters across different cloud providers. The perspective we take is that of an IT operations team building an enterprise Kubernetes strategy for multiple internal teams.



1. Leverage Best-Of-Breed Infrastructure

All cloud providers offer storage and networking services, as do on-premises infrastructure vendors. A question that arises when considering multi-cloud strategies is whether to use each provider's capabilities or an abstraction layer. While both approaches can work, it's generally prudent to minimize abstraction layers and use the vendor-native approach. For example, rather than run an overlay network in AWS, it may be best to use the CNI (Container Network Interface) plugin from AWS that offers native networking capabilities to Kubernetes. This approach also enables the use of other AWS services like security groups and IAM.
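As an illustration, here is a minimal sketch of leaning on a provider-native capability rather than an abstraction layer: a Service manifest that requests an AWS Network Load Balancer through an AWS-specific annotation (the service name, selector, and ports are hypothetical):

apiVersion: v1
kind: Service
metadata:
  name: web
  annotations:
    # Provider-native annotation understood by the AWS cloud provider integration
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080

The trade-off, of course, is that such annotations are not portable across clouds, so they are best confined to environment-specific portions of the manifests rather than shared application definitions.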

2. Manage Your Own (Upstream) Kubernetes Versions

Kubernetes is a fast-moving project, with new releases available every three months. A key decision to make is whether you want a vendor to test and curate Kubernetes releases for you or whether you want to allow your teams to directly use upstream releases.

As always, there are pros and cons to consider. Using a vendor-managed Kubernetes distribution provides the benefits of additional testing and validation. However, the Cloud Native Computing Foundation (CNCF) Kubernetes community itself has a mature development, test, and release process. The Kubernetes project is organized as a set of Special Interest Groups (SIGs), and the Release SIG is responsible for the processes that ensure the quality and stability of each new release. The CNCF also provides a Kubernetes Software Conformance program through which vendors can prove that their software is 100% compatible with the Kubernetes APIs.

Within an enterprise, it's best to use stable releases for production. However, some teams may want clusters with pre-GA features. The best bet is to provide teams with the flexibility of choosing multiple validated upstream releases, or trying newer versions as needed at their own risk.

3. Standardize Cluster Deployments via Policies

There are several important decisions to make when installing a Kubernetes cluster. These include:

  1. Version: the version of Kubernetes components to use.

  2. Networking: the networking technology to use, configured via a CNI plugin.

  3. Storage: the storage technology to use, configured via a CSI (Container Storage Interface) plugin.

  4. Ingress: the ingress controller to use for load balancing and reverse proxying external requests to your application services.

  5. Monitoring: an add-on for monitoring Kubernetes components and workloads in the cluster.

  6. Logging: a solution to collect, aggregate, and forward logs from Kubernetes components, as well as application workloads in the cluster, to a centralized logging system.

  7. Other Add-Ons: other services that need to run as part of a cluster, like DNS and security components.

While it's possible to go through these decisions for each cluster install, it is more efficient to capture the cluster installation as a template or policy which can be easily reused. Some examples of this are a Terraform script or a Nirmata Cluster Policy. Once the cluster installation is automated, it can also be invoked as part of higher-level workflows, like fulfilling self-service provisioning requests from a service catalog.
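For example, here is a minimal sketch of capturing some of these decisions in a reusable, version-controlled template using kubeadm's ClusterConfiguration (the API group and version vary by kubeadm release, and all values below are examples):

# kubeadm cluster template; check your kubeadm release for the exact API version
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: "v1.27.4"      # pinned, validated upstream release
networking:
  podSubnet: "192.168.0.0/16"     # must match the chosen CNI plugin's configuration
  serviceSubnet: "10.96.0.0/12"
  dnsDomain: "cluster.local"

Checking a template like this into version control makes every cluster install reviewable and repeatable, and higher-level tooling can fill in per-cluster values.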

4. Provide End-To-End Security

There are several items to consider for container and Kubernetes security, such as:

Image Scanning: container images need to be scanned for vulnerabilities before they are run. This step can be implemented as part of the Continuous Delivery pipeline before images are allowed into an enterprise's private registry.

Image Provenance: while image scanning checks vulnerabilities, image provenance ensures that only "trusted" images are allowed into a running cluster or environment.

Host & Cluster Scanning: in addition to securing images, cluster nodes also need to be scanned. Additionally, routinely running the Center for Internet Security (CIS) benchmarks for securing Kubernetes is a best practice.

Segmentation & Isolation: even when multi-tenancy is not a hard requirement, it's best to plan on sharing clusters across several heterogeneous workloads for increased efficiency and greater cost savings. Kubernetes provides constructs for isolation (e.g., Namespaces and Network Policies) and for managing resource consumption (Resource Quotas); a sketch of these controls follows at the end of this section.

Identity Management: in a typical enterprise deployment, user identity is provided by a central directory. Regardless of where clusters are deployed, user identity must be federated so that access can be easily controlled and applied in a consistent manner.

Access Controls: while Kubernetes does not have the concept of a user, it provides rich controls for specifying roles and permissions. Clusters can leverage default roles or use custom role definitions that specify sets of permissions. It's important that all clusters within an enterprise have common definitions for these roles and a way to manage them across clusters.

While each of these security practices can be applied separately, it makes sense to view these holistically and plan for a security strategy that works across multiple cloud providers. This can be achieved using security solutions like AquaSec, Twistlock, and others in conjunction with platforms like Nirmata, OpenShift, etc.
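As a concrete illustration of the segmentation and isolation constructs mentioned above, here is a minimal sketch of per-namespace controls (the namespace name and limits are hypothetical): a ResourceQuota to cap resource consumption, plus a default-deny NetworkPolicy:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "4"         # total CPU requests allowed in the namespace
    requests.memory: 8Gi
    pods: "20"
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: team-a
spec:
  podSelector: {}             # selects all pods in the namespace
  policyTypes:
    - Ingress                 # with no ingress rules listed, all ingress traffic is denied

Applying a baseline like this to every namespace, and then explicitly allowing only required traffic, keeps heterogeneous workloads from interfering with one another.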

5. Centralize Application Management

As with security, managing applications on Kubernetes clusters requires a centralized and consistent approach. While Kubernetes offers a comprehensive set of constructs that can be used to define and operate applications, it does not have a built-in concept of an application. This is actually a good thing, as it enables flexibility in supporting different application types and allows different ways of building more opinionated application platforms on Kubernetes.

However, there are several common attributes and features that any Kubernetes application management platform must provide. The top concerns for centralized application management for Kubernetes workloads are discussed below.

Application Modeling & Definition

Users need to define their application components and also compose applications from existing components. A core design philosophy in Kubernetes is its declarative nature, where users can define the desired state of the system. The Kubernetes workloads API offers several constructs to define the desired state of resources. For example, Deployments can be used to model stateless workload components. These definitions are typically written as a set of YAML or JSON manifests. However, developers need to organize and manage these manifests, typically in a Version Control System (VCS) like Git.
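For example, a minimal sketch of a Deployment manifest for a stateless component (the name, labels, and image are hypothetical):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: catalog
spec:
  replicas: 3                  # desired state; Kubernetes reconciles the cluster toward it
  selector:
    matchLabels:
      app: catalog
  template:
    metadata:
      labels:
        app: catalog
    spec:
      containers:
        - name: catalog
          image: registry.example.com/catalog:1.4.2   # hypothetical image reference
          ports:
            - containerPort: 8080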

While developers will want to define and manage portions of the application manifests, other portions specify operational policies and may be specific to runtime environments. These portions are best managed by operations teams. Hence, the right way to think of an application manifest is as something composed dynamically in a pipeline, just before deployment and updates.

A Kubernetes project that helps with some of these challenges is Helm, a package manager for Kubernetes. It makes it easy to group, version, deploy, and update applications as Helm Charts.
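A chart is described by a Chart.yaml descriptor at its root; a minimal sketch (field values are examples; apiVersion v2 corresponds to Helm 3, while older charts use v1):

apiVersion: v2
name: catalog
version: 0.2.0                 # chart version, bumped whenever the packaging changes
appVersion: "1.4.2"            # version of the application the chart deploys
description: Catalog service packaged as a versioned, reusable chart

Templates under the chart's templates/ directory render the actual manifests, with environment-specific values supplied at deploy time.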

Kubernetes application platforms must provide easy ways to model, organize, and construct application manifests and Helm Charts, with proper separation of concerns between development and operational resources. The platform must also provide validation of the definitions to catch common errors as early as possible, along with easy ways to reuse application definitions.

Environments — Application Runtime Management

Once applications are modeled and validated, they need to be deployed to clusters. However, the end goal is to reuse clusters across different workloads for greater efficiencies and increased cost savings. Hence, it's best to decouple application runtime environments from clusters and to apply common policies and controls to these environments.


Kubernetes allows creating virtual clusters using Namespaces and Network Policies. Kubernetes application platforms should make it easy to leverage these constructs and create environments with logical segmentation, isolation, and resource controls.
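A minimal sketch of such an environment (names and limits are hypothetical): a labeled Namespace plus a LimitRange that supplies default resource requests and limits for containers that don't declare their own:

apiVersion: v1
kind: Namespace
metadata:
  name: payments-staging
  labels:
    environment: staging       # used later for environment-aware policies and alerts
---
apiVersion: v1
kind: LimitRange
metadata:
  name: container-defaults
  namespace: payments-staging
spec:
  limits:
    - type: Container
      default:                 # applied when a container specifies no limits
        cpu: 500m
        memory: 256Mi
      defaultRequest:          # applied when a container specifies no requests
        cpu: 100m
        memory: 128Mi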

Change Management

In many cases, runtime environments will be long-lived, and changes will need to be applied to them in a controlled manner. The changes may originate from a build system or from an upstream environment in the delivery pipeline.

Kubernetes application platforms need to offer integrations with CI/CD tools and monitor external repositories for changes. Once changes are detected, they should be validated and then handled based on each environment's change management policies. Users should be able to review and accept changes, or fully automate the update process.
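As one illustration of this pattern (not something the article prescribes), a GitOps tool such as Argo CD can watch a manifest repository and apply changes according to a per-environment sync policy; the repository and path values below are hypothetical:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: catalog-staging
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/catalog-manifests.git
    targetRevision: main
    path: overlays/staging          # environment-specific portion of the manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: payments-staging
  syncPolicy:
    automated:                      # auto-apply detected changes in this environment
      prune: true
      selfHeal: true

For a production environment, the automated block could be omitted so that changes wait for explicit review and approval.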

Application Monitoring

Applications may be running in several environments and in different clusters. With regard to monitoring, it's important to have the means to separate the signal from the noise and focus on application instances. Hence, metrics, states, and events need to be correlated with application and runtime constructs. Kubernetes application platforms must offer integrated monitoring with automated granular tagging so that it's easy for users to drill down and focus on application instances in any environment.
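One common tagging convention is Kubernetes' recommended labels, which can be attached to every object an application comprises; a sketch of the metadata portion of a manifest (values are hypothetical):

metadata:
  labels:
    app.kubernetes.io/name: catalog
    app.kubernetes.io/instance: catalog-staging   # distinguishes instances of the same app
    app.kubernetes.io/version: "1.4.2"
    app.kubernetes.io/part-of: storefront
    environment: staging                          # custom label for environment-aware queries

Monitoring queries can then group or filter by these labels to isolate a single application instance in a single environment.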

Application Logging

Similar to monitoring, logging data needs to be correlated with application definitions and runtime information, and it should be accessible for any application component. Kubernetes application platforms must be able to stream and aggregate logs from different running components. If a centralized logging system is used, it's important to apply the necessary tags to be able to separate logs from different applications and environments, and also to manage access across teams and users.

Alerting & Notifications

To manage service levels, it's essential to be able to define custom alerts based on any metric, state change, or condition. Once again, proper correlation is required to separate alerts that require immediate action from the rest. For example, if the same application deployment is running in several environments like dev-test, staging, and production, it is important to be able to define alerting rules that trigger only for production workloads. Kubernetes application platforms must provide the ability to define and manage granular alerting rules that are environment- and application-aware.
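For example, with Prometheus-style alerting rules (the metric and label names below are hypothetical), an environment label makes it straightforward to page only on production:

groups:
  - name: availability
    rules:
      - alert: HighErrorRate
        expr: |
          sum(rate(http_requests_total{status=~"5..", environment="production"}[5m]))
            / sum(rate(http_requests_total{environment="production"}[5m])) > 0.05
        for: 10m                      # condition must hold for 10 minutes before firing
        labels:
          severity: page
        annotations:
          summary: "Error rate above 5% in production"

The same rule scoped to environment="staging" might route to a chat channel instead of an on-call pager.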

Remote Access

Cloud environments tend to be dynamic, and containers elevate this dynamic nature to a new level. Once problems are detected and reported, it's essential to have a quick way to access the impacted components in the system. Kubernetes application platforms must provide a way to launch a shell into running containers and to access container runtime details, without having to reach cloud instances via VPN and SSH.

Incident Management

In a Kubernetes application, it's possible that a container exits and is quickly restarted. The exit may be part of a normal workflow, like an upgrade, or may be due to an error like an out-of-memory condition. Kubernetes application platforms must be able to recognize failures and capture all the details of the failure for offline troubleshooting and analysis.

Summary

Containers and Kubernetes allow enterprises to leverage a common set of industry best practices for application operations and management across cloud providers. All major cloud providers, and all major application platforms, have committed to supporting Kubernetes. This includes Platform-as-a-Service (PaaS) solutions, where developers provide code artifacts and the platform does the rest; Container-as-a-Service (CaaS) solutions, where developers provide container images and the platform does the rest; and Functions-as-a-Service (FaaS) solutions, where developers simply provide functions and the platform does the rest. Kubernetes has become the new cloud-native operating system.

When developing a multi-cloud Kubernetes strategy, enterprises must consider how they wish to consume infrastructure services, manage Kubernetes component versions, design and manage Kubernetes clusters, and define common layers for security and application management.


