Enterprise Hybrid Cloud and Federated Kubernetes
Fast-evolving Kubernetes federation technology (made manageable by SaaS) may finally deliver on long-promised benefits of open hybrid cloud.
Cloud vendors and visionaries have long promoted the hybrid cloud (a coordinated mix of on-premises private cloud with one or more public clouds) as an ideal path for balancing security and control against agility and flexibility; for opening opportunities for cost arbitrage, optimization, and reduction; for managing complexity; and for mitigating the risk of single-vendor lock-in.
In a recent blog post, my colleague Akshai Parthasarathy and I outlined key strategic considerations for deciding on an enterprise hybrid cloud approach and made a case for hybrid’s benefits to virtually all large- and medium-sized organizations as well as select smaller businesses.
We made the point that gaining the maximum benefit from a hybrid (or any other) cloud strategy demands adopting newer technologies for composing, hosting, orchestrating, and managing applications. These include:
- Containers, which hide complexity, vastly accelerate deployment, and enable fluid workload mobility across disparate hosts.
- Container hosting and orchestration frameworks, which provide a standardized, abstract host environment for workloads, and automate deployment, lifecycle management, resilience, and scaling of containerized applications. The most important of these is now Kubernetes: a fast-evolving, practical, and performant open source solution with a large and growing community, widely supported on both public and private clouds.
- Microservice-based application architectures, which enable real-time, horizontal scaling of granular, containerized application components based on dynamically-changing performance requirements, and support high availability via component redundancy and load balancing.
- Platform-as-a-Service, Serverless computing, and similar paradigms, which exploit containers, container orchestrators, and service APIs to conceal platform and cloud complexity and let developers focus on applications.
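To make the orchestration points above concrete, a minimal Kubernetes Deployment manifest for a containerized microservice might look like the following sketch (the service name and image registry are hypothetical, not taken from any real system):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: catalog-service          # hypothetical microservice name
spec:
  replicas: 3                    # redundancy for availability and load balancing
  selector:
    matchLabels:
      app: catalog-service
  template:
    metadata:
      labels:
        app: catalog-service
    spec:
      containers:
      - name: catalog-service
        image: registry.example.com/catalog-service:1.4.2  # hypothetical image
        ports:
        - containerPort: 8080
```

Because the same manifest is understood by any conformant Kubernetes cluster, it can be applied unchanged to a private cluster or to one hosted on a public cloud.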
More than the Infrastructure-as-a-Service (IaaS) technologies preceding (and, in some cases, hosting) them, Kubernetes-based container and microservice strategies are positioned to play an increasingly important role in enabling some of the most critical affordances on which enterprise hybrid cloud success depends. These include:
Real, dependable, and rapid workload portability. Standalone containerized workloads can be deployed instantly to any Kubernetes cluster running the same version, located anywhere (i.e., on private or public cloud(s)), as can more complex, multi-container applications (so long as these are composed using well-understood and widely supported techniques and tools for service discovery and dynamic self-configuration). This enables Ops to locate workloads optimally to meet requirements for security, performance, and latency; to satisfy cost-optimization guidelines; or to meet other business criteria.
Continuum of self-service capabilities across private and public platforms. Using container techniques to pre-build, standardize, and make development environments, tools, and components universally deployable across private and public Kubernetes clusters lowers Ops overhead, helps developers be more productive, eliminates many sources of QA problems, and supports modern, agile development processes such as CI/CD.
Rapid scaling and automated resiliency. Up to the limits of local cluster capacity, a Kubernetes implementation can easily scale out, load balance, and automate high availability of applications.
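The automated scaling just described is typically configured with a HorizontalPodAutoscaler. As a sketch, the manifest below (using the stable `autoscaling/v1` API; the target Deployment name is hypothetical) scales a workload between floor and ceiling replica counts based on observed CPU utilization:

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: catalog-service-hpa
spec:
  scaleTargetRef:                # the Deployment this autoscaler manages
    apiVersion: apps/v1
    kind: Deployment
    name: catalog-service        # hypothetical Deployment name
  minReplicas: 3                 # floor: maintained even when idle
  maxReplicas: 20                # ceiling: bounded by local cluster capacity
  targetCPUUtilizationPercentage: 70
```

Note that, as stated above, a single cluster can only scale out to the limits of its own capacity; going beyond that requires federation, discussed below.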
Multi-Cloud Operational Abstraction
However, Kubernetes in its simplest implementation — as separate clusters running on-premise IaaS or bare metal, or on various public cloud platforms — is still missing several affordances needed to fully realize hybrid cloud’s long-promised potential.
The first of these might be called "operational abstraction": a way of reducing the significant new complexity, and the provider- and platform-specific knowledge, required to spin up and lifecycle-manage individual Kubernetes clusters and groups of clusters running on multiple public cloud platforms and private cloud infrastructure, each with its own operations tools, requirements, and configuration details.
SaaS managed solutions deliver this needed operational abstraction through a "single pane of glass," enabling consumption of Kubernetes as a service in an infrastructure- and provider-agnostic way: functioning equally well on private infrastructure (e.g., as a companion to OpenStack on bare metal, or hosted on OpenStack) or on public clouds from Amazon (AWS), Microsoft (Azure), and Google (GCP). By lightly abstracting Kubernetes in this way, users enjoy one low-complexity process model for operations; one set of compatible APIs for automation; and dependable, issue-free workload mobility, without the heavy cost and flexibility downsides of a solution deployed on the customer's premises (e.g., Red Hat OpenShift or CoreOS Tectonic).
Resource Abstraction, Scaling, Bursting, and Availability
The second critical affordance for enabling a true hybrid cloud is resource abstraction: the ability (up to whatever point is practical given technical characteristics and operational requirements) to treat multiple clouds/clusters as a single pool of virtualized resources.
This is the province of Kubernetes Federation: a fast-evolving standard for placing multiple Kubernetes clusters, running on disparate hosts, under the management of a specialized federated control plane. Setting up a Kubernetes federation manually isn't simple:
- A common, top-level DNS must be provided.
- Naming conventions for member clusters and other entities are somewhat strict, so that clusters and their components can be addressed using internet standards-compliant names.
- Credentials for each member cluster must be collected and provided to the federation host.
- An admission controller, policy engine, and other common components (ConfigMaps, DaemonSets, autoscalers, ReplicaSets, and other constructs also relevant to individual Kubernetes clusters) must be configured.
Within the next several months, Platform9 plans to introduce the ability to rapidly configure, deploy, operate, and lifecycle-manage federated Kubernetes control planes across diverse public cloud hosts as well as private clouds.
Once a Kubernetes Federation is established, users gain a range of new and extremely powerful tools for consuming cloud resources rapidly and efficiently, and for automating a host of complex, intelligent operations with relatively little effort. A single command and .yaml file let you define an application deployment on all the federation's underlying clusters; the clusters then collaborate to ensure that the required number of replicas are spread evenly across them (unless configured otherwise) and kept alive. Updates can be propagated automatically to deployments across all clusters.
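As a sketch of that workflow: the same Deployment manifest used on a single cluster is simply submitted to the federation control plane's API server instead (e.g., `kubectl --context=federation apply -f deployment.yaml`, where the context name is an assumption). Federation v1 also exposed cluster-placement preferences through an annotation; the exact annotation key and JSON schema varied by resource type and release, so the form below is illustrative only and should be checked against your federation version's documentation:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: catalog-service          # hypothetical application name
  annotations:
    # Placement preferences for the federated scheduler. Key and schema
    # are release-dependent assumptions -- verify against your version.
    federation.kubernetes.io/deployment-preferences: |
      {
        "rebalance": true,
        "clusters": {
          "aws-us-east": {"weight": 2},
          "onprem":      {"weight": 1}
        }
      }
spec:
  replicas: 9                    # total replicas, divided among member clusters
  selector:
    matchLabels:
      app: catalog-service
  template:
    metadata:
      labels:
        app: catalog-service
    spec:
      containers:
      - name: catalog-service
        image: registry.example.com/catalog-service:1.4.2  # hypothetical image
```

Absent preferences, the control plane spreads the nine replicas evenly; with weights as above, the hypothetical public cloud cluster receives roughly twice the share of the on-premises cluster.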
Scaling in a Kubernetes Federation can be configured to respect, or effectively ignore, cluster boundaries. Federated HPAs (Horizontal Pod Autoscalers) can be used to ensure that workloads in a federation-wide deployment are spun up automatically where required, and moved around to meet local load demands and configured policy objectives. This enables many kinds of automated and deliberate optimization long viewed as essential to a fully realized hybrid cloud model, including:
Automated inter-provider scaling and/or cost- (or performance-)optimized workload placement: for example, preferentially placing replicas of commodity workloads on the public cloud host that currently has the most free reserve capacity, and hence the lowest available cost; or placing workloads for the fastest response time and lowest latency to users.
"Bursting," or automated scaling on demand: Often demonstrated in proofs-of-concept, but seldom in practical, generalized ways, Kubernetes Federation enables the use of (effectively limitless) public cloud resources to complement (always limited) private cloud capacity. Rather than tolerating degraded application availability under transient high load, apps can be configured to burst, via HPAs, from private cloud to public cloud: scaling out when demand is high, then scaling back when it tapers off.
High availability, made simpler: Federation offers a simple means of achieving arbitrarily high levels of reliability that "just works." You can distribute an application's workloads from private to public cloud; across multiple clusters in a single public cloud region; across geographically separate regions managed by a single provider; or across multiple public cloud providers' resources, eliminating the risk of downtime due to infrastructure problems, local internet and provider backbone issues, and even regional disasters.
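The bursting pattern above can be sketched as a federated HPA. In Federation v1 this reused the ordinary `autoscaling/v1` shape, submitted to the federation API server rather than to a single cluster; the federation control plane then partitions the min/max capacity among member clusters and shifts headroom toward clusters under load. The replica bounds and target name below are illustrative assumptions:

```yaml
# Submitted against the federation control plane, not one member cluster.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: catalog-service-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: catalog-service        # hypothetical federated Deployment
  minReplicas: 4                 # baseline, e.g. what private capacity sustains
  maxReplicas: 40                # headroom for bursting into public cloud
  targetCPUUtilizationPercentage: 70
```

Under sustained load the federation scales out toward the ceiling wherever capacity exists; when demand tapers off, replicas are scaled back toward the baseline.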
Hybrid Cloud’s Promise: Finally Delivered?
With the addition of Federation, Kubernetes, itself now the top of the modern enterprise cloud stack, is (at long last!) very close to delivering the full scope of benefits long promised by hybrid cloud strategy: agility, highly automated operations, numerous avenues for cost optimization, and practical access both to commoditized public cloud resources and to more secure (and, depending on the scale of use, often more cost-efficient) private cloud capacity.
Opinions expressed by DZone contributors are their own.