Deploy Kubernetes Anywhere

As you move into cloud-native development and deployment, let's look at the role Kubernetes plays and how you can get more versatility from your orchestration.


Are you joining the containers revolution? Start leveraging container management using Platform9's ultimate guide to Kubernetes deployment.

Containers decouple an application and its dependencies from the operating system. Because they do not package a full operating system the way virtual machine images do, containers save a significant amount of system resources: compute, memory, and disk space. They are also faster to download, update, deploy, and iterate on. As a result, containers have sparked a mini-revolution in the technology world and have been adopted by companies such as Google, Microsoft, and Amazon.
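To make the contrast concrete, here is a minimal sketch using Docker (the nginx:alpine image and the port mapping are illustrative choices, not part of this article's examples): the image is a few tens of megabytes rather than a full guest OS disk, and the container starts in seconds because no operating system has to boot.

# docker pull nginx:alpine
# docker run -d --name web -p 8080:80 nginx:alpine   # shares the host kernel; no guest OS
# docker ps                                          # up and serving within seconds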

The containers mini-revolution has also brought about fierce competition to meet the need for container orchestration and management. Kubernetes, the open-source container orchestrator that originated at Google, has emerged as the leading solution (alternatives include Amazon ECS and Docker Swarm) for three main reasons:

  • Cloud-native design: enable deploying and running next-generation applications
  • Open-source nature: innovate fast and avoid vendor lock-in
  • Portability: deploy anywhere, whether in the cloud, on-premises, in a VM, etc.

The following figure shows the role Kubernetes can play in your cloud-native deployments:


Kubernetes Container Orchestration

As you can see, Kubernetes can deploy and manage your containerized applications, which include NGINX, MySQL, Apache, and many others. It can provide placement, scaling, replication, monitoring, and other capabilities for containers.
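To get a feel for these capabilities, here is a hedged kubectl sketch (the deployment name nginx-deployment simply mirrors the Minikube example later in this article; any existing deployment would do):

# kubectl scale deployment nginx-deployment --replicas=3   # replication and scaling
# kubectl get pods -o wide                                 # placement: which node runs each replica
# kubectl rollout status deployment/nginx-deployment       # watch the change roll out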

Once you select your container orchestration platform, the next step is to deploy Kubernetes. As mentioned previously, Kubernetes is a portable solution. Because Kubernetes uses the same images and configuration, it works exactly the same way on your laptop, in the cloud, or on-premises.

1. Kubernetes-as-a-Service

These solutions let you deploy Kubernetes on a variety of infrastructure, whether in public clouds or on-premises. Advantages of choosing this approach for Kubernetes clusters include:

  1. Upgrades, monitoring, and support through the KaaS provider
  2. Easy expansion for hybrid-cloud or multi-cloud environments
  3. Single pane view of multiple clusters
  4. Highly available, multi-master Kubernetes clusters that are automatically scaled-up and scaled-down based on workloads
  5. Common enterprise integrations, such as SSO and isolated namespaces, plus the ability to deploy applications via Helm charts (see the sketch below)
  6. Cluster federation, providing a truly seamless hybrid environment across multiple clouds or datacenters

Kubernetes-as-a-Service

Examples of Kubernetes-as-a-Service solutions include Platform9 and StackPoint.io.
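To illustrate the Helm-based application deployment mentioned in the list above, here is a rough sketch (the repository URL, chart, and release name are illustrative, and the exact flags depend on your Helm version):

# helm repo add stable https://charts.helm.sh/stable
# helm install my-db stable/mysql    # Helm 3 syntax; Helm 2 uses: helm install --name my-db stable/mysql
# helm list                          # releases currently deployed to the cluster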

2. Hosted Infrastructure

Google Cloud Platform and Microsoft Azure provide Kubernetes through Google Container Engine (GKE) and Azure Container Service (ACS) respectively. Placing containers in the public cloud can get you started quickly, but your data will now reside outside the network perimeter and firewall.

Google’s GKE is ahead of the other public cloud offerings. Google has used containers extensively for internal projects through a cluster manager called Borg, giving it more than a decade of operational experience to tap into (source: TheNextPlatform). In contrast, Microsoft’s ACS is a much younger offering; Kubernetes support was introduced only in February 2017. However, ACS provides flexibility: users can choose their container orchestration platform (Kubernetes, Docker Swarm, or DC/OS) and can deploy containerized applications on Windows in addition to Linux. As shown below, GKE and ACS run entirely in the public cloud, with the Kubernetes service and infrastructure deployed and managed by the hosting provider.


Hosted Infrastructure for Kubernetes
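As a rough sketch of how little work a hosted cluster requires, the commands below create a small GKE cluster and point kubectl at it (the cluster name, zone, and node count are placeholders rather than recommendations):

# gcloud container clusters create demo-cluster --zone us-central1-a --num-nodes 3
# gcloud container clusters get-credentials demo-cluster --zone us-central1-a   # writes kubeconfig credentials
# kubectl get nodes                                                             # the three nodes should be Ready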

3. Local Deployment

Minikube is the most popular way to deploy Kubernetes locally. It supports a variety of hypervisors, including VirtualBox, VMware Fusion, KVM, and xhyve, and runs on macOS, Windows, and Linux. The following illustration describes a Minikube deployment further:

Deployment with Minikube

As shown above, the user interacts with this laptop deployment through both the Minikube CLI and kubectl, the native Kubernetes CLI. The Minikube CLI is used to start, stop, and delete the virtual machine, check its status, and perform other actions on it. Once the Minikube virtual machine is running, kubectl performs actions on the Kubernetes cluster inside it. The following commands start an existing Minikube virtual machine and create an NGINX Kubernetes deployment:

# minikube start
# cat > example.yaml <<EOF
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
EOF
# kubectl create -f example.yaml
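Once the deployment exists, you can verify it, expose it, and shut the environment down. The following sketch reuses the names from the example above; the NodePort service and URL lookup are illustrative additions:

# kubectl get deployments                                                 # nginx-deployment should report one available replica
# kubectl expose deployment nginx-deployment --type=NodePort --port=80   # create a NodePort service for it
# minikube service nginx-deployment --url                                # print a URL reachable from the host
# minikube stop                                                          # shut the VM down when finished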

Rundown

Kubernetes-as-a-service, Kubernetes hosted infrastructure, and Minikube are only three ways to deploy Kubernetes. Please refer to the following deployment guide for a detailed analysis of various deployment models, considerations, pros/cons, and head-to-head comparisons.

Next Steps

Using containers? Read our Kubernetes Comparison eBook to learn the pros and cons of Kubernetes, Mesos, Docker Swarm, and EC2 Container Service.
