Auto-Scaling and Auto-Healing Kubernetes in a Hybrid Environment Using Cloudify


Follow this guide to set up Kubernetes to scale and heal itself while using Cloudify.


At OpenStack Israel 2016, I took part in a presentation comparing a few cloud orchestrators, among them Kubernetes and Cloudify. In short, I presented Kubernetes as a container-focused orchestrator and Cloudify as a more general-purpose one. The division isn't exact: Kubernetes has plenty of infrastructure integration, and Cloudify has been working with containers for several years.

I was talking about core focus.

At OpenStack Austin in April, we presented (video below) the first Cloudify Kubernetes Plugin and a hybrid cloud example. It deploys a Kubernetes Cluster and allows you to package Kubernetes configurations with Cloudify. The example we provided included a plain MongoDB deployed on a VM alongside a NodeJS app managed by Kubernetes.

This was a good start. It showed that it was possible to manage Kubernetes from Cloudify. However, managing Kubernetes with Cloudify is generally not a typical use case for developers. Kubernetes is successful because it makes managing containers easy.

Cloudify is good at what it does because it makes integrating various systems easy. We want Kubernetes to manage containers, and Cloudify to tie it in with other environments, including non-virtualized, stateful services or legacy apps; in other words, what we call a hybrid stack.

I thought the best place to start was to have Cloudify manage the underlying infrastructure. I recently adapted the Cloudify Kubernetes Plugin to fit what I've described: a Kubernetes Cluster running in an environment managed by Cloudify. It is a Cloudify blueprint that defines the following processes:

  1. Build OpenStack Infrastructure: VMs, Security Groups, etc.
  2. Deploy a Kubernetes Cluster (one master, two or more nodes).
  3. Auto-scaling of the VMs that the nodes run on, plus the nodes themselves.
  4. Auto-healing of the VMs that the nodes run on, plus the nodes themselves.
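
The steps above can be sketched as a blueprint skeleton along these lines. This is a minimal illustration based on Cloudify's OpenStack plugin types; the import versions, node names, and instance counts are assumptions, not the exact blueprint:

```yaml
tosca_definitions_version: cloudify_dsl_1_3

imports:
  # Versions are illustrative; use the versions matching your Cloudify manager.
  - http://www.getcloudify.org/spec/cloudify/3.4/types.yaml
  - http://www.getcloudify.org/spec/openstack-plugin/1.4/plugin.yaml

node_templates:

  kubernetes_security_group:
    type: cloudify.openstack.nodes.SecurityGroup
    properties:
      security_group:
        name: kubernetes_security_group

  # One master VM.
  kubernetes_master_host:
    type: cloudify.openstack.nodes.Server
    relationships:
      - type: cloudify.openstack.server_connected_to_security_group
        target: kubernetes_security_group

  # Worker-node VMs; start with two, scaled up or down by the policies below.
  kubernetes_node_host:
    type: cloudify.openstack.nodes.Server
    instances:
      deploy: 2
    relationships:
      - type: cloudify.openstack.server_connected_to_security_group
        target: kubernetes_security_group
```

The Kubernetes software itself is installed onto these hosts by lifecycle scripts attached to the node templates (omitted here for brevity).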

After deploying this, you have a Kubernetes Cluster that can run any configuration you send to Kubernetes. When Cloudify senses that the nodes are insufficient for the workload, it adds VMs in OpenStack, installs the Kubernetes node software on them, and joins the new nodes to the Kubernetes master.
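
For example, a standard Deployment manifest submitted to the master with kubectl is scheduled onto the Cloudify-managed node VMs just as on any other cluster. The app name and image below are illustrative (the API version reflects Kubernetes at the time of writing):

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nodejs-app        # illustrative name
spec:
  replicas: 3             # spread across the Cloudify-provisioned nodes
  template:
    metadata:
      labels:
        app: nodejs-app
    spec:
      containers:
        - name: nodejs-app
          image: node:6   # illustrative image
          ports:
            - containerPort: 8080
```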

Example auto-scaling code:
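
A minimal sketch of a threshold-based scaling group in the blueprint's `groups` section, modeled on Cloudify's policy DSL; the thresholds, limits, metric selector, and node names are illustrative assumptions:

```yaml
groups:
  scale_up_group:
    members: [kubernetes_node_host]
    policies:
      auto_scale_up:
        type: scale_policy_type
        properties:
          policy_operates_on_group: true
          scale_limit: 6                 # upper bound on node VMs
          scale_direction: '<'           # scale up when metric drops below...
          scale_threshold: 30            # ...this value
          service_selector: .*kubernetes_node_host.*cpu.total.user
          cooldown_time: 60              # seconds between scale operations
        triggers:
          execute_scale_workflow:
            type: cloudify.policies.triggers.execute_workflow
            parameters:
              workflow: scale
              workflow_parameters:
                delta: 1                 # add one node VM per trigger
                scalable_entity_name: kubernetes_node_host
```

When the policy fires, Cloudify's built-in `scale` workflow provisions another VM and the attached lifecycle scripts join it to the master.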

Example auto-healing code:
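
A minimal sketch of an auto-healing group using Cloudify's built-in `host_failure` policy type; the monitored metric and node names are illustrative:

```yaml
groups:
  heal_group:
    members: [kubernetes_node_host]
    policies:
      simple_autoheal_policy:
        type: cloudify.policies.types.host_failure
        properties:
          service:
            # Treat loss of this metric stream as host failure.
            - .*kubernetes_node_host.*cpu.total.system
        triggers:
          auto_heal_trigger:
            type: cloudify.policies.triggers.execute_workflow
            parameters:
              workflow: heal
              workflow_parameters:
                node_instance_id: { get_property: [SELF, node_id] }
                diagnose_value: { get_property: [SELF, diagnose] }
```

The `heal` workflow tears down the failed VM's subgraph and re-installs it, after which the replacement node rejoins the cluster.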

This is a starting point for my ideal hybrid cloud orchestration model — multiple orchestration technologies working side by side, across multiple environments. Each orchestrator is focused on what it does best.

The next step in this project is to make Cloudify more aware of Kubernetes' needs. This will require new modules in Kubernetes as well as new features in the existing Cloudify infrastructure plugins.

Cloudify will react to environmental needs using feedback from technologies like Kubernetes, moving toward infrastructure that largely manages itself.


Published at DZone with permission of Luther Trammell, DZone MVB.

