
Cloudify Meets Kubernetes—Container Management and Orchestration on Bare Metal


Learn how, for heterogeneous environments possibly including Kubernetes, Cloudify provides a solution that can orchestrate everything under one umbrella.


Cloudify lives at the extreme end of the "unopinionated" spectrum of application orchestration tools. Kubernetes (http://www.kubernetes.io), on the other hand, is a container orchestration system that is very opinionated. For those committed to a container-based deployment architecture, it's a great choice, especially for supporting microservices; a good reference on that topic can be found at http://martinfowler.com/articles/microservices.html.

For those with a heterogeneous environment, possibly including Kubernetes, Cloudify provides a solution that can orchestrate everything under one umbrella. (You can also check out the Cloudify team's excellent talk from OpenStack Vancouver on just this topic.)

This post reviews some recent work I've done to provide a means for Cloudify to manage Kubernetes clusters at a high level as part of a heterogeneous environment.

Integration Overview

Perhaps the most fundamental integration with Kubernetes would be to assume an already existing cluster, and simply connect to it and issue commands. While this is a valid use case, it was a little too basic for my taste. So the initial ambition was to use Cloudify to install a Kubernetes cluster on bare metal (or bare VMs in my case).

The approach was to create a Cloudify plugin that defined a couple of types that represented the basic components of a Kubernetes cluster: a master node and a minion node. The master host in Kubernetes is equivalent to the manager host in Cloudify. Minions manage the container lifecycle on hosts across the cluster. Since Google provides handy docker images for the various component services of Kubernetes, I used these and automated Google's instructions for setting up a multi-node cluster.
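To make the shape of this concrete, here is a minimal sketch of what such plugin type definitions could look like in Cloudify's DSL. All type and property names here are illustrative assumptions, not the plugin's actual identifiers:

```yaml
# Hypothetical sketch of the plugin's type definitions -- names are
# illustrative only. The master type carries the API endpoint details;
# the minion type represents a host that will run containers.
node_types:
  kubernetes.nodes.Master:
    derived_from: cloudify.nodes.Root
    properties:
      ip:
        description: Address the Kubernetes API server listens on
      port:
        description: Kubernetes API port
        default: 8080

  kubernetes.nodes.Minion:
    derived_from: cloudify.nodes.Root
    properties:
      ip:
        description: Address of the host this minion runs on
```

In this sketch the master/minion split mirrors the Cloudify manager/agent split described above: one type per role, with host addresses as properties since the targets are existing machines.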

So in a kind of odd twist, Cloudify is orchestrating docker containers to enable a system that orchestrates docker containers. The main difference between Cloudify's "normal" container orchestration and the approach described here is that individual containers aren't blueprint nodes; the blueprint nodes represent only the hosts. Once Kubernetes is up and running, it's on its own, at least in this initial bare metal version.

Running It

This initial version only supports Ubuntu (tested only on Ubuntu 14), and assumes that docker is preinstalled and running. It also assumes that Python and apt-get are installed, and that each host has internet access and passwordless ssh and sudo set up. Grab the source at http://github.com/dfilppi/cfy3/kub.

This post was written for release 1.4.

1. Edit the "barevm-blueprint" and put in your IP addresses and ssh info.
2. Install a Cloudify CLI and run:
   cfy local install-plugins -p barevm-blueprint.yaml
   cfy local init -p barevm-blueprint.yaml
   cfy local execute -w install
3. Enjoy your new Kubernetes cluster.

Plugin Design

The initial plugin design is quite simple: it defines only two node types, master and minion, and a relationship. Currently, since the initial support is for bare metal, IP addresses and ports are encoded directly into the blueprint. Since the Kubernetes cluster is treated as a separate entity, agentless orchestration is used via the Fabric plugin. This was deemed appropriate because Kubernetes provides its own container management and scaling capabilities. A nice side effect is that the blueprint can be run easily in local mode. It currently only stands up a Kubernetes cluster; it does not yet have logic to tear it down or run related workflows.
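As a sketch of how agentless orchestration via the Fabric plugin fits into such a blueprint, a node template along these lines could attach a setup script to a lifecycle operation. The node type, script path, and credentials below are assumptions for illustration, not the actual blueprint contents:

```yaml
# Hypothetical blueprint fragment -- type, script, and credential names
# are illustrative. The Fabric plugin runs the script over ssh on the
# target host, so no Cloudify agent needs to be installed there.
node_templates:
  master:
    type: kubernetes.nodes.Master
    properties:
      ip: 10.0.0.10
    interfaces:
      cloudify.interfaces.lifecycle:
        start:
          implementation: fabric.fabric_plugin.tasks.run_script
          inputs:
            script_path: scripts/configure_master.sh
            fabric_env:
              host_string: 10.0.0.10
              user: ubuntu
              key_filename: ~/.ssh/id_rsa
```

Because everything runs over ssh from wherever the workflow executes, a blueprint shaped like this works in local mode without a manager, which is the side effect noted above.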

Implementation Details

The current implementation utilizes the Cloudify Fabric (ssh) plugin, and essentially boils down to automating the steps outlined in Google's documentation for multi-node docker-based installation. The example blueprint is very concise and can be executed directly from the Cloudify local mode (without starting a Cloudify manager).

IPs are specified, since the cluster is being constructed on existing running hosts (running Ubuntu 14.04). Each of the node types has a Fabric task script that is run to set up its particular kind of host (master or minion/node). The custom relationship merely passes the IP address and port from the master to the minion for use in its setup. The ssh_username and key are passed along to Fabric so it can connect and run commands and move files.
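The custom relationship described above could be sketched roughly as follows; the interface wiring is standard Cloudify DSL, but the relationship, tasks-file, and task names are illustrative guesses rather than the plugin's real ones:

```yaml
# Hypothetical relationship fragment -- names are illustrative. The
# preconfigure operation runs before the minion's own setup, so the
# master's address and port are available when the minion is configured
# to point at the API server.
relationships:
  kubernetes.relationships.minion_connected_to_master:
    derived_from: cloudify.relationships.connected_to
    source_interfaces:
      cloudify.interfaces.relationship_lifecycle:
        preconfigure:
          implementation: fabric.fabric_plugin.tasks.run_task
          inputs:
            tasks_file: tasks/fabric_tasks.py
            task_name: pass_master_info
```

Deriving from cloudify.relationships.connected_to also gives the workflow engine the ordering guarantee it needs: the master node is installed before any minion that targets it.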

Limitations and Next Steps

This is a simple first step. Over time more features and use cases will be added.

  • More OS support (CoreOS particularly)
  • Custom workflows for performing lifecycle operations on the Kubernetes cluster (basically kubectl commands wrapped as workflows)
  • Full lifecycle support (uninstall)
  • Cloud blueprint with agents to take advantage of Cloudify's VM-level auto-healing

Kubernetes is a great system for managing containers and, to some degree, the applications in those containers. Cloudify can be used to manage Kubernetes in a blended environment of containers, virtualization, cloud platforms, and bare metal.



Published at DZone with permission of Cloudify Community, DZone MVB. See the original article here.

