OpenShift is Red Hat's container application platform. It lets developers quickly develop, host, and scale applications in a cloud environment, providing an integrated set of tools for managing container-based applications: everything from deployment to image registries, access control, and built-in metrics and monitoring services. OpenShift is also available in an open source distribution called OpenShift Origin.
With the launch of version 3, OpenShift is now built around Kubernetes, the cluster manager open sourced by Google. That's not surprising, since Red Hat has been one of the main contributors to the Kubernetes project since its initial release. As regular readers of this blog will know, we recently released a module for managing Kubernetes resources (like Pods, Replication Controllers and Services) using Puppet. In this blog post, I'll look at how you can use that module to power your OpenShift-based PaaS.
There are several ways of getting an OpenShift cluster up and running, depending on your requirements. You can opt for a managed service from Red Hat (running in either the public cloud or your own data center), or run OpenShift Origin yourself. The OpenShift getting started documentation contains plenty of helpful advice for administrators.
Assuming you don’t already have access to an installation of OpenShift v3, the fastest route I’ve found to trying it out is using the local Vagrant VM.
```
$ vagrant init thesteve0/openshift-origin
$ vagrant up
...
==> default: You can now access OpenShift console on: https://10.2.2.2:8443/console
==> default:
==> default: To use OpenShift CLI, run:
==> default:     $ vagrant ssh
==> default:     $ sudo -i
==> default:     $ oc status
==> default:     $ oc whoami
==> default:
==> default: If you have the oc client library on your host, you can also login from your host.
==> default:     $ oc login https://10.2.2.2:8443
```
Note that the above will download the very latest version of OpenShift, so it will take a little while. You can check that it worked by visiting the console URL in your browser.
With OpenShift running, it's useful to install the oc CLI tool locally so we can interact with the cluster. You can download the relevant package for your operating system from the GitHub releases page.
In order to use the CLI, you need to authenticate. For this we can use the oc login command mentioned in the vagrant up output.

```
oc login https://10.2.2.2:8443
```

This will prompt you for a username and password. For the purposes of this demo we require an admin user, so use the username admin and the password admin. With all that set up, you should be able to use the oc tool to interact with your OpenShift cluster.
```
$ oc version
oc v1.1.3
kubernetes v1.2.0-origin
$ oc get pods
NAME                      READY     STATUS    RESTARTS   AGE
docker-registry-1-yc9e8   1/1       Running   1          6d
router-1-mm44t            1/1       Running   4          6d
```
If you're familiar with Kubernetes, you'll probably recognize the last command as the output from kubectl. Note that oc acts as a proxy for kubectl, so the commands you expect to work, like get rc or delete pods, should all be present and correct.
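As a quick sketch of that parity, here are a few familiar kubectl-style commands run through oc. This assumes you are still logged in to the Vagrant cluster above; the pod name is taken from the earlier oc get pods output and will differ on your cluster.

```
$ oc get rc                        # list replication controllers
$ oc get services                  # list services
$ oc describe pod router-1-mm44t   # inspect a single pod
$ oc delete pod router-1-mm44t     # delete a pod; its controller recreates it
```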
Using Puppet With OpenShift
With OpenShift set up, let's look at using the Kubernetes module to create and manage an application. We'll use the canonical guestbook Kubernetes example. For a more detailed look at the Puppet code for this example, see the detailed walkthrough we published earlier.
First, install the Kubernetes Puppet module as per the instructions in the module's README. Once that's done, remember to copy the configuration file generated by oc login above into the Puppet config directory. The exact directory will vary depending on your installation of Puppet, but you can find the correct directory with the following command:

```
puppet config print confdir
```
You’ll most likely be running one of the following two commands to copy the configuration file into the right place.
```
cp ~/.kube/config ~/.puppetlabs/etc/puppet/kubernetes.conf
cp ~/.kube/config /etc/puppetlabs/puppet/kubernetes.conf
```
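If you'd rather not hard-code the path, a small sketch that asks Puppet for its confdir directly (assuming puppet is on your PATH and you have already run oc login):

```
$ cp ~/.kube/config "$(puppet config print confdir)/kubernetes.conf"
```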
With that out of the way, let’s download the Puppet code for the example.
```
git clone git@github.com:garethr/garethr-kubernetes.git
cd garethr-kubernetes/examples
```
Take a look at the guestbook.pp file. You should see the Puppet Kubernetes types used to describe the various pods, services and controllers which make up the guestbook application. With that in place, let's use Puppet to run the example. For the purposes of this demo we'll use puppet apply, but you could also use puppet agent so that the resources continue to be managed over time.
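To give a flavour of what guestbook.pp contains, here is a hedged sketch of one controller/service pair using the module's types. The nested metadata/spec hashes mirror the underlying Kubernetes API objects; the parameter names follow the module's README, while the image, port and label values below are illustrative rather than copied from the example.

```puppet
# Sketch only: a single-replica Redis master and a service in front of it.
kubernetes_replication_controller { 'redis-master':
  ensure => present,
  spec   => {
    'replicas' => 1,
    'template' => {
      'metadata' => { 'labels' => { 'app' => 'redis-master' } },
      'spec'     => {
        'containers' => [{
          'name'  => 'redis-master',
          'image' => 'redis',
          'ports' => [{ 'containerPort' => 6379 }],
        }],
      },
    },
  },
}

kubernetes_service { 'redis-master':
  ensure => present,
  spec   => {
    # Route traffic on port 6379 to pods matching the label above.
    'ports'    => [{ 'port' => 6379, 'targetPort' => 6379 }],
    'selector' => { 'app' => 'redis-master' },
  },
}
```

Because these are ordinary Puppet resources, the usual ensure => absent semantics apply if you later want to tear the application down.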
```
$ puppet apply guestbook.pp --test
Info: Loading facts
Notice: Compiled catalog for macbook.local in environment production in 0.72 seconds
Info: Applying configuration version '1458811754'
Info: Checking if redis-master exists
Info: Creating kubernetes_replication_controller redis-master
Notice: /Stage[main]/Main/Kubernetes_replication_controller[redis-master]/ensure: created
Info: Checking if redis-master exists
Info: Creating kubernetes_service redis-master
Notice: /Stage[main]/Main/Kubernetes_service[redis-master]/ensure: created
Info: Checking if redis-slave exists
Info: Creating kubernetes_replication_controller redis-slave
Notice: /Stage[main]/Main/Kubernetes_replication_controller[redis-slave]/ensure: created
Info: Checking if redis-slave exists
Info: Creating kubernetes_service redis-slave
Notice: /Stage[main]/Main/Kubernetes_service[redis-slave]/ensure: created
Info: Checking if frontend exists
Info: Creating kubernetes_replication_controller frontend
Notice: /Stage[main]/Main/Kubernetes_replication_controller[frontend]/ensure: created
Info: Checking if frontend exists
Info: Creating kubernetes_service frontend
Notice: /Stage[main]/Main/Kubernetes_service[frontend]/ensure: created
Notice: Applied catalog in 3.39 seconds
```
According to the Puppet output, the various pods, services and controllers have been created. We can use the oc tool to take a closer look.
```
$ oc get pods
NAME                      READY     STATUS    RESTARTS   AGE
docker-registry-1-yc9e8   1/1       Running   1          6d
frontend-dzb79            1/1       Running   0          1m
frontend-giv96            1/1       Running   0          1m
frontend-l2z9x            1/1       Running   0          1m
redis-master-x3hl7        1/1       Running   0          1m
redis-slave-blx0y         1/1       Running   0          1m
redis-slave-l6fau         1/1       Running   0          1m
router-1-mm44t            1/1       Running   4          6d
```
Exposing the Service Using the OpenShift Console
The guestbook is now running on OpenShift, but how do we access it externally? For that, let's take a look at the rather nice OpenShift console. If you're using the Vagrant setup above, the console should be available at https://10.2.2.2:8443/. Log in again with the username admin and the password admin. Select the default project, and you should be presented with an overview showing the various resources.
Find the frontend service in the overview list. It should indicate that it is associated with the frontend controller and has three pods running. Look for the Create Route link to the right of the service name. Click Create Route and follow the instructions. You'll see that defaults are already set; leave them as they are and hit Create. This should return you to the overview page.
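If you prefer the command line, the same route can be created with oc expose. A sketch, assuming you are still logged in as admin against the Vagrant cluster; the generated hostname on your cluster may differ:

```
$ oc expose service frontend
$ oc get routes
```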
The frontend service should now have a URL associated with it: http://frontend-default.apps.10.2.2.2.xip.io/. (xip.io is a magic domain name that provides a wildcard DNS for any IP address. This address is pointing back to your OpenShift cluster.)
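To see why no DNS setup is needed, note that the target IP address is embedded in the hostname itself. A quick sketch that extracts it with grep, using the hostname from above; this runs anywhere, no cluster required:

```shell
# xip.io resolves <anything>.<ip>.xip.io to <ip>, so the address can be
# read straight out of the hostname without any DNS lookup.
host='frontend-default.apps.10.2.2.2.xip.io'
echo "$host" | grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}'
# prints 10.2.2.2
```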
Accessing that URL should load the guestbook application we launched using Puppet.
At the excellent KubeCon conference a few weeks ago in London, I spoke about the potential for higher-level interfaces for Kubernetes. A big part of that potential comes from compatibility between different Kubernetes-based services, and from these platforms building higher-level interfaces without limiting access to lower-level ones. OpenShift is a great example of this in practice: as a result of that design, you can use Puppet to manage your Kubernetes resources running in OpenShift.
Gareth Rushgrove is a senior software engineer at Puppet.