
Docker and Cloud Foundry


In this post I will give an overview of where you might find Docker integration within the Cloud Foundry ecosystem. We will look at Decker, Stackato, and Diego.


Last week I was lucky to make it to the London PaaS User Group meetup. There I met Colin Humphreys of CloudCredo and saw him give a demo of Decker.

Decker is a prototype that Colin has built to use Docker as a backend for Cloud Foundry. Decker implements the DEA API, so it is a drop-in component for Cloud Foundry. This is similar to how other third-party DEAs, such as Iron Foundry, have been implemented.

Cloud Foundry's HTTP API plus NATS message bus protocol enables anyone to write a plug-in component to stage and run application instances. This also means you can run applications on top of non-Linux platforms, such as Windows, while still running Cloud Foundry on Linux - just as long as the third-party DEA talks to the rest of the system using the Cloud Foundry DEA protocol. This is what Colin has done with Decker.

When you wish to deploy an application to Docker, you specify --stack decker with your cf push command. The Decker DEA advertises that it supports this stack, so the Cloud Controller will direct the application instances there.
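That advertising happens over the NATS bus. A DEA advertisement payload of that era looked roughly like the following; the field names follow the dea.advertise message, but all values here are illustrative, not taken from Decker:

```json
{
  "id": "decker-dea-0",
  "stacks": ["decker"],
  "available_memory": 4096,
  "app_id_to_count": {}
}
```

The Cloud Controller matches the requested stack against the "stacks" list when placing instances.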

Decker currently works with Dockerfiles, rather than with built Docker images. A Dockerfile is a single file that lists all the commands that are run to build a Docker image.
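For illustration, a minimal Dockerfile for a trivial application might look like this; the base image, package, and file names are hypothetical examples, not taken from Decker's documentation:

```dockerfile
# Start from a base image
FROM ubuntu:14.04

# Install the application's runtime dependency
RUN apt-get update && apt-get install -y python

# Add the application source and declare how to run it
ADD . /app
CMD ["python", "/app/server.py"]
```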

When cf push is run, the Dockerfile is uploaded to the Cloud Controller. The Cloud Controller selects Colin's Decker DEA for deployment because --stack decker was specified.

As Colin admits, the staging process of the Decker prototype is a bit of a cheat right now and does not actually do much in the way of staging. Currently, Decker simply persists the Dockerfile in the droplet and leaves the building of the Docker image until runtime. For each instance of the droplet that is deployed, Decker will extract the Dockerfile from the droplet, build the Docker image, and then run the built image as a new container.
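The flow just described can be sketched in shell. The droplet is modeled here as a plain tarball, and the Docker daemon steps are shown as comments since they need a running daemon; all file and image names are hypothetical:

```shell
# Staging (the "cheat"): just pack the Dockerfile into the droplet.
mkdir -p app
printf 'FROM ubuntu:14.04\nCMD ["sleep", "60"]\n' > app/Dockerfile
tar -czf droplet.tgz -C app Dockerfile

# Runtime, repeated for every instance: extract the Dockerfile from
# the droplet, then build and run the image from scratch.
mkdir -p instance
tar -xzf droplet.tgz -C instance
# docker build -t my-app instance/
# docker run my-app
ls instance/Dockerfile
```

This makes the inefficiency concrete: every instance repeats the build step that a normal staging phase would do once.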

Obviously, building new Docker images at runtime is inefficient and may lead to "snowflake" instances if any of the dependencies change or anything unusual happens at image build time. As Colin mentioned, this is a minimum viable product, and these issues will be addressed in future iterations.

Security is currently a concern with allowing Cloud Foundry users to deploy any Docker image of their choosing. When you are allowed to build your own images, it is quite easy to let the user of that container run as root. Linux user namespacing is based on the user IDs of the host system: users inside the container can be given a unique ID that does not exist outside the container. Unfortunately, the root user has an ID of 0 both inside and outside of the container, so it is the same user in both places. If a user becomes root inside the container, there is potential for them to break out of the container and gain access to the host system.
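A quick way to see why this matters: user IDs are just numbers resolved against the host kernel, and root is always 0. This runs on any Linux host and does not require Docker:

```shell
# Root's numeric user ID on the host is 0. Without user ID
# remapping, a root process inside a container presents this same
# UID 0 to the host kernel.
id -u root
# prints 0
```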


Last December, ActiveState released Stackato 3.0, in which we replaced our own LXC implementation with Docker. We had several years' experience of working with LXC and cgroups and had analyzed dotCloud's open-sourced implementation, called Docker. We liked the way they had implemented it and saw Docker's obvious potential in the future of PaaS, but we also knew the line we needed to draw around our integration. Docker was young and not recommended for production, but its basic provisioning of LXC containers was solid.

Therefore, in Stackato 3.0 the integration with Docker was minimal. We replaced Stackato's singular LXC template with a single Docker base image. The provisioned containers did not have sudo access (by default) and there was no way for the application developer to specify how the Docker image was built in terms of Docker functionality. As far as Stackato was concerned, fence (Stackato's LXC container manager) was still just provisioning basic LXC containers. The difference was that the LXC provisioning now had ten thousand eyeballs on it, and ActiveState had laid the groundwork for Stackato to be the enterprise-grade Docker PaaS when Docker reaches maturity.

Application staging in Stackato 3.0, similar to Cloud Foundry v2, became buildpack-centric. To the end user this may not have been entirely obvious, since we now have built-in "legacy buildpacks" which replicate the behavior of Stackato 2.10's (and prior) resident support for runtimes and frameworks.

When Stackato's container management daemon, fence, provisions the LXC container via the base Docker image, the LXC container is used to build up the stack using buildpacks and staging hooks. Droplets are then extracted from the LXC container in the same way as it was done in pre-3.0 Stackato.

It is possible for a Stackato administrator to change the base Docker image that fence uses with a few kato commands.

```shell
$ kato config get fence docker/image
stackato/stack/alsek
$ kato config set fence docker/image exampleco/newimg
exampleco/newimg
```

We still have not gone "fully Docker", for good reason. It is still not safe for us to let users deploy Docker images with which they may be able to gain root access to the container, and subsequently to the host system. This has been solved at the LXC/cgroups level, and as we speak I am sure somebody is working on the Docker implementation, but it is not available yet. We are also maintaining Ubuntu LTS and supported kernels, so we are waiting for all the stars to align.


Diego is a new component of Cloud Foundry which aims to re-architect the way that staging and deployment are managed. It essentially replaces the DEA with something that should be more extensible across a variety of runtime environments.

For a long time Cloud Foundry has used Warden for its container management. Similar to Docker, or Stackato's original LXC implementation, Warden is based on LXC and cgroups. With Diego, Warden becomes Garden.

So where does Docker fit into Diego?

The integration is minimal and you will not find Docker in a deployed Cloud Foundry cluster. Docker is currently only used to generate the LXC template that Garden then uses. Docker is simply a build-time tool for the Cloud Foundry release.

It is possible, though, that Diego could open up the way to providing an optional Docker backend.

Diego has three components: the Stager, the Smelter, and the Executor. First, the Stager, which runs on Linux, sets up the job for smelting. The Smelter, running on the target platform, creates the application droplet. It is then the job of the Executor, which also runs on the target platform, to run the droplet as a running application.

The Smelter and Executor, which run on the target platform, provide a way to support any backend, whether that be Linux, Windows, or something else. The Linux backend is provided by Garden (formerly Warden). It should be possible to have another, albeit redundant, Linux backend, such as Docker.


There is great potential for Docker and Cloud Foundry collaboration. Integration has been proven by three different projects, with Decker, Stackato, and Diego each taking a different approach to the integration points. Obviously Diego's integration currently lives outside of a built Cloud Foundry release, but there is further potential for providing Docker integration via the Smelter and Executor.

Colin Humphreys made an interesting point at the London PaaS User Group last week. He said that he thinks the current model of PaaS should be split so that we have another layer, which he calls CaaS, or Containers-as-a-Service. This idea is the motivation behind Decker.

How closely integrated should a PaaS be with its containerization implementation? Should Cloud Foundry users be able to easily plug in and out different container managers, such as switching out Warden for Docker? What do we lose from the overhead of decoupling these? What do we gain?



Published at DZone with permission of

