There are a lot of exciting tools in the infrastructure and virtualization space that have emerged in the last couple of years. Ansible and Docker are probably two of the most exciting ones in my opinion. While I’ve already used Ansible extensively, I’ve only started to use Docker, so that’s the big caveat to bear in mind with regard to the contents of this post.
What’s Docker and why should I care?
Docker describes itself as a “container platform”, which at first glance can easily be confused with a VM. Wikipedia describes Docker containers in the following way:
Docker uses resource isolation features of the Linux kernel such as cgroups and kernel namespaces to allow independent "containers" to run within a single Linux instance, avoiding the overhead of starting virtual machines.
My take on where Docker is useful and not useful:
1. As a “process”/application container
This may be especially useful where you are running applications or processes that should be isolated for security purposes, but can share powerful hardware without the overhead of a full VM, and where potential CPU or memory contention between containers is not an issue. On underutilised hardware this is potentially even desirable: better than having a bunch of tin standing around mostly idling.
Though the real power in Docker’s “container” and “image” concepts is that they make it trivial to share an application container, with all its moving parts preconfigured, as code (see Dockerfiles). Dealing with infrastructure as code and sharing it effectively between teams could be really disruptive when it comes to breaking down barriers to cross-team collaboration in larger organizations.
2. As a disposable sandboxed environment to run user-initiated, potentially dangerous processes or data
This is actually where Docker comes in for my day-to-day use on our Code Qualified service for automated programmer testing, which we can use as an example: when users submit their solutions to programming problems, the code has to be run, analysed and checked for correctness. However, a malicious user could submit code to Code Qualified’s servers designed to execute just about any nasty, destructive operation. This is where Docker comes in: we run submitted code in temporary Docker containers, which means that at worst an attacker destroys a temporary container that is only intended to live for a few minutes, and scores a big zero on the test they have “solved”.
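As a rough sketch of this pattern (the image name, paths and resource limits here are hypothetical illustrations, not Code Qualified’s actual setup), each submission could be run along these lines:

```shell
# Run untrusted code in a throwaway container (hypothetical image and paths):
# --rm        removes the container as soon as the process exits
# --net=none  cuts off network access from inside the container
# -m          caps memory so a hostile submission can't starve the host
docker run --rm --net=none -m 128m \
  -v /tmp/submission-42:/code:ro \
  sandbox-image timeout 60 run-tests /code
```

Even if the submitted code wipes the container’s filesystem, the container is discarded moments later and the host is untouched.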
(Image: Docker as an app container)
3. Do NOT try to use Docker as a VM replacement or to run “entire systems”
Finally, we get to what not to do with Docker: don’t try to run it as a full OS/system of sorts; that is not what it is meant for. As Phusion have pointed out, the base Docker images available lack a number of important system settings and services, so trying to run Docker containers as a substitute for a “real” system/VM can be fraught with issues.
I can run commands in my Dockerfile, why use Ansible?
You can run arbitrary commands in a Dockerfile that builds a Docker image, among them apt-get if you are building an image based on Ubuntu. So why is Ansible relevant?
1. Bootstrapping Docker containers
I knocked together a very simple example on GitHub that uses Ansible to set up a Vagrant VM, which then runs Docker containers whose images are built in part by… Ansible (the Vagrant part isn’t relevant if you’re on Linux; I used it to set up a VM to run Docker on, as OS X doesn’t support Docker natively).
While it’s entirely possible to script everything in the Dockerfile with the RUN directive, I find Ansible scripts useful for a couple of reasons:
- Ansible scripts are portable. I can test them on a Vagrant VM, on an AWS EC2 instance, on a “real” Linux machine or on Docker.
- Ansible scripts are idempotent: if you run them again later against a container/VM/machine, they can act as a test that the box is in fact properly set up (and fix anything that isn’t).
- Ansible scripts can provision multiple hosts concurrently: this is what your Ansible inventory-file is for.
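To illustrate the portability point: the same playbook can be pointed at different targets simply by switching the inventory or connection type (the inventory file names below are hypothetical):

```shell
# The playbook itself stays unchanged across all three targets.
ansible-playbook provision.yml -i vagrant-inventory   # against a Vagrant VM
ansible-playbook provision.yml -i ec2-inventory       # against EC2 instances
ansible-playbook provision.yml -c local               # locally, e.g. inside a Docker build
```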
To run Ansible while building a Docker image, you can create the following stub inventory file:
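A minimal version that simply names the local machine would look something like this (since the Dockerfile below runs ansible-playbook with the -c local flag, nothing more is needed):

```
localhost
```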
Then make your Dockerfile look something like this (where you have an Ansible script called “provision.yml” in the same directory, in this case likely installing nginx and setting it up correctly):
```
FROM ubuntu:14.04.1
MAINTAINER Wille Faler "firstname.lastname@example.org"

RUN apt-get update
RUN apt-get install -y software-properties-common
RUN apt-add-repository ppa:ansible/ansible
RUN apt-get update
RUN apt-get install -y ansible

ADD inventory-file /etc/ansible/hosts
ADD provision.yml provision.yml
RUN ansible-playbook provision.yml -c local

RUN echo "daemon off;" >> /etc/nginx/nginx.conf
EXPOSE 80
CMD ["nginx"]
```
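The contents of provision.yml aren’t shown here, but given that the Dockerfile expects nginx to be installed, a minimal playbook along these lines would fit (the exact tasks are my assumption, not the actual script):

```
---
- hosts: localhost
  tasks:
    - name: install nginx
      apt: name=nginx state=present update_cache=yes
```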
2. Setting up and coordinating Docker hosts
Ansible has a Docker module, which you can use to build, run, start, stop, link and coordinate your Docker containers and images in various ways. This is where Ansible really shines, and it will almost always be preferable to handcrafted, fragile shell-scripts.
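As a sketch of what that can look like (the host group and image name are hypothetical, and the module’s exact parameters vary between Ansible versions), a playbook task using the docker module might be:

```
- hosts: docker-hosts
  tasks:
    - name: ensure the web container is running
      docker:
        image: my-nginx
        name: web
        ports:
          - "80:80"
        state: running
```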
In fairness, I haven’t really used the Docker module that much in anger yet, though I suspect I will use it extensively eventually. These are the sort of tasks where Ansible is really brilliant even without Docker in the picture, so I wouldn’t expect that to change one bit with Docker. In fact I suspect Ansible will become even more integral to running infrastructure when you have multiple hosts running multiple Docker containers.
What else is there?
I have only really scratched the surface here, but this is a brief summary of my understanding so far of how tools like Ansible and Docker fit into the infrastructure ecosystem. I expect my understanding and views to evolve over time as I get deeper into it.
I haven’t even started looking at things like CoreOS, which together with etcd and fleet could prove to be an interesting and potentially valuable building block, but I’ll leave that for another day, when I’ve had the time to explore it more deeply.