Making Docker Apps for Stackato
Learn how to create Docker-ready applications that run in Stackato, including the basics of getting started with Docker, pushing Docker images, and connecting services.
With the Stackato 3.6 release, we introduced the ability to deploy Docker images to Stackato. This provides an alternative to the usual method of application deployment used by Cloud Foundry and Heroku, which is driven by buildpacks.
What we haven't covered here before is how to create "Docker-ized" applications that will run in Stackato and take advantage of the provisioned services. We'll look into this here by taking one of our favorite Stackato app samples and converting it into a Docker image.
Starting with a New "Stack"
One of the great things about Docker is that you can run different flavors of Linux without having to worry too much about the OS of the host system. Stackato is based on Ubuntu 12.04 LTS, as is the stackato/stack-alsek base image used for staged applications, but there's nothing stopping us from running Docker images based on a different distro.
I've been experimenting a bit with Debian, CentOS, and Alpine Linux images, but for this we'll start with the opensuse base image.
With Docker, you can compartmentalize system configuration into separate Docker layers, either by breaking up configuration steps into separate RUN directives in a single Dockerfile, or by creating separate images and chaining them together with the FROM directive at the top of each separate Dockerfile. We'll do a bit of both with this example.
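As a sketch of the chaining approach (the image and file names here are illustrative, not the ones used later in this article), the OS setup can live in one Dockerfile and the next image can build on it with FROM:

```dockerfile
# Dockerfile #1: base OS layer, built and tagged as e.g. "example/base-os"
FROM opensuse:13.2
RUN zypper -n up

# --- Dockerfile #2, in its own directory, chains onto the first image ---
# FROM example/base-os
# RUN zypper -n install python python-pip
```

Each image in the chain can then be rebuilt, tested, and pushed on its own.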
Updating and Installing OS Packages
I'm a bit of a stickler for keeping my OS up-to-date, so the first thing I do in my Dockerfile is run an update using the OS's package manager. On openSUSE, this is done with zypper. After some experimentation (the -i option for docker run is crucial for this), I came up with a Dockerfile that looks like this:
FROM opensuse:13.2
MAINTAINER Troy Topnik <firstname.lastname@example.org>
RUN \
  zypper -n up \
  && zypper -n install python python-pip ca-certificates-mozilla \
     vim git curl wget
The openSUSE base image is intentionally minimal (more so than most, but that's not necessarily a bad thing). As you can see, I took the liberty of installing:
- python: pretty crucial for a Python web app
- python-pip: for installing my Python dependencies
- ca-certificates-mozilla: needed for pip to use SSL
- vim: because we'll need an editor while experimenting
- git: for cloning repos on the fly or pip installation from git sources
- curl: for general HTTP testing
- wget: for miscellaneous fetching of potentially useful things
With these tools installed, I have most of the things I might need to script the installation of a Python application or try things out interactively in a locally running Docker container. It also gives me some tools I might need for troubleshooting an app running in Stackato (through a stackato ssh session).
Notice that I chained a couple of commands together in a RUN statement. Every time I use a RUN command in a Dockerfile, I get another image layer, and there's a hard limit of 127 layers in the current versions of Docker (the AUFS default limit). Bundling related commands together reduces the number of layers we create, and helps ensure we don't hit the ceiling.
If we've got lots of headroom, there might be some value in having the update and package install steps in separate RUN commands to take advantage of Docker's ability to use cached layers when rebuilding, which can speed up the build a lot. There's definitely more than one way to do it, but guidelines for the best way to stack Docker layers are covered extensively elsewhere.
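For instance (a sketch of the split variant, not the Dockerfile I actually used), separate RUN commands let Docker reuse the cached update layer when only the package list changes:

```dockerfile
FROM opensuse:13.2
# Layer 1: reused from cache on rebuilds as long as this line is unchanged
RUN zypper -n up
# Layer 2: editing the package list only invalidates the cache from here on
RUN zypper -n install python python-pip ca-certificates-mozilla
```

The trade-off is that a cached update layer grows stale over time, which is one reason to build with --no-cache when freshness matters.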
Let's build what we have so far so we can get on with the next Dockerfile, which packages our app.
Building, Testing, and Pushing Docker Images
I decided to name my new "base+1" image layer 'troytop/opensuse-python'. I built it by running this command in an empty directory containing my Dockerfile:
$ sudo docker build --no-cache -t troytop/opensuse-python .
I use --no-cache in this case because I want to be sure the zypper update runs each time I build, rather than relying on a cached image layer from previous builds. When the build completes, I usually have a look around in an interactive shell to verify that things are working as I expect:
$ sudo docker run -i -t troytop/opensuse-python /bin/sh
sh-4.2# pip -V
pip 1.5 from /usr/lib/python2.7/site-packages (python 2.7)
sh-4.2# exit
If I'm going to be sharing this with anyone else, I should push it to the Docker Hub or some other registry server:
$ sudo docker push troytop/opensuse-python
The Next Layer: bottle-currency-suse
The Stackato-Apps/bottle-currency app is one of my favorite demo apps for Stackato. Billy Tat has already made a Docker version of this based on ubuntu:12.04, so I can adapt the Dockerfile he created to work with my opensuse-python image.
Here's a breakdown:
FROM troytop/opensuse-python
MAINTAINER Troy Topnik <email@example.com>
Nothing surprising here. Using a different container as a starting point and identifying myself as the maintainer.
RUN useradd -d /home/www -m -s /bin/bash www
USER www
WORKDIR /home/www
Here I create a new user called 'www' and set the active user and the working directory. If we don't *have* to run our app as root, we shouldn't. A lot of Docker web apps don't bother with this step (looking at you, Billy!), and in the context of Docker this might be a bit paranoid, but I think we should apply the same best practices here as we would when running the server on a VM or bare metal.
COPY . .
My Dockerfile will exist in the base directory of the bottle-currency sources, so this command will recursively copy the contents of the current directory into the WORKDIR defined above.
RUN pip install --user -r requirements.txt
This installs the modules required by the application. With a staged application, this part would be handled by the buildpack.
Our app defaults to running on port 8080 if it doesn't see a $PORT environment variable, so we EXPOSE that port in the container with this command so that it's routed to an external port on the DEA host:

EXPOSE 8080

The Stackato router will in turn be able to route to the app by forwarding external requests to that hostname:port.
When running a staged app in Stackato, the application code or the Procfile will have to reference the $PORT variable that Stackato provides in the container. With Docker Apps we just have to make sure there's only one port exposed.
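In application code, honoring $PORT with a fallback is typically a one-liner. A minimal sketch (the listen_port helper is my own name, not part of the sample app):

```python
import os

def listen_port(default=8080):
    """Use the $PORT that Stackato injects for staged apps; a Docker
    app can fall back to the single EXPOSEd port instead."""
    return int(os.environ.get("PORT", default))
```

With Bottle, this would feed straight into something like run(host="0.0.0.0", port=listen_port()).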
CMD python wsgi.py
Finally, we specify the command that starts the web process. In the buildpack world, this would be specified in a Procfile.
With the Dockerfile ready, we repeat the same steps we did with the opensuse-python image to build, test, and push the image. The testing step is slightly different as we probably want to look at this with a browser. To do this we forward the container's port 8080 to port 8000 on the host:
$ sudo docker run -p 8000:8080 -t troytop/bottle-currency-suse
Assuming we're doing this all on localhost, connecting to http://localhost:8000 with a browser will show the running web app.
It won't work just yet. If you're trying this yourself, you'll see "An error occurred while contacting the server" and the conversion app won't actually function. That's because the app is looking for a Redis data service. There are a few ways to link Docker containers to other Docker containers providing databases, but we can handle this once it's deployed to Stackato. For now, we'll push our new image to the Docker Hub:
$ sudo docker push troytop/bottle-currency-suse
Attaching to Services
The bottle-currency app looks for a Redis data service exposed by a REDIS_URL environment variable. Stackato injects environment variables (VCAP_SERVICES, REDIS_URL, etc.) into the Docker containers to provide connection information for any services that have been bound to the app. This is the 12-factor way of doing things, and it's the way staged applications have always worked in Cloud Foundry.
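As an example of consuming such a variable (Python 3 shown, and the function name is mine), a redis:// URL can be unpacked with the standard library's URL parser:

```python
from urllib.parse import urlparse

def redis_params(url):
    """Break a redis://[:password@]host:port URL, as provided in
    REDIS_URL, into keyword arguments for a Redis client."""
    parts = urlparse(url)
    return {
        "host": parts.hostname,
        "port": parts.port or 6379,  # default Redis port if none given
        "password": parts.password,
    }
```

The resulting dict can be passed directly to most Redis client constructors.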
So, to deploy our app to Stackato:
$ stackato push -n --docker-image troytop/bottle-currency-suse --mem 128 --as docker-currency …
This fetches the docker image from the Hub and deploys it (without a database yet). Next, we create a new Redis data service and bind it to the app:
$ stackato create-service --plan free redis currency-data docker-currency …
This step will recreate the container with the necessary REDIS_URL variable in it.
With your own code, you can find a number of ways to parse these environment variables and connect to the data services, but if you're trying to deploy something that someone *else* has already packaged as a Docker image, you might have to find creative ways to reformat the credentials provided by Stackato into variables that the app understands. The way to do this is to add another layer on top of the third-party image which parses the Stackato-provided environment variables, reformats them in a way the app will understand, and reiterates the CMD / ENTRYPOINT line from the original image.
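As a sketch of that glue layer (REDIS_HOST and REDIS_PORT are hypothetical names standing in for whatever the third-party app expects), the translation can be plain POSIX shell:

```shell
# parse_redis_url: derive REDIS_HOST and REDIS_PORT from the
# redis://[:password@]host:port URL Stackato puts in REDIS_URL.
parse_redis_url() {
    hostport="${REDIS_URL#redis://}"
    hostport="${hostport#*@}"        # drop any :password@ credentials
    export REDIS_HOST="${hostport%%:*}"
    export REDIS_PORT="${hostport##*:}"
}
```

An ENTRYPOINT script in the extra layer would call this and then exec the original image's command.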
But What Do I Know...
I may have done this all wrong. The more I read about optimizing Docker images, managing change in image layers, and writing good Dockerfiles, the more I realize that I have a lot to learn. The best practices for packaging applications for containers are still emerging, but I can already see huge benefits to the declarative, linear, compartmentalized approach to packaging and configuration provided by Docker.
Running Docker images in the context of Platform-as-a-Service is brand new territory, but it's ready for exploration.
Published at DZone with permission of Troy Topnik, DZone MVB.