
Parameterized Docker Containers

I’ve been hacking a lot on Docker at Zapier lately, and one thing I found somewhat cumbersome is that it seemed difficult to customize published containers without extending them and modifying files inside them, or resorting to some other mechanism. What I’ve come to discover is that you can publish containers that end users can customize without modification by applying one of the most important concepts from 12-factor application development: store configuration in the environment.
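As a minimal sketch of the idea (the variable names `APP_PORT` and `APP_ENV` here are hypothetical, not from the registry project): a startup script reads every tunable from the environment and falls back to a sane default, so the same image runs unconfigured or fully customized.

```shell
#!/bin/bash
# Every tunable comes from the environment; defaults keep the
# container usable when nothing is set at `docker run` time.
APP_PORT="${APP_PORT:-8080}"
APP_ENV="${APP_ENV:-dev}"
echo "starting on port $APP_PORT in $APP_ENV mode"
```

Running the container with `-e APP_PORT=9000` would then override the default without touching a single file in the image.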

Let’s look at a really good example of this: the docker-registry application, used to host Docker images internally. When Docker first came out I whipped up a Puppet manifest to configure this bad boy, but then realized that the right way would be to run it as a container (one was already published). Unfortunately, the Dockerfile as it stood didn’t fit my needs:

FROM ubuntu

RUN sed 's/main$/main universe/' -i /etc/apt/sources.list && apt-get update
RUN apt-get install -y git-core python-pip build-essential python-dev libevent1-dev
ADD . /docker-registry

RUN cd /docker-registry && pip install -r requirements.txt
RUN cp --no-clobber /docker-registry/config_sample.yml /docker-registry/config.yml

EXPOSE 5000

CMD cd /docker-registry && gunicorn --access-logfile - --debug --max-requests 100 --graceful-timeout 3600 -t 3600 -k gevent -b 0.0.0.0:5000 -w 2 wsgi:application

The gunicorn setup was hard-coded and, to make matters worse, the configuration defaulted strictly to the development settings, which stored images in /tmp rather than the recommended production setting of storing them in S3 (where I wanted them).

The solution was easy: create a couple of bash scripts that read environment variables set when calling `docker run`.

First, we generate the configuration file:

#!/bin/bash

# Only rewrite config.yml for production; the sample config's
# development defaults are fine otherwise.
if [ "$SETTINGS_FLAVOR" = "prod" ] ; then
    config=$(<config.yml)
    config=${config//s3_access_key: REPLACEME/s3_access_key: $AWS_ACCESS_KEY_ID}
    config=${config//s3_secret_key: REPLACEME/s3_secret_key: $AWS_SECRET_KEY}
    config=${config//s3_bucket: REPLACEME/s3_bucket: $S3_BUCKET}
    config=${config//secret_key: REPLACEME/secret_key: $WORKER_SECRET_KEY}
    printf '%s\n' "$config" > config.yml
fi
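The script leans on bash's `${var//pattern/replacement}` expansion, which replaces every occurrence of the pattern in the variable's value. A self-contained demo of that mechanism (the bucket name here is a made-up value, not from the article):

```shell
#!/bin/bash
# Replace a REPLACEME placeholder with a value from the environment,
# the same way the config-generation script does.
S3_BUCKET="my-registry-bucket"   # hypothetical value
config="s3_bucket: REPLACEME"
config=${config//s3_bucket: REPLACEME/s3_bucket: $S3_BUCKET}
echo "$config"
```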

And wrap the gunicorn run call:

#!/bin/bash
if [[ -z "$GUNICORN_WORKERS" ]] ; then
    GUNICORN_WORKERS=4
fi

if [[ -z "$REGISTRY_PORT" ]] ; then
    REGISTRY_PORT=5000
fi


cd "$(dirname "$0")"
gunicorn --access-logfile - --debug --max-requests 100 --graceful-timeout 3600 -t 3600 -k gevent -b "0.0.0.0:$REGISTRY_PORT" -w "$GUNICORN_WORKERS" wsgi:application
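The `-z` checks above work, but bash's `${VAR:-default}` parameter expansion does the same defaulting in one line per variable, which reads a little tighter as the list of knobs grows:

```shell
#!/bin/bash
# Fall back to defaults only when the variable is unset or empty;
# a `docker run -e REGISTRY_PORT=...` value wins.
GUNICORN_WORKERS="${GUNICORN_WORKERS:-4}"
REGISTRY_PORT="${REGISTRY_PORT:-5000}"
echo "workers=$GUNICORN_WORKERS port=$REGISTRY_PORT"
```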

Finally, the Dockerfile is modified to call these scripts from CMD, so they run when the container starts:

FROM ubuntu

RUN sed -i 's/main$/main universe/' /etc/apt/sources.list && apt-get update
RUN apt-get install -y git-core python-pip build-essential python-dev libevent1-dev
ADD . /docker-registry

RUN cd /docker-registry && pip install -r requirements.txt
RUN cp --no-clobber /docker-registry/config_sample.yml /docker-registry/config.yml
RUN sed -i "s/ secret_key: REPLACEME/ secret_key: $(< /dev/urandom tr -dc A-Za-z0-9 | head -c 32)/" /docker-registry/config.yml
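That `sed` line bakes a random secret into the image at build time so the sample config's `REPLACEME` never ships as-is. The `urandom` trick on its own, for reference:

```shell
#!/bin/bash
# Generate a 32-character alphanumeric secret: strip /dev/urandom
# down to [A-Za-z0-9] and take the first 32 characters.
secret="$(< /dev/urandom tr -dc 'A-Za-z0-9' | head -c 32)"
echo "$secret"
```

Note this runs at `docker build` time, so every container from the same image shares the secret; the `WORKER_SECRET_KEY` environment variable still overrides it per-run in the prod flavor.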

EXPOSE 5000

CMD cd /docker-registry && ./setup-configs.sh && ./run.sh

Since we use puppet-docker, the manifest for our dockerregistry server role simply sets these environment variables when running the container, configuring it to our liking.

class zapier::role::dockerregistry(
  $environment='test',
  $aws_access_key_id,
  $aws_secret_access_key,
  $s3_bucket,
  $worker_secret_key
){
  class { '::docker':
    tcp_bind => 'tcp://127.0.0.1:4243'
  }
  
  Docker::Image { 
    require => Class['docker'],
  }

  docker::image { 'base':}

  docker::run { 'docker-registry':
    image   => 'samalba/docker-registry',
    ports   => ['5000:5000'],
    command => '',
    env     => [
      "SETTINGS_FLAVOR=$environment",
      "AWS_ACCESS_KEY_ID=$aws_access_key_id",
      "AWS_SECRET_KEY=$aws_secret_access_key",
      "S3_BUCKET=$s3_bucket",
      "WORKER_SECRET_KEY=$worker_secret_key"
    ],
    require => Class['docker'],
  }

}

I’m a really big fan of this concept: it means people can publish Docker containers that work as standalone application appliances, with users tweaking them to their liking via environment variables.

EDIT: Although I used Puppet to run Docker in this example, you don’t have to. You can just as easily do the following:

docker run -e SETTINGS_FLAVOR=prod -e AWS_ACCESS_KEY_ID=$aws_access_key -e AWS_SECRET_KEY=$aws_secret_key -e S3_BUCKET=$s3_bucket -e WORKER_SECRET_KEY=$worker_key -p 5000:5000 -m 0 samalba/docker-registry


Published at DZone with permission of the author, DZone MVB. (source)

Opinions expressed by DZone contributors are their own.
