
Making Lightweight Docker Images


It turns out, Docker images are, for the most part, BIG. This is the story of how the Ops team at Librato is making them smaller.



It’s often said that containers are “lightweight”. When I first heard this, I incorrectly assumed we were talking about file-system footprint, because the sheer size of a VM image on disk is consequential; it places limits on my infrastructure design choices. Docker containers are designed to run single processes; a tantalizing detail from which I further incorrectly inferred some sort of magical, dependency-free process isolation with a featherweight footprint. In real life, though, most of the Docker containers we use are about the same size (in megabytes) as their VM counterparts, and “lightweight” refers to the comparatively light processing overhead, as compared to the overhead VMs incur when they emulate a hardware layer.


No Country for Linked Binaries

The thing is, you can't just throw a program into a process-isolated jail and expect it to work. Most of the computer programs we run every day are dynamically linked. Like Master Pandemonium, our programs are missing critical pieces of themselves. Pieces they need to live. Pieces that the system's linker normally bolts on to them at runtime. But in process isolation, our programs can't see the rest of the filesystem; they have no access to the library files that would fill the holes in their soul. The linker cannot help them, and so they flail momentarily, and then violently perish.

Like the moon men and Sandy the Squirrel, if our program is to survive process isolation, it needs to bring with it its own air, water, and food. Two ways we can achieve this are to compile a static binary (i.e., bypass the linker by compiling together all of the pieces of our program into one large binary), or provide our process with its own chroot. In other words: figure out what it needs (every library it's linked to, and every file it depends on), and copy it all into the image along with the thing we actually want to run.
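For a concrete sense of what the linker would have to bolt on, here's a quick illustration using ldd (with /bin/sh purely as a stand-in for any dynamically linked program):

```shell
# List every shared library the runtime linker would load for /bin/sh.
# Each resolved path after "=>" is a file that must also exist inside
# the jail/chroot if the program is to survive there.
ldd /bin/sh

# Extract just the library paths, i.e., the files we'd have to copy in.
ldd /bin/sh | awk '/=>/ && $3 ~ /^\// { print $3 }'
```

Run that against something like Nginx or the Ruby interpreter and the list gets long in a hurry.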

It's not easy to figure out just what exactly something like Nginx needs though, and no human alive is even capable of predicting what some random Ruby script needs. Computers powerful enough to model fluid dynamics spend thousands of milliseconds trying to resolve the dependencies necessary to run a Ruby script. As a result, we usually take a more expedient third path: we just copy the entire Ubuntu filesystem into the image, minus what you'd find in /boot, /dev, /proc, and friends. Our images usually wind up in the 500MB-1GB range.

Who Cares About Smaller Docker Images?

Indeed, the entire population of Docker devotees interested in the file size of images would have no trouble getting a walk-in lunch table at 12:30 in any restaurant on Market Street. As I'm continually reminded by people younger than me: *shrug* disk is cheap.

Fair enough, but when I started playing with Docker to explore how we might use it to refine our deployment pipeline at Librato, I found some interesting patterns were rendered impractical by the sheer size of the images. Say we wanted to run a local, S3-backed registry on every individual node instead of a central registry. In theory, this removes a network dependency (no central registry), while making sure every node has access to the same images (everyone points at the same S3 bucket).

In practice, however, this means copying down over 500MB just to launch the local registry (the registry is itself a Docker image), and then downloading and running whatever actual images you need to launch your app from the S3-backed local registry.

If you have a passing familiarity with Docker, you know that these images are composed of layers, and Docker relies on this property to avoid unnecessary copying by transferring only the layers that don't already reside on your local host. In other words, the bargain is that you only have to copy the "heavy stuff" once: the first time. After that, pulling a new version of the image is comparatively free.

If you run ephemeral infrastructure though, like we do at Librato, you'll often create new instances to scale for demand, or perform an automated break/fix. This means that other than deploys (depending on how you deploy), every time you docker run an image, it'll be the first time, and you'll pay the entire transfer tax. I haven't run the numbers on our infrastructure, but shooting from the hip I can tell you, that tax is... consequential. This is, I suspect, a not-often spoken-of justification for the popularity of Docker on Bare-Metal.

So I Wrote a Shell Script...

Ignoring for the moment whether my desire for smaller images is irrationally ataxophobic, is there actually a way to make these things smaller? There are, in fact, a few tools out there today that can help. Dockerize will take a simple binary like wuftpd or Nginx and create a teensy container that contains just the binary and all of the libraries to which it is dynamically linked. But what if you want to run a Java or Python or Ruby script inside a Docker container? These runtime environments are complex, self-referential, and sprawling. They're not what Dockerize was designed for.

If you ask around, you might discover that a few people are using buildroot (sigh, everything old is new again), which is a series of makefiles intended to build small embedded Linux systems from scratch. This is a bit unwieldy but you can effectively build small base images this way. At the end of the day though, your image will still have a bunch of files in it that have nothing to do with your runtime other than having been necessary to build your runtime.

But in Docker, these things are layers, right? Every time we install something in a Docker container, and then commit it, Docker creates a new layer for us. So if we start with a base image, and install Java on it and commit the result, Docker has already effectively isolated a Java runtime for us in a layer. All we need to do is extract that layer, and then copy all the libraries that the Java binaries link against from the parent image, and we should have a functional, minimal, cruft-free, Java runtime image.

So I wrote a shell script that helps you extract these layers, resolve and copy their lib dependencies and commit the result into a new image. It's called Skinnywhale, and so far, it's working pretty great for me, so I thought you might like to check it out too.

How Does It Work?

Let's create a Java runtime image together. You begin as you normally would, with a base image like "ubuntu" (Skinnywhale will work with any kind of base image). Just run the image and install whatever you want on it.

# download and run the ubuntu docker image (on the host)
sudo docker run -ti ubuntu

# inside the container we're already root, so no sudo needed: install java
apt-get update
apt-get install -y software-properties-common
add-apt-repository ppa:webupd8team/java
apt-get update
apt-get install -y oracle-java8-installer
rm -rf /var/cache

Then, exit the container, and without committing your changes, run Skinnywhale with the ID of the container you just ran. You can copy the container ID from the bash prompt or from Docker's ps command after the fact:

docker ps -a

There are a few environment variables Skinnywhale listens to. Setting DEBUG will turn on verbose output, while BRUTELIB and BRUTEUSRLIB will brute-force copy over the entire contents of /lib and /usr/lib, respectively, from the parent image. When you're ready, run Skinnywhale with your image ID like so:

skinnywhale 8efbc5497abb

At this point, Skinnywhale will make directories in /tmp for your parent image, and your change layer. It then walks the directory tree of your changes, making a list of all the files that are dynamically linked to something. Then, for each file in that list, it runs ldd, and makes a uniqued list of each dependency. Finally, it copies each dependency from the parent image to your changes directory, and uses tar piped to docker import to inject the result back into Docker as a new image.
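The heart of that walk can be sketched in a few lines of shell (a simplified illustration, not Skinnywhale's actual code — here a temp directory stands in for the extracted change layer, and /bin/ls stands in for the runtime we isolated):

```shell
#!/bin/sh
# Simplified sketch of the dependency walk: for every file in the
# change layer, ask ldd for its shared libraries, unique the list,
# and copy each one in, preserving its directory layout.
CHANGES=$(mktemp -d)     # stand-in for the extracted change layer
cp /bin/ls "$CHANGES/"   # stand-in for the runtime we isolated

find "$CHANGES" -type f | while read -r f; do
    ldd "$f" 2>/dev/null | awk '/=>/ && $3 ~ /^\// { print $3 }'
done | sort -u | while read -r lib; do
    mkdir -p "$CHANGES$(dirname "$lib")"
    cp "$lib" "$CHANGES$lib"
done

# The copied dependencies now live alongside the binary:
find "$CHANGES" -name '*.so*'
```

The real thing also has to diff the change layer against the parent image and pipe the result through tar and docker import, but the ldd-and-copy loop above is the essential trick.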

Depending on the runtime you're trying to isolate, you may see some errors and/or warnings from Skinnywhale about unresolved dependencies. This means that some of the files in the runtime you've isolated literally just don't exist on the system you installed it on. For example, you'll see a lot of warnings when isolating the Java runtime, because Java is a binary distribution that comes with a lot of files linked against the system's X11 libraries, and the whole of X11 doesn't exist on server images intended for IaaS and PaaS environments, like Docker's Ubuntu image. Isn't software awful? These generally aren't a problem (if they don't prevent you from running Java on Ubuntu, they won't prevent you from running Java under a Skinnywhale-extracted image). As long as you see an ASCII starving whale at the end of the run, Skinnywhale was successful:

--- Skinny whale is positively starving ---

                    ##        .
              ## ## ##       ==
           ## ## ## ##      ===
       /""""""""""""""""\___/ ===
      /  x  x  x  x  x  \
  ~~~{ /\ ~ ~~~ ~~~~ ~~ ~ /  ===- ~~~

Aww, poor thing, its ribs are showing and everything. Anyway, at this point you should see a new image prefaced with skinny_ in your images list:

docker images

Oh, by the way, you might also run into issues with programs that use dlopen(), because Skinnywhale can't detect these dependencies (it would literally need to parse the source code). If you aren't familiar with it, the dlopen() function, like goto and the void operator in JavaScript, was created by haters to thwart the noble pursuits of good people like you and me. Java, unsurprisingly, uses dlopen() in a few contexts, including, apparently, manually loading and interacting with the system resolver libraries. So if you're having DNS-related trouble running your Java program under a Skinnywhale-isolated runtime container, try re-creating your image with BRUTELIB set.
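Here's a tiny demonstration of why ldd-based tools are blind to this (file names are illustrative; assumes a Linux box with a C compiler). A library loaded via dlopen() leaves no entry in the binary's dynamic section, so ldd never sees it:

```shell
#!/bin/sh
# dlopen()ed libraries leave no DT_NEEDED entry in the binary, so
# ldd-based dependency resolution can't discover them.
cat > dlopen_demo.c <<'EOF'
#include <dlfcn.h>
#include <stdio.h>

int main(void) {
    /* Load libm at runtime -- invisible to the static linker. */
    void *handle = dlopen("libm.so.6", RTLD_NOW);
    puts(handle ? "libm loaded at runtime" : "dlopen failed");
    return 0;
}
EOF
cc dlopen_demo.c -o dlopen_demo -ldl

# ldd lists libc, but shows no trace of libm:
ldd ./dlopen_demo | grep libm || echo "libm not listed by ldd"
./dlopen_demo
```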

Copying Over Your Script

Now that you have a nice minimalist runtime image, you can copy your code into it using either docker cp, or a Dockerfile like this one:

FROM skinny_8efbc5497abb
ADD myJavaProgram.jar /

And now you should be all set to build and run it:

docker build -t myproggy .
docker run --net=host myproggy java -jar /myJavaProgram.jar

Good Luck!

Skinnywhale began life as a Librato hack day project. You can read more about how we run our awesome and fun-filled hack days here. I sincerely hope you find it useful and would love your feedback. I would especially appreciate negative feedback explaining why this is a silly, useless tool because I've fundamentally misunderstood how Docker is supposed to work. Nothing would please me more than discovering that there was a magical means of creating tiny runtime Docker images. Compelling arguments about why I shouldn't care either way are also welcome. Good luck!



Published at DZone with permission of Dave Josephsen, DZone MVB. See the original article here.

