I want a general solution for verifying all Docker image builds, one that works across different projects. That means less time and effort, and certainly less money spent!
As development goes on, we occasionally need to rebuild Docker images to do the following:
- Installing new packages.
- Updating existing packages to a new version.
- Reconfiguring or tuning services via config files or similar means.
Rebuilding old images may fail in many ways:
- Unexpected manual steps. It would be frustrating if we couldn’t build images purely from the Dockerfile.
- Installing the latest versions of packages may introduce incompatibilities.
- Installing packages pinned to specific versions may fail; those versions might have been marked obsolete in the repo.
- Outages of dependent services, and so on.
See more about this in this article.
I choose to use Docker-in-Docker (DIND) to verify image builds. Why? Yes, I know many people are strongly against DIND. (Like this one.)
With my years of experience, I’m confident that DIND fits this scenario well:
- Super easy to set up and run. Literally, we only need `docker run` and then perform the tests.
- It leaves no trace or garbage on the Docker host. On a shared host, our tests might otherwise disturb other people’s daily work.
For your convenience, I’ve published a Docker image (see Dockerfile) with the Docker daemon pre-installed.
```shell
# Pull docker image
docker pull denny/docker:v1.0

# Start a container to build images inside
docker run -t -d --privileged -h dockerimages \
    --name docker-images denny/docker:v1.0 /usr/bin/dockerd

# Log in to the container
docker exec -it docker-images bash

# Download a dockerfile for test purposes
wget -O /root/java_v1_0.dockerfile \
    https://raw.githubusercontent.com/DennyZhang/devops_docker_image/tag_v2/java/java_v1_0.dockerfile

# Build docker image inside current container
cd /root/
docker build -f java_v1_0.dockerfile -t denny/java:v1.0 --rm=true .

# Verify docker images
docker images | grep denny/java
```
Suggestions for Running DIND
- Don’t use devicemapper as your storage driver. By default, containers get only 10 GB of disk. Yes, we could mount volumes for big folders, but that is still inconvenient.
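To switch drivers, the daemon’s storage backend can be set in its config file. A minimal sketch, assuming a Linux host where the daemon reads `/etc/docker/daemon.json` (choosing `overlay2` as the replacement is my suggestion, not from the original text):

```json
{
  "storage-driver": "overlay2"
}
```

Restart the Docker daemon afterwards; note that images built under the old storage driver won’t be visible until you switch back.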
- Stop containers first, before decommissioning. At the end of testing, make sure we stop the containers first, then stop the Docker daemon, and finally destroy the environment.
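The teardown order can be sketched as a few commands, assuming the DIND container is named `docker-images` as in the earlier example:

```shell
# 1. Stop the containers running inside the DIND container first
docker exec docker-images sh -c 'docker ps -q | xargs -r docker stop'
# 2. Then stop the DIND container itself, which stops its inner Docker daemon
docker stop docker-images
# 3. Finally destroy the environment
docker rm docker-images
```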
- When building multiple images, the sequence matters. Image B might depend on image A. If image A is public, do we still need to care about the build order? The answer is yes: we need to build B from the latest A, instead of the version on Docker Hub.
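To make the ordering concrete, here is a hypothetical Dockerfile for image B (the image names are illustrative, not from the original):

```dockerfile
# app_b.dockerfile (hypothetical): image B builds on image A.
# Build (or pre-load) denny/base-a locally before running this build;
# otherwise docker resolves the FROM line against Docker Hub, and the
# test won't exercise the freshly built A.
FROM denny/base-a:latest
RUN apt-get update && apt-get install -y --no-install-recommends curl
```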
- Speed up image builds by pre-loading frequently used golden images. Pre-loading not only saves a lot of time, but also avoids false negatives caused by network turbulence.
```shell
# Export docker image
docker save ubuntu:14.04 > ubuntu_1404.tar.gz

# Import docker image
docker load -i ubuntu_1404.tar.gz
```
What About Sensitive Files?
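One concrete caution worth illustrating here: anything passed through `--build-arg` or copied into a layer remains recoverable from image metadata, so secrets shouldn’t be baked into test builds. A hypothetical illustration (the token name is made up):

```shell
# Passing a secret as a build arg looks convenient...
docker build --build-arg GIT_TOKEN=changeme -t denny/java:v1.0 .
# ...but the value is recoverable from the image's layer history:
docker history --no-trunc denny/java:v1.0 | grep GIT_TOKEN
```

A safer pattern is to mount sensitive files into the DIND container at run time (`docker run -v ...`) so they never enter an image layer.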
Wrap up everything as a Jenkins job.
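One way to wrap the flow into a Jenkins job is a declarative pipeline. This is only a sketch under my own assumptions; the stage layout and the reuse of the earlier commands are illustrative, not taken from the original:

```groovy
pipeline {
    agent any
    stages {
        stage('Start DIND') {
            steps {
                sh '''
                   docker run -t -d --privileged -h dockerimages \
                       --name docker-images denny/docker:v1.0 /usr/bin/dockerd
                   '''
            }
        }
        stage('Build and verify images') {
            steps {
                // Assumes java_v1_0.dockerfile was fetched into /root,
                // as in the manual walkthrough above
                sh '''
                   docker exec docker-images sh -c \
                       "cd /root && docker build -f java_v1_0.dockerfile -t denny/java:v1.0 . \
                        && docker images | grep denny/java"
                   '''
            }
        }
    }
    post {
        // Clean up in order: inner containers first, then the DIND container
        always {
            sh 'docker exec docker-images sh -c "docker ps -q | xargs -r docker stop" || true'
            sh 'docker rm -f docker-images || true'
        }
    }
}
```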