
Solving Docker Resource Leaks

Using Docker? Finding yourself constantly running out of disk space? Here are a few tricks you can use to solve your Docker-related problems.



We moved our daily CI tests from VMs to Docker early last year. It is just awesome! Way faster and more cost-effective.

But one annoying thing keeps dragging us down: the Docker daemon host runs low on disk space quite often.


For deployment CI alone, we run the routine test cases below:

  1. All-in-one deployment.
  2. Standard three-node cluster and six-node cluster deployment.
  3. Customized cluster deployment.
  4. DB and application HA deployment tests.
  5. Continuous upgrade test.
  6. Sandbox Test by Docker-in-Docker.

All test cases run against both the Docker and DigitalOcean drivers, with Docker as the default. Consequently, two unpleasant issues keep showing up:

  1. The Docker machine runs out of memory (OOM) when heavy tests run simultaneously, and we have to force a reboot. This leaves garbage files behind.
  2. Container removal fails in the Docker-in-Docker scenario. Again, the failed resource cleanup costs us disk space.

Eventually, we get alerts of low disk space over and over. What to do?
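Before removing anything, it helps to see where the space actually goes. A quick check (this assumes the default /var/lib/docker data root; note that docker system df only exists on Docker 1.13 and newer):

# How full is the partition that holds Docker's data?
df -h /var/lib/docker

# Which directories under it are the heavy ones?
sudo du -sh /var/lib/docker/*

# On Docker 1.13+, the daemon can report usage itself
docker system df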

Remove All Unused Resources

A very common suggestion. Yes, remove all unused containers, images, and volumes.

# Remove exited or dead containers
docker ps --filter status=dead \
    --filter status=exited -aq \
    | xargs -r docker rm -v

# Remove unused docker images
docker rmi $(docker images | grep "<none>"\
    | awk -F' ' '{print $3}')

# Remove orphaned docker volumes
docker volume ls -qf dangling=true \
    | xargs -r docker volume rm
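As a side note: on Docker 1.13 and later (an assumption; these subcommands don't exist on older daemons), the same cleanup collapses into the prune family:

# Equivalent cleanup on Docker 1.13+
docker container prune -f
docker image prune -f
docker volume prune -f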

Strictly speaking, we're facing a resource leak. This tip helps, but unfortunately, it doesn't solve the problem.

Keep Docker Up to Date

Every day brings exciting news and improvements to Docker, and newer versions handle resource reclamation noticeably better.

Install the latest Docker:

curl -sSL https://get.docker.com/ | sudo sh

Or upgrade Docker to a given version and pin it. Use this at your own risk! Docker might fail to start.

# Install the latest release first
wget -qO- https://get.docker.com/ | sh
# Show which versions are available
apt-cache showpkg docker-engine
# Install the given version of docker
apt-get install docker-engine=1.12.1-0~trusty
# Prevent upgrades during system upgrades
apt-mark hold docker-engine
docker version
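When you later decide to move to a newer release, lift the hold first, or apt will keep serving the pinned version:

# Release the pin, then upgrade as usual
sudo apt-mark unhold docker-engine
sudo apt-get update && sudo apt-get install docker-engine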

Rebuild the Docker Environment from Scratch

We're using aufs for docker storage, which is quite common. The folders growing in size are:

  • /var/lib/docker/aufs/diff
  • /var/lib/docker/aufs/mnt

Running the du command shows our mnt directory takes 18 GB and the diff directory takes 85 GB, while a reasonable estimate says the total should be less than 10 GB. No doubt some folders could be deleted to reclaim capacity. But how do we selectively remove folders? Before Docker 1.10, we could map a container ID to its subdirectories under diff. Now it's much more complicated! And even if we could, it's better not to: who knows how Docker's internal layout will evolve.
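For reference, the 18 GB and 85 GB numbers above come from a plain du run (this assumes the aufs storage driver and the default data root):

# Measure the aufs layer directories
sudo du -sh /var/lib/docker/aufs/diff /var/lib/docker/aufs/mnt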

Examining our CI case carefully, we noticed that there is actually only one important container. It runs Jenkins plus its facilities (Kitchen, Chef, ssh scripts, etc.). All other containers are disposable. That suggests another approach: what if we reinstall Docker and start fresh?

  • First, export unrecoverable data from the container.
# Calculate disk usage of containers.
# Note: this does take time!
docker ps -s

# Or run the du command inside each container.
du -h -d 1 /

To save the valuable data, we don't want to deal with tricky application-level backups or redo manual steps. Instead, we simply export the whole container to an archive file. To keep the file as small as possible, remember to clean up inside the container before exporting.
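What that in-container cleanup looks like depends entirely on your workload; here is a hedged sketch (the paths are illustrative, not from our setup):

# Trim obvious junk inside the running container
# (the paths below are examples only)
docker exec $container_id sh -c \
    'rm -rf /tmp/* /var/log/*.log /root/.cache'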

docker export $container_id > $container.tar

  • Soft-delete /var/lib/docker (move it aside rather than rm it, so you can roll back) and reinstall the Docker daemon.
  • Recreate the container and restore it to its original state.
docker import $container.tar jenkins:restored  # the tag name is up to you
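Keep in mind that docker import produces an image, not a running container, and that export/import drops metadata such as CMD and ENTRYPOINT. The final step is therefore a docker run with the start command spelled out. A sketch, where the container name, port, and start script are assumptions for illustration:

# Recreate the container from the imported image; the container
# name, port mapping, and start command here are hypothetical
docker run -d --name jenkins -p 8080:8080 jenkins:restored \
    /usr/local/bin/start.sh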

The whole process might take a while. For your reference, our Jenkins container takes up 14GB on disk. Docker export takes 5 minutes, and Docker import takes 10 minutes.


To sum up:

  1. Keep Docker up to date.
  2. When the ordinary tricks don't stop the disk leak, rebuild the Docker environment from scratch, with a proper export/import of the containers that matter.
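One last practical note: once the environment is rebuilt, keep the leak from creeping back by running the cleanup from the first section on a schedule. A minimal sketch, assuming a cron-driven shell script (the path and schedule are our own choices, not part of the original setup):

#!/bin/sh
# /usr/local/bin/docker-cleanup.sh (hypothetical path)
# Run from cron, e.g.: 0 2 * * * root /usr/local/bin/docker-cleanup.sh
docker ps --filter status=dead --filter status=exited -aq \
    | xargs -r docker rm -v
docker images -qf dangling=true | xargs -r docker rmi
docker volume ls -qf dangling=true | xargs -r docker volume rm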



Published at DZone with permission of Denny Zhang, DZone MVB. See the original article here.
