Container Image Authenticity
There are plenty of Docker images and repositories on the Internet for every type of application under the sun, but if you are pulling images without using any trust and authenticity mechanism, you are basically running arbitrary software on your systems.
- Where did the image come from?
- Do you trust the image creator? Which security policies are they using?
- Do you have objective cryptographic proof that the author is actually that person?
- How do you know nobody has been tampering with the image after you pulled it?
Docker will let you pull and run anything you throw at it by default, so encapsulation won't save you from this. Even if you only consume your own custom images, you want to make sure nobody inside the organization is able to tamper with an image. The solution usually boils down to the classical PKI-based chain of trust.
Best practices:
- The regular Internet common sense: do not run unverified software from sources that you don’t explicitly trust.
- Deploy a container-centric trust server using one of the available Docker registry servers.
- Enforce mandatory signature verification for any image that is going to be pulled or run on your systems.
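If your Docker client already ships the docker trust subcommand, you can check who signed a public image before you run it; a quick sketch, with the output abbreviated:
# docker trust inspect --pretty alpine:latest
Signatures for alpine:latest

SIGNED TAG   DIGEST   SIGNERS
latest       [...]    [...]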
Example: Deploying a full-blown trust server is beyond the scope of this card, but you can start signing your images right away.
- Get a Docker Hub account if you don’t have one already.
- Create a directory containing the following trivial Dockerfile:
# cat Dockerfile
FROM alpine:latest
Build the image:
# docker build -t <youruser>/alpineunsigned .
Log into your Docker Hub account and submit the image:
# docker login
[…]
# docker push <youruser>/alpineunsigned:latest
Enable Docker trust enforcement:
# export DOCKER_CONTENT_TRUST=1
Now, try to retrieve the image you just uploaded:
# docker pull <youruser>/alpineunsigned
You should receive the following error message:
Using default tag: latest
Error: remote trust data does not exist for docker.io/<youruser>/alpineunsigned:
notary.docker.io does not have trust data for docker.io/<youruser>/alpineunsigned
Now that DOCKER_CONTENT_TRUST is enabled, build the container again; this time, it will be signed by default when you push it.
# docker build --disable-content-trust=false -t <youruser>/alpinesigned:latest .
Now you should be able to push and pull the signed container without any security warning. The first time you push a trusted image, Docker will create a root key for you, and you will also need a repository key for the image; both will prompt you for a user-defined passphrase.
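For reference, that first trusted push looks roughly like this (output abbreviated and version-dependent; the key IDs shown are placeholders):
# docker push <youruser>/alpinesigned:latest
The push refers to repository [docker.io/<youruser>/alpinesigned]
[...]
Signing and pushing trust metadata
Enter passphrase for new root key with ID <root_key_id>:
Enter passphrase for new repository key with ID <repo_key_id>:
Successfully signed docker.io/<youruser>/alpinesigned:latest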
Your private keys are in the ~/.docker/trust directory; safeguard and back them up.
DOCKER_CONTENT_TRUST is just an environment variable and will die with your shell session. Trust validation, however, should be implemented across the entire process: from building the images and hosting them in the registry to executing them on the nodes.
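One small piece of that is making the variable outlive the interactive session; a minimal sketch is to persist it in the build user's shell profile (the path is illustrative) and to export it at the top of any CI job script that builds, pulls, or pushes images:
# echo 'export DOCKER_CONTENT_TRUST=1' >> ~/.bashrc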
Docker Credentials and Secrets
Your software needs sensitive information to run: user password hashes, server-side certificates, encryption keys, etc. This situation is made worse by the nature of containers: you don’t just “set up a server”; there are many distributed containers that may be constantly created and destroyed. You need an automatic and secure process to share this sensitive information.
Best practices:
- Do not use environment variables for secrets; this is a very common yet very insecure practice.
- Do not embed any secrets in the container image. Read this IBM post-mortem report: “The private key and the certificate were mistakenly left inside the container image.”
- Deploy Docker credentials management software if your deployments get complex enough. Do not attempt to create your own "secrets storage" (curl-ing from a secrets server, mounting volumes, etc.) unless you really know what you are doing.
Examples: First, let’s see how easily a password passed as an environment variable can be read:
# docker run -it -e password='S3cr3tp4ssw0rd' alpine sh
/ # env | grep pass
password=S3cr3tp4ssw0rd
It's that simple, even if you su to a regular user:
/ # su user
/ $ env | grep pass
password=S3cr3tp4ssw0rd
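The variable is also visible to anybody who can inspect the container from the host; for example, from another terminal (the container ID is illustrative):
# docker inspect -f '{{.Config.Env}}' <container_id>
[password=S3cr3tp4ssw0rd PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin]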
Nowadays, container orchestration systems offer some basic secret management. For example, Kubernetes has the secrets resource, and Docker Swarm also has its own secrets feature, which will be quickly demonstrated here.
Initialize a new Docker Swarm (you may want to do this on a VM):
# docker swarm init --advertise-addr <your_advertise_ip>
Create a file with some random text, your secret:
# cat secret.txt
This is my secret
Create a new secret resource from this file:
# docker secret create somesecret secret.txt
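You can verify that the Swarm stored it (output abbreviated):
# docker secret ls
ID            NAME         CREATED   UPDATED
<secret_id>   somesecret   [...]     [...]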
Create a Docker Swarm service with access to this secret; you can modify the uid, gid, mode, etc.:
# docker service create --name nginx --secret source=somesecret,target=somesecret,mode=0400 nginx
Log into the Nginx container; you will be able to use the secret:
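One way to get that shell, assuming the service has a single running task on this node (the name filter is only an illustration):
# docker exec -it $(docker ps -q -f name=nginx) bash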
root@3989dd5f7426:/# cat /run/secrets/somesecret
This is my secret
root@3989dd5f7426:/# ls -l /run/secrets/somesecret
-r-------- 1 root root 19 Aug 28 16:45 /run/secrets/somesecret
This is a minimal proof of concept. At the very least, now your secrets are properly stored and can be revoked or rotated from a central point of authority.
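For comparison, the Kubernetes secrets resource mentioned above covers the same ground; a minimal sketch of the equivalent first step (the resource name is illustrative) would be:
# kubectl create secret generic somesecret --from-file=./secret.txt
The secret can then be mounted into pods as read-only files, much like the files under /run/secrets in the Swarm example.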
Container Resource Abuse
Containers are much more numerous than virtual machines on average. They are lightweight and you can spawn big clusters of them on modest hardware. That’s definitely an advantage, but it implies that a lot of software entities are competing for the host resources. Software bugs, design miscalculations, or a deliberate malware attack can easily cause a Denial of Service if you don’t properly configure resource limits.
To add to the problem, there are several different resources to safeguard: CPU, main memory, storage capacity, network bandwidth, I/O bandwidth, swapping… plus some kernel resources that are not so evident and even more obscure ones, such as user IDs (UIDs).
Best practices: Limits on these resources are disabled by default on most containerization systems; configuring them before deploying to production is basically a must. There are three fundamental steps:
- Use the resource limitation features bundled with the Linux kernel and/or the containerization solution.
- Try to replicate the production loads on pre-production. Some people use synthetic stress tests, and others choose to "replay" the actual real-time production traffic. Load testing is vital to knowing where the physical limits are and where your normal range of operations is.
- Implement Docker monitoring and alerting. You don’t want to hit the wall if there is a resource abuse problem. Malicious or not, you need to set thresholds and be warned before it’s too late.
Example: Control groups, or cgroups, are a Linux kernel feature that lets you limit the access that processes and containers have to system resources. We can configure some limits directly from the Docker command line:
# docker run -it --memory=2G --memory-swap=3G ubuntu bash
This will limit the container to 2GB main memory, 3GB total (main + swap). To check that this is working, we can run a load simulator; for example, the stress program present in the Ubuntu repositories:
root@e05a311b401e:/# stress -m 4 --vm-bytes 8G
You will see a "FAILED" notification from the stress output. If you tail the syslog on the hosting machine, you will be able to read something similar to:
Aug 15 12:09:03 host kernel: [1340695.340552] Memory cgroup out of memory: Kill process 22607 (stress) score 210 or sacrifice child
Aug 15 12:09:03 host kernel: [1340695.340556] Killed process 22607 (stress) total-vm:8396092kB, anon-rss:363184kB, file-rss:176kB, shmem-rss:0kB
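From the hosting machine you can also watch the container hitting its ceiling (columns abbreviated; the figures are illustrative):
# docker stats --no-stream e05a311b401e
CONTAINER ID   NAME    CPU %   MEM USAGE / LIMIT   MEM %   [...]
e05a311b401e   [...]   [...]   1.99GiB / 2GiB      99.7%   [...]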
As shown above, docker stats reports both current memory usage and the configured limits. If you are using Kubernetes, you can actually reserve the resources that your application needs to run properly and define maximum limits, using requests and limits in each pod definition:
[...]
    - name: wp
      image: wordpress
      resources:
        requests:
          memory: "64Mi"
          cpu: "250m"
        limits:
          memory: "128Mi"
          cpu: "500m"
[...]
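Back on the Docker command line, memory is not the only knob either; CPU and process-count limits can be set the same way (the values are just an illustration):
# docker run -it --cpus=1.5 --pids-limit=100 ubuntu bash
The --pids-limit flag in particular guards against fork bombs, one of those less evident kernel resources mentioned earlier.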
Static Vulnerability Scanning
Containers are isolated black boxes: if they are doing their work as expected, it’s easy to forget which software and version is specifically running inside. Maybe a container is performing like a charm from the operational point of view, but it’s running version X.Y.Z of the web server, which happens to suffer from a critical security flaw. This flaw was fixed long ago upstream, but not in your local image. This kind of problem can go unnoticed for a long time if you don’t take the appropriate measures.
Best practices: Picturing the containers as immutable atomic units is really nice for architecture design, but from the security perspective, you need to regularly inspect their contents:
- Update and rebuild your images periodically to grab the newest security patches. Of course, you will also need a pre-production testbench to make sure these updates are not breaking production.
Live-patching containers is usually considered a bad practice; the pattern is to rebuild the entire image with each update (see the CI sketch after this list). Docker has declarative, efficient, easy-to-understand build systems, so this is easier than it may sound at first.
- Use software from a distributor that guarantees security updates. For anything you install manually outside of the distro, you have to manage security patching yourself.
- Docker and microservice-based approaches treat progressively rolling out updates without disrupting uptime as a fundamental requisite of their model.
- User data is clearly separated from the images, making this whole process safer.
- Keep it simple. Minimal systems require less frequent updates. Remember the intro: less software and fewer moving parts equals less attack surface and fewer updating headaches. Try to split your containers if they get too complex.
- Use a vulnerability scanner. There are plenty out there, both free and commercial. Try to stay up-to-date on the security issues of the software you use by subscribing to mailing lists, alert services, etc.
- Integrate this vulnerability scanner as a mandatory step of your CI/CD and automate where possible; don’t just manually check the images now and then.
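As a sketch of those last points, a CI stage can rebuild from a fresh base image and refuse to publish anything that fails the scan; scan-image below is a placeholder for whatever scanner CLI you plug in, and the image name is illustrative:
# Rebuild, pulling the base image again so upstream security fixes are picked up
docker build --pull --no-cache -t <registry>/<image>:<tag> .
# scan-image is a placeholder for your scanner's CLI; a non-zero exit aborts the job
scan-image <registry>/<image>:<tag> || exit 1
docker push <registry>/<image>:<tag>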
Example: There are multiple Docker image registry services that offer image scanning. For this example, we decided to use CoreOS Quay, which uses the open-source Docker image security scanner Clair. Quay is a commercial platform, but some services are free to use; you can create a personal trial account to follow along.
Once you have your account, go to Account Settings and set a new password (you need this to create repos).
Click on the + symbol at the top right and create a new public repo:

We go for an empty repository here, but you have several other options when creating the repository.
Now, from the command line, we log into the Quay registry and push a local image:
# docker login quay.io
# docker push quay.io/<your_quay_user>/<your_quay_image>:<tag>
Once the image is uploaded into the repo, you can click on its ID and inspect the image security scan, ordered by severity, with the associated CVE report link and upstream patched package versions.
