Refcard #258

Docker Security

Core Practices for Securing Containers

This Refcard will lay out the basics of container security, provide core practices for successful implementation, and also spell out some more advanced workflows. We split container security into three sections, covering what to do at each stage in your container security lifecycle.


Written By

Boris Zaikin
Lead Solution Architect, CloudAstro GmbH
Table of Contents
  • Introduction
  • Top Three OWASP Rules for Container Security
  • CI/CD and Pre-Deployment Security
  • Runtime Container Security
  • Incident Response
  • Conclusion
Section 1

Introduction

Are containers insecure? Not at all. Features like process isolation with user namespaces, resource encapsulation with cgroups, immutable images, and shipping only minimal software and dependencies reduce the attack surface and provide a great deal of protection.

Container security tools have become a hot topic in modern IT as early-adoption fever has matured into an established ecosystem. Security is an unavoidable subject to address when planning to change how we architect our infrastructure.

This Refcard will lay out the basics of container security, assess key challenges, provide hands-on experience with basic security options, and introduce some more advanced workflows. 

We’ll split container security into three sections, covering what to do at each step of your container security lifecycle:

  • CI/CD and pre-deployment security 
  • Runtime security 
  • Incident response and forensics 

Lastly, we will take a look at Docker and Kubernetes security principles, how cloud providers work with Docker containers, and what security options are available.  

Section 2

Top Three OWASP Rules for Container Security

Before we jump into the rules, let's briefly describe OWASP. The Open Web Application Security Project (OWASP) is a non-profit community of security specialists that provides free security guidelines and recommendations.

In this Refcard, we will introduce three OWASP rules that should be implemented in projects that operate via Docker containers, specifically: 

  1. Add the --no-new-privileges flag.
  2. Limit capabilities (grant only the specific capabilities a container needs).
  3. Keep the host and Docker up to date.
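
The first two rules map directly onto docker run flags. Below is a minimal sketch of a launch command that applies both (the image name and the CHOWN capability are illustrative):

Shell
docker run --security-opt=no-new-privileges --cap-drop ALL --cap-add CHOWN backend-app:1.1.3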
Section 3

CI/CD and Pre-Deployment Security

Pre-deployment security in CI/CD is essential. With the following practices, you can detect vulnerabilities before they end up in your application:

Container Image Authenticity

There are plenty of Docker images and repositories on the Internet for every type of application under the sun, but if you are pulling images without using any trust and authenticity mechanism, you are basically running arbitrary software on your systems.  

Ask the following questions to prevent disasters:

  • Where did the image come from? 
  • Do you trust the image creator? Which security policies are they using? 
  • Do you have objective cryptographic proof that the author is actually that person? 
  • How do you know the image was not tampered with after you pulled it? 

Docker will let you pull and run anything you throw at it by default, so encapsulation won't save you from this. Even if you only consume your own custom images, you want to make sure nobody inside the organization is able to tamper with an image. The solution usually boils down to the classical PKI-based chain of trust.  

In addition, Docker and Docker Hub have integrated image vulnerability scanning. This feature is based on the Snyk tool and scans your images after you push them to the hub. Below, you can see a vulnerability scan report:

[Image: Docker Hub vulnerability scan report]

Also, you can achieve the same result using the docker scan command to scan your local images. You can run the following example and see the standard report: 

Shell
docker scan backend-app:1.1.3 

However, if you need an extended report, you can run the following example: 

Shell
docker scan --file Dockerfile backend-app:1.1.3 

Core Practices: 

  • Apply regular Internet common sense: Do not run unverified software from sources that you don’t explicitly trust.
  • Deploy a container-centric trust server using some of the Docker registry servers. 
  • Enforce mandatory signature verification for any image that is going to be pulled or run on your systems. 
  • You should disable image privilege escalation. You can do so by adding the flag --security-opt=no-new-privileges when running your images.  

If you run your Docker container in Kubernetes, you can add the allowPrivilegeEscalation flag: 

YAML
apiVersion: v1
kind: Pod
metadata:
  name: backend-identity-service
spec:
  containers:
  - name: backend-identity-service
    image: path/to/docker/image/backend-identity-service:1.0.3
    securityContext:
      allowPrivilegeEscalation: false

Grant only the specific capabilities your containers need, using the --cap-drop flag, and never run containers with the --privileged flag! The recommended approach is to drop all capabilities and then add back only the ones you need, one by one:

Shell
docker run --cap-drop all --cap-add CHOWN backend-svc:1.0.3 

Example: Deploying a full-blown trust server is beyond the scope of this Refcard, but you can start signing your images right away.

  1. Get a Docker Hub account if you don’t have one already.
  2. Create a directory containing the following trivial Dockerfile:

Shell
# cat Dockerfile
FROM alpine:latest

  3. Build the image and push it to your Docker Hub account (unsigned for now):

Shell
# docker build -t <youruser>/alpineunsigned .
# docker login
# docker push <youruser>/alpineunsigned:latest

  4. Enable Docker trust enforcement:

Shell
# export DOCKER_CONTENT_TRUST=1

  5. Now, try to retrieve the image you just uploaded:

Shell
# docker pull <youruser>/alpineunsigned:latest

You should receive the following error message:

Shell
Using default tag: latest
Error: remote trust data does not exist for docker.io/<youruser>/alpineunsigned:
notary.docker.io does not have trust data for docker.io/<youruser>/alpineunsigned

  6. Now that DOCKER_CONTENT_TRUST is enabled, build the container again and it will be signed by default:

Shell
# docker build --disable-content-trust=false -t <youruser>/alpinesigned:latest .
Now, you should be able to push and pull the signed container without any security warning. The first time you push a trusted image, Docker will create a root key for you. You will also need a repository key for the image. Both will prompt you for a user-defined password. 

Your private keys are in the ~/.docker/trust directory; safeguard and back them up. 

DOCKER_CONTENT_TRUST is just an environment variable and will die with your shell session. But trust validation should be implemented across the entire process, from image building and hosting in the registry to image execution on the nodes.
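
At any point, you can check which signatures exist for a tag with the docker trust command (available in modern Docker CLI versions):

Shell
# docker trust inspect --pretty <youruser>/alpinesigned:latest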

Docker Credentials and Secrets 

Your software needs sensitive information to run: user password hashes, server-side certificates, encryption keys, etc. This situation is made worse by the nature of containers; you don’t just “set up a server” — there’s a large number of distributed containers that may be constantly created and destroyed. You need an automatic and secure process to share this sensitive info. 

Core Practices: 

  • Do not use environment variables for secrets; this is a very common yet very insecure practice. 
  • Do not embed any secrets in the container image. For example, GitGuardian published a report about secrets in popular Docker images: “Actually, 7% of the images contained at least one secret.” 
  • Deploy a Docker credentials management software if your deployments get complex enough. Do not attempt to create your own "secrets storage" (curl-ing from a secrets server, mounting volumes, etc.) unless you really know what you are doing. 

Examples: First, let’s see how easy it is to read a secret passed as an environment variable:

Shell
# docker run -it -e password='S3cr3tp4ssw0rd' alpine sh 
/ # env | grep pass 
password=S3cr3tp4ssw0rd 

It's that simple, even if you are a regular user: 

Shell
/ # su user 
/ $ env | grep pass 
password=S3cr3tp4ssw0rd 

Nowadays, container orchestration systems offer some basic secrets management. For example, Kubernetes has the secrets resource. Docker Swarm also has its own secrets feature, which will be quickly demonstrated below: 

Initialize a new Docker Swarm (you may want to do this on a VM): 

Shell
# docker swarm init --advertise-addr <advertise-ip>

Create a file with some random text; this is your secret: 

Shell
# cat secret.txt
This is my secret.

Create a new secret resource from this file: 

Shell
# docker secret create somesecret secret.txt 

Create a Docker Swarm service with access to this secret; you can modify the uid, gid, mode, etc.: 

Shell
# docker service create --name nginx --secret source=somesecret,target=somesecret,mode=0400 nginx 

Log into the Nginx container; you will be able to use the secret: 

Shell
root@3989dd5f7426:/# cat /run/secrets/somesecret
This is my secret.
root@3989dd5f7426:/# ls -l /run/secrets/somesecret
-r-------- 1 root root 19 Aug 28 16:45 /run/secrets/somesecret

This is a minimal proof of concept. At the very least, now your secrets are properly stored and can be revoked or rotated from a central point of authority. 
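
That central point of authority also makes rotation straightforward; here is a sketch of rotating the secret on the running service (the new secret name and file are illustrative):

Shell
# docker secret create somesecret-v2 new-secret.txt
# docker service update --secret-rm somesecret --secret-add source=somesecret-v2,target=somesecret nginx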

HashiCorp Vault Example

You can use different secret providers. HashiCorp Vault allows you to store SSL certificates, passwords, and tokens, and it supports both symmetric and asymmetric keys. To use it in Docker, pull the official HashiCorp Vault image from Docker Hub. Then you can run the Vault development server using the following command:

Shell
$ docker run --cap-add=IPC_LOCK -d --name=dev-vault vault 

To run Vault in server mode for non-dev environments, supply a configuration through the official image’s VAULT_LOCAL_CONFIG variable and start it with the server command (the file backend and TTL values below are illustrative):

Shell
$ docker run --cap-add=IPC_LOCK -d --name=prod-vault -e 'VAULT_LOCAL_CONFIG={"backend": {"file": {"path": "/vault/file"}}, "default_lease_ttl": "168h", "max_lease_ttl": "720h"}' vault server

As an alternative to HashiCorp Vault, you can use Azure Key Vault and environment variables to pass secrets to the Docker container. In AWS, you can use Secrets Manager; to fetch a secret for a Docker container, you can use the AWS SDK or CLI. For example:

Shell
aws --profile <YOUR PROFILE FROM ~/.aws/credentials> --region <REGION eg. us-east-1> secretsmanager get-secret-value --secret-id <YOUR SECRET NAME> 

You can also consume Kubernetes secrets in your Docker containers. You can achieve this by creating an environment variable with a secretKeyRef section:

YAML
apiVersion: apps/v1
kind: Deployment
metadata:
  name: back-end-project
spec:
  replicas: 1
  selector:
    matchLabels:
      app: back-end-project
  template:
    metadata:
      labels:
        app: back-end-project
    spec:
      containers:
      - name: back-end-project
        image: <path to docker container>
        ports:
        - containerPort: 80
        env:
        - name: SOME_ENV_VAR
          valueFrom:
            secretKeyRef:
              name: secret-name
              key: secret-key
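
For completeness, the secret referenced above can be created with kubectl; a minimal sketch (the literal value is, of course, illustrative):

Shell
kubectl create secret generic secret-name --from-literal=secret-key='S3cr3tp4ssw0rd'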

Container Resource Abuse 

Containers are much more numerous than virtual machines on average. They are lightweight and you can spawn big clusters of them on modest hardware. Software bugs, design miscalculations, or a deliberate malware attack can easily cause a Denial of Service if you don’t properly configure resource limits. 

To add to the problem, there are several different resources to safeguard: CPU, main memory, storage capacity, network bandwidth, I/O bandwidth, swapping, etc. There are some kernel resources that are not so evident, and even more obscure resources such as user IDs (UIDs) exist. 

Core Practices: 

Limits on these resources are disabled by default on most containerization systems; configuring them before deploying to production is basically a must. There are three fundamental steps: 

  1. Use the resource limitation features bundled with the Linux kernel and/or the containerization solution. 
  2. Try to replicate the production loads on pre-production. Some people use synthetic stress tests, and others choose to "replay" the actual real-time production traffic. Load testing is vital to knowing where the physical limits are and where your normal range of operations is. 
  3. Implement Docker monitoring and alerting. You don’t want to hit the wall if there is a resource abuse problem. Malicious or not, you need to set thresholds and be warned before it’s too late. 

Example: Control groups, or cgroups, are a feature of the Linux kernel that allow you to limit the access processes and containers have to system resources. We can configure some limits directly from the Docker command line, like so: 

Shell
# docker run -it --memory=2G --memory-swap=3G ubuntu bash 

This will limit the container to 2GB main memory, 3GB total (main + swap). To check that this is working, we can run a load simulator; for example, the stress program present in the Ubuntu repositories: 

Shell
root@e05a311b401e:/# stress -m 4 --vm-bytes 8G 

You will see a "FAILED" notification from the stress output. If you tail the syslog on the host machine, you will be able to read something similar to:

Shell
Aug 15 12:09:03 host kernel: [1340695.340552] Memory cgroup out of memory: Kill process 22607 (stress) score 210 or sacrifice child
Aug 15 12:09:03 host kernel: [1340695.340556] Killed process 22607 (stress) total-vm:8396092kB, anon-rss:363184kB, file-rss:176kB, shmem-rss:0kB

Using docker stats, you can check current memory usage and limits. If you are using Kubernetes, you can reserve the resources that your application needs to run properly and define maximum limits using requests and limits on each pod definition. Figure 1 below demonstrates how they interact:

[Figure 1: Kubernetes resource requests and limits]
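
In the pod definition, that looks like the following sketch (reusing the pod from earlier; the resource values are illustrative):

YAML
apiVersion: v1
kind: Pod
metadata:
  name: backend-identity-service
spec:
  containers:
  - name: backend-identity-service
    image: path/to/docker/image/backend-identity-service:1.0.3
    resources:
      requests:
        memory: "512Mi"
        cpu: "250m"
      limits:
        memory: "2Gi"
        cpu: "500m"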

Read-Only Mode for Filesystem and Volumes 

You should enable read-only mode to restrict writes to the filesystem or volumes. To achieve this, you can use the --read-only flag. For example:

Shell
docker run --read-only back-end 

If your app still needs to write temporary data, you can keep the root filesystem read-only and mount a writable in-memory filesystem at a specific path using the --tmpfs flag:

Shell
docker run --read-only --tmpfs /tmp back-end

Static Vulnerability Scanning 

Containers are isolated black boxes: If they are doing their work as expected, it’s easy to forget which software and version are specifically running inside. Maybe a container is performing like a charm from the operational point of view, but it’s running version X.Y.Z of the web server, which happens to suffer from a critical security flaw. This flaw was fixed long ago upstream, but not in your local image. This kind of problem can go unnoticed for a long time if you don’t take the appropriate measures. 

Core Practices: 

Picturing the containers as immutable atomic units is really nice for architecture design, but from the security perspective, you need to regularly inspect their contents: 

  • Update and rebuild your images periodically to grab the newest security patches. Of course, you will also need a pre-production testbench to make sure these updates are not breaking production. 
  • Live-patching containers is usually considered a bad practice. The pattern is to rebuild the entire image with each update. Docker has declarative, efficient, easy-to-understand build systems, so this is easier than it may sound at first. 
  • Use software from a distributor that guarantees security updates. Anything you install manually out of the distro, you have to manage security patching yourself. 
  • Docker and microservice-based approaches treat progressively rolling out updates without disrupting uptime as a fundamental requirement of their model.
  • User data is clearly separated from the images, making this whole process safer. 
  • Keep it simple. Minimal systems require less frequent updates. Remember the intro: Less software and fewer moving parts equal less attack surface and fewer updating headaches. Try to split your containers if they get too complex.
  • Use a vulnerability scanner. There are plenty out there, both free and commercial. Try to stay up to date on the security issues of the software you use by subscribing to mailing lists, alert services, etc.
  • Integrate this vulnerability scanner as a mandatory step of your CI/CD and automate where possible; don’t just manually check the images now and then. 

Example: There are multiple Docker image registry services that offer image scanning. For this example, we decided to use CoreOS Quay, which uses the open-source Docker security image scanner Clair. While Quay is a commercial platform, some services are free to use, and you can create a personal trial account.

Once you have your account, go to Account Settings and set a new password (you need this to create repos). 

Click on the + symbol on your top right and create a new public repo: 

[Image: Creating a new public repository in Quay]
We go for an empty repository here, but you have several other options, as you can see in the image above. 

Now, from the command line, we log into the Quay registry and push a local image: 

Shell
# docker login quay.io 
# docker push quay.io/<your_quay_user>/<your_quay_image>:<tag> 

Once the image is uploaded into the repo, you can click on its ID and inspect the image security scan, ordered by severity, with the associated CVE report link and upstream patched package versions. 

[Image: Quay image security scan results]

You can find additional detail in Lab 2, “Push the example container images to a container image registry.”

Alternatively, you can use tools like: 

  • Harbor, an open-source registry with built-in vulnerability scanning. Harbor is policy-driven, supports RBAC access rules, and integrates with OIDC providers.
  • Trivy, an open-source tool that checks your images, Dockerfiles, and even IaC scripts (including Kubernetes manifests and Terraform) for vulnerabilities. Trivy is quite easy to use: Install the binary and run the following command:
Shell
$ trivy image back-end-app:1.2.2 
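
To integrate the scan as a mandatory CI/CD step, as recommended above, you can fail the build on findings; Trivy supports this with exit codes and severity filters:

Shell
$ trivy image --exit-code 1 --severity HIGH,CRITICAL back-end-app:1.2.2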

If you use Kubernetes, you can combine one of these tools with kube-bench. That way, you cover not only the container side of things but Kubernetes configuration security as well.
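
A common way to run kube-bench is as a Kubernetes Job; a sketch follows (the manifest URL points into the kube-bench repository, so verify the current path against the project docs):

Shell
$ kubectl apply -f https://raw.githubusercontent.com/aquasecurity/kube-bench/main/job.yaml
$ kubectl logs job/kube-bench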

Section 4

Runtime Container Security

Docker Intrusion Detection

In the previous sections, we covered the static aspects of Docker security: vulnerable kernels, unreliable base images, capabilities that are granted or denied at launch time, etc. But what if, despite all of these measures, the image is compromised during runtime and starts to show suspicious activity?

All the previously described static countermeasures do not cover all attack vectors. What if your own in-house application has a vulnerability? Or attackers are using a 0-day not detected by the scanning? Runtime security can be compared to Windows antivirus scanning: Detect an existing breach and prevent it from penetrating further.

Core Practices: 

  • Do not use runtime protection as a replacement for any other static up-front security practices: Attack prevention is always preferable to attack detection. Use it as an extra layer of peace of mind. 
  • Having generous logs and events from your services and hosts — correctly stored, easily searchable, and correlated with any change you do — will help a lot when you have to do a post-mortem analysis. 

There are several tools that can help you detect and respond to these issues:

Wazuh is a free, open-source system that helps to detect and prevent threats. It contains not only integrated intrusion detection but also log analysis, log data analytics, vulnerability detection, cloud security, and many other security features. Wazuh has an agent that integrates into the Docker environment and helps to detect hidden files, unregistered network listeners, and cloaked processes. Wazuh contains the following components:

  • Server 
  • Elastic stack  
  • Agent 

It also supports platforms like Kubernetes, Chef, Puppet, and AWS CloudFormation.  

Let’s test and deploy Wazuh in an example Linux environment. First, install the prerequisites and add the Elastic APT repository:

Shell
sudo apt-get update
sudo apt-get install curl apt-transport-https lsb-release gnupg2
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list
sudo apt-get update

Next, let’s install the Wazuh manager, its API, and Node.js:

Shell
sudo apt-get install wazuh-manager
service wazuh-manager status
curl -sL https://deb.nodesource.com/setup_8.x | sudo bash -
sudo apt-get install nodejs
sudo apt-get install wazuh-api
sudo service wazuh-api status

Detecting and Blocking Attacks on Containers in the Cloud Environment

All leading cloud providers offer services that secure your containers at different layers. For example:

  • Secure container registries  
  • Secure Kubernetes setup 

Azure has a service called Azure Defender. It can scan containers in the Azure Container Registry and in Azure Kubernetes Service. 

Azure Defender for Docker containers in the Azure registries has several triggers. For example, containers can be scanned when Docker images are pushed to your registry. Below, you can see an image with the Azure Defender Analysis report: 

[Image: Azure Defender analysis report]
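
If you work from the CLI, the corresponding Defender plan can be enabled per subscription; a sketch (plan names have changed across Defender versions, so check the current az documentation):

Shell
$ az security pricing create --name ContainerRegistry --tier 'standard'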

Detecting and Blocking Attacks on Orchestrated Microservices 

Containers are the base building block of a service that is managed by popular orchestrators like Kubernetes or Docker Swarm. There are many cases where the same image will be used in different areas of your infrastructure, depending on the specific service that image is providing to an application — think load balancer images like HAProxy or Nginx. 

Often, teams apply standard security policies to an image and don’t differentiate at the application and orchestration layer, which can leave holes in the service by trying to protect everything via blanketed policies. This is where tight integrations with an orchestrator and the metadata they provide is needed to differentiate between the services that are running an image. 

Core Practices: 

  • Differentiate policies based on orchestration metadata and container labels for more granular control.
  • Take actions like killing or pausing a container when a policy has been violated. Often, the orchestrator will spin up a new container, thus bringing the environment to a safe state. 
  • Rely on orchestration metadata rather than trying to manually update container image hashes. 

Example: Let's look into preventing data exfiltration in Kubernetes with Istio. Istio is a service mesh platform that provides security capabilities for:

  • Containers   
  • Kubernetes  
  • Infrastructure as Code
  • Network 

First, you should install Minikube (or use cloud services like AKS) and Istio. Then follow this step-by-step guide from the IBM example repository. 
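
One Istio control that maps directly to data exfiltration is the mesh's outbound traffic policy: In REGISTRY_ONLY mode, sidecars block egress to any host not declared in the service registry. A sketch of enabling it at install time (assuming istioctl is on your path):

Shell
$ istioctl install --set meshConfig.outboundTrafficPolicy.mode=REGISTRY_ONLY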

Section 5

Incident Response

Post-Mortem Analysis 

Incident response becomes harder in container environments because of their ephemeral nature. A bad actor can delete a container after an attack to remove all traces of any file access or network activity. 

Core Practices: 

  • Capture network, file, system call, and user data from inside the container around any policy violation.
  • Commit containers that have policy violations so that they can be examined and run outside of production (see the sketch after this list).
  • Use JWT-based authentication for your Docker registry.
  • Set up additional security policies with admission controllers and Open Policy Agent.
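
For the commit-and-examine practice above, plain Docker commands are enough to freeze and preserve a suspicious container for offline analysis; a sketch (container and snapshot names are illustrative):

Shell
# docker pause suspicious-container
# docker diff suspicious-container
# docker commit suspicious-container forensics/suspicious-container:snapshot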

Example: Forensic analysis with the docker-forensics-toolkit. This is an open-source toolkit that performs offline analysis of a Docker environment based on disk images of the host. To test this tool:

Run the following command to mount the host image:

Shell
sudo python src/dof/main.py mount-image testimages/alpine-host/output-virtualbox-iso/packer-virtualbox-iso-*-disk001.vmdk.raw 

Then, execute tests with the following command:  

Shell
sudo pytest --image-mountpoint=/tmp/test-4-root-2 

This allows you to:

  • Gather information, metadata, and history for all images found on the host
  • Show image and container configuration files
  • Print container log files
  • Mount the container file system to any given location
Section 6

Conclusion

In this Refcard, we’ve walked through the core practices of security before containers even enter production environments all the way up to the more complex analysis of a rootkit installation. We’ve used components baked into the Docker runtime, as well as open-source tools like Quay, Forensics-toolkit, and Istio to analyze real-world use cases such as greedy containers or mapping network communication. 

As you can see, Docker security can start very simply yet grow more complex as you take containers into production. Get experience early and then grow your security sophistication to what your environment requires. 

