[This article was written by Eric.]
There have been lots of discussions during the past year about the security of Docker containers, but a majority of them seem to have focused on just one aspect of containers: isolation. Kernel namespaces (process isolation), control groups (resource isolation) and traditional virtualization comparisons (hypervisor isolation) have been hot topics this past year, and all discuss different aspects of the same core concept of isolation. Putting all your eggs in one basket has never been a good idea, and security professionals shouldn't let a hyperfocus on isolation distract from security basics.
Let’s diverge from that discussion and instead explore a cornerstone of computer security, vulnerability management, and why applying it to the security of containers makes sense.
Even though containers are pitched as “just the application and its dependencies,” there’s a whole lot more going on in a container than a single application. It’s the “and its dependencies” part which is the interesting bit. All containers inherit from a parent image, typically a base OS, and with that comes a host of dependencies like:
- a bash shell
- default users
- libraries and dependent packages
Security at the container level just got interesting.
Let’s try running a CentOS 5.11-based container from the official Docker repository. It is a fully supported base image, albeit one built from installation media, which means it is not receiving security updates. The repository description highlights this fact IN CAPS and suggests that you update the image yourself if you’re going to use it. There is a distinction here I’d like to highlight: the security of Docker itself is completely separate from the inherent security of publicly available images. It is the responsibility of end users to manage security at the container level, a prime reason why I think refocusing the security discussion back to the basics is important.
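If you do build on top of an image like this, the update step can live in your Dockerfile so that every rebuild picks up whatever packages are still being published. A minimal sketch, assuming a straightforward `yum -y update` is how you would apply updates (for an end-of-life release like CentOS 5, the updates actually available may be limited):

```
# Sketch: start from the stale base and apply updates at build time.
FROM centos:5.11

# Pull in available security updates, then trim the yum cache
# to keep the resulting layer small.
RUN yum -y update && yum clean all
```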
So, let’s get rolling and harness the power and ease of Docker! The commands we’ll use are:
```shell
docker pull centos:5.11
docker run -it --name "centos-5.11" centos:5.11 /bin/bash
```
Once the container is running, let’s install the Halo agent with a simple shell script.
```shell
#!/bin/sh
# add the CloudPassage repository
echo -e '[cloudpassage]\nname=CloudPassage\nbaseurl=http://packages.cloudpassage.com/redhat/$basearch\ngpgcheck=1' | tee /etc/yum.repos.d/cloudpassage.repo > /dev/null
# import CloudPassage public key
rpm --import http://packages.cloudpassage.com/cloudpassage.packages.key
# update yum repositories
yum check-update > /dev/null
# install the agent
yum -y install cphalo
# run cphalo
/opt/cloudpassage/bin/cphalo --daemon-key=abc123abc123abc123abc123abc123ab
```
Wow, that was pretty fast and simple. All the benefits of using a container materialized before your eyes! Once the Halo agent is installed, let’s look at those dependent packages from a vulnerability management perspective.
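At its simplest, looking at those packages from a vulnerability management perspective means comparing the versions baked into the image against the versions that fix known issues. A minimal sketch of that comparison using `sort -V`; the package names and version strings below are made-up placeholders, not real advisory data:

```shell
# is_older VER1 VER2 -> exit 0 if VER1 sorts strictly before VER2
# (sort -V understands rpm-style strings like "openssl-0.9.8e")
is_older() {
  [ "$1" != "$2" ] && \
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

# hypothetical check: flag an installed package older than the fixed build
installed="openssl-0.9.8e"
fixed="openssl-1.0.1g"
if is_older "$installed" "$fixed"; then
  echo "$installed predates $fixed: flag for review"
fi
```

In a real pipeline the installed side would come from `rpm -qa` inside the container and the fixed side from your vulnerability feed; the comparison logic stays the same.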
If application-based containers aren’t rebuilt from an updated image on a regular basis, vulnerabilities will crop up in their dependencies. If your containers have shared volumes or are linked to other containers (as most container-based deployments surely will be), a security focus limited to isolation seems too narrow.
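One way to make “rebuilt on a regular basis” concrete is to schedule the rebuild. A sketch of a crontab entry, where the image name `myapp` and the build context path are hypothetical and the base tag is the one used earlier:

```
# Hypothetical crontab entry: re-pull the base image and rebuild the
# application image from scratch every night at 02:00.
0 2 * * * docker pull centos:5.11 && docker build --no-cache -t myapp /srv/myapp
```

The `--no-cache` flag matters here: without it, Docker may reuse cached layers and skip the package-update steps entirely.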
Isolation appears to have become an all-or-nothing decision point on whether containers are a viable virtualization option, when a much broader discussion of risk and mitigation is needed. The conversation should be broader in scope from the beginning and encompass the basics, like vulnerability management and everything else that SDSec brings to the table.