Securing Containers


Check out what these industry leaders and executives had to say about container security and best practices for keeping containers safe.


To gather insights on the current and future state of containers, we talked to executives from 26 companies. We asked, "What kind of security techniques and tools do you find most effective for securing containers?" Here's what they told us:

Security Policy

  • Do not have a separate security policy for containers. Access should be integrated into the directory. Permissions should be granular and applied at the client/customer level. Secure data granularly at the foundation, with granular access control and authorization expressions by department, case, security, and level. 
  • Security is better with pod security policies. Define which components can talk to each other, and manage policy across containers and clusters. There’s an ecosystem of innovation within the CNCF. 
  • Harbor container registry scans images for vulnerabilities while also providing image signing and validation. PKS provides network security by leveraging NSX (VMware networking product). Enhanced NSX to support containers. Micro-segmentation at a granular level provides policy on the types of traffic. NSX/PKS controls traffic flow between pods – pods represent microservices. 
  • Our containers reside behind network infrastructure and load balancers where we build network and security rules to protect them.
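The granular, attribute-based access control described above can be sketched as a simple policy check: permissions evaluated per department, case, and clearance level rather than through a separate container-specific policy. All names and attributes below are illustrative, not from any specific product.

```python
# Minimal sketch of attribute-based access control: grant access only
# when every expression (department, case, clearance level) matches.
# The attribute names here are assumptions for illustration.

def is_allowed(user: dict, resource: dict) -> bool:
    """Return True only when every attribute expression is satisfied."""
    return (
        user["department"] == resource["department"]
        and resource["case"] in user["cases"]
        and user["clearance"] >= resource["min_clearance"]
    )

# Example: an analyst with clearance 2 cannot read a level-3 record.
analyst = {"department": "fraud", "cases": ["c-101"], "clearance": 2}
record = {"department": "fraud", "case": "c-101", "min_clearance": 3}
```

Because every rule is expressed over user and resource attributes, the same check applies whether the workload runs in a container, a VM, or on bare metal, which is the point of not maintaining a container-only policy.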

Best Practices

  • Definitely securing the Docker daemon. Most people just expose the daemon and assume TLS is secure enough; what they forget is that TLS certs are portable, and anyone with the key has 100% access to Docker. In enterprise deployments, it is imperative to break the link between a user and the daemon, ensuring that only authorized users can complete authorized tasks against Docker. In addition, knowing what is happening in your environment is paramount: knowing what containers are running versus what containers you expect to be running is key to ensuring that you are not exposed to “crypto-jacking” exploits, where hackers gain access to an insecure Docker daemon and start Bitcoin miners on your Docker hosts. A real-time visualizer of running containers is key to ensuring operations teams spot differences. 
  • Apply security best practices. Budget for the cost of hiring the right security staff and getting the right tools. 
  • Secret management with HashiCorp Vault. No keys are stored in configuration; they go away when containers go down. Build 12-factor applications. Route all container traffic through a secure proxy micro-gateway. We put a trust-management layer on top for the last mile between containers. 
  • Multi-faceted private container registry on top of a bare-metal platform. Use a standard CI/CD platform across all of the teams. Conform to security standards. Have visibility into what’s going on, and the ability to own a private server. More DevSecOps, with security testing upfront. We use enterprise-grade Kubernetes clusters that are secure by default. Every worker node has default security characteristics. Turn on AppArmor policies. Do Nexus scans. Use Calico as a networking provider and to lock down nodes against public access, except for Kubernetes, for truly private worker nodes. All communications via PMS. Encrypt all images and data and give customers the encryption key. 
  • We have our own orchestration framework, but it needs more isolation and hardening on the host which is VMware on Amazon. As we move to Apache Mesos we will take security into account with the pass-through model. Make sure the host is hardened. Develop best practices around security. 
  • We rely upon multiple factors to keep our containers secure, from ensuring we’re following best-practices in running our services (e.g. ensuring services run as non-privileged users, ensuring our libraries are up-to-date) to security systems that keep attackers from accessing the system (e.g. firewalls, restricted access lists, intrusion detection systems). 
  • Microservices architecture ensures each service is secure. We employ the weakest link philosophy. Multiple services will need to be secure by themselves. Have a security mindset because we’re dealing with people’s money. Workload identity (credentials) versus authorization. Use certificates to authenticate services. 
  • Know if there are CVEs within the images. Let the customer know if containers are running rogue or running with vulnerabilities. If running as root, is the image writable? 92% of images run as root. People think containers are easy, but there are a lot of “gotchas.” Protect the host with the platform.
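Two of the checks described above — flagging images that run as root, and spotting containers that are running but were never deployed (the crypto-jacking scenario) — can be sketched in a few lines. The config dict below mimics the shape of `docker inspect` output; the helper names are illustrative, not a real API.

```python
# Sketch of two runtime-hygiene checks, assuming a docker-inspect-style
# image config dict. Helper names are assumptions for illustration.

def runs_as_root(image_config: dict) -> bool:
    """An image runs as root when its User field is unset, 'root', or '0'."""
    user = image_config.get("Config", {}).get("User", "")
    return user in ("", "root", "0")

def unexpected_containers(running: set, expected: set) -> set:
    """Containers present on the host but absent from the deploy manifest,
    e.g. a miner started through an exposed, insecure Docker daemon."""
    return running - expected
```

In practice the `running` set would come from the container runtime and the `expected` set from your orchestrator's desired state; any difference is exactly what a real-time visualizer should surface to the operations team.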


  • This is an application and dependency management problem. We focus on dependencies from a build-and-run perspective, looking backward at transitive dependencies. Visibility into OpenSSL and dlib: define where different packages are called and automatically update those calls in declarative builds. You can query the fleet to identify vulnerable code, as well as rebuild, deploy, and update the entire fleet.
  • Virtualization runs transparently on the hypervisor, which allows file systems to be easily shared with containers through the 9P protocol. People also invest in hardened Linux APIs; however, those are not fast enough, and you still need to run in a virtual machine to stay secure.
  • The CI/CD process runs open-source and in-house scans of containers for licenses at check-in/check-out. Start from the base image (Alpine) before deploying to ensure the copy is secure. The final stage is monitoring with ELK and Sysdig.
  • Containers provide a new virtualization layer, which naturally requires a new way to secure it. A security solution must be able to integrate seamlessly, be lightweight, run distributed, be accurate, respond in real time, and operate at cloud scale. It must be automated because the orchestration model for application containers is highly automated. The explosion of microservices brings a huge increase in east-west (internal) network communication, which increases the overall attack surface. The most effective security solution should cover multiple attack vectors for this new environment. These vectors include container network security, monitoring of activity within a container, and host security. For networking, because containers focus on application virtualization, the best network security technology for containers will use application-layer (layer 7) inspection to get the best intelligence.
  • Security is orthogonal to containers because securing containers is different than securing VMs and you will need to do both since you’ll be living in a hybrid environment for a while. You need to work with a common denominator and treat containerization independent of security.
  • Containers have a good security model and are isolated. You have an operating system layer with access and standards controls and encryption.
  • There are multiple layers: 1) the repository of container images; 2) the cluster of nodes; 3) the container layer; 4) the deployment layer; and 5) the container hosts. You need to ensure the entire landscape is secure, as well as all interactions.
  • Docker image-scanning tools, RBAC in Kubernetes, and credentials in Jenkins.
  • 1) New platforms like Twistlock inspect container images for vulnerabilities. 2) The ability to know and track the provenance of container images: when there’s a vulnerability or an upgrade is needed, it’s easier to change dependency settings and re-roll the image in the build system. The speed, agility, and nimbleness of containers help with their security.
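The "query the fleet to identify vulnerable code" idea above amounts to matching the package versions installed in each image against a CVE advisory feed. A minimal sketch, where the `Advisory` shape and the tuple-based version encoding are assumptions for illustration:

```python
# Sketch of CVE matching: report every installed package whose version
# predates the first fixed release. Versions are encoded as tuples
# (e.g. (1, 0, 2) for 1.0.2) so they compare correctly; the Advisory
# structure is an assumption, not any real scanner's format.

from typing import List, NamedTuple

class Advisory(NamedTuple):
    package: str
    fixed_in: tuple  # first version that contains the fix

def vulnerable_packages(installed: dict, advisories: List[Advisory]) -> list:
    """Return the names of installed packages still affected by an advisory."""
    return [
        adv.package
        for adv in advisories
        if adv.package in installed and installed[adv.package] < adv.fixed_in
    ]
```

Running this check across every node's package inventory gives exactly the fleet-wide query described above; the hits are the images to rebuild and redeploy.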



Opinions expressed by DZone contributors are their own.
