Securing Containers

Configuration, automation, best practices, and TLS are just four of the more than a dozen suggestions offered here.

To understand the current and future state of containers, we gathered insights from 33 IT executives who are actively using containers. We asked, "How are you securing containers for orchestration, deployment, and ongoing operation?"

Here's what they told us:


  • Security of the container is done through configuration. Recognize that security has to scale out across the enterprise. Push for best practices such as communicating over HTTPS. We’ve done things through configuration to encrypt and store data and credentials so they fit with other enterprise solutions. You need to discover the realities of customer backends receiving non-secure traffic, and be cognizant that clients may have slower patching practices.
  • Securing containers is not only a matter of securing the containers themselves, but also securing the application that is running in the container and securing the process of deploying the containers. These last two aspects are often forgotten but crucial for knowing exactly what is running where. By integrating code scanning and container scanning, and by hardening your platform, you can ensure a level of security because your existing application tools and container scanning tools become part of your CD pipeline.
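The build-time gate described above can be sketched in a few lines. This is a minimal illustration, not the output format of any particular scanner: the findings list and severity names are assumptions made for the example.

```python
# Minimal sketch of a CI gate that fails a build when an image scan
# reports findings at or above a severity threshold. The findings
# structure is a simplified assumption, not a real scanner's output.

SEVERITY_ORDER = {"LOW": 0, "MEDIUM": 1, "HIGH": 2, "CRITICAL": 3}

def scan_gate(findings, fail_at="HIGH"):
    """Return the findings severe enough to fail the build."""
    threshold = SEVERITY_ORDER[fail_at]
    return [f for f in findings
            if SEVERITY_ORDER[f["severity"]] >= threshold]

findings = [
    {"id": "CVE-2018-0001", "severity": "LOW"},
    {"id": "CVE-2018-0002", "severity": "CRITICAL"},
]

blocking = scan_gate(findings)
if blocking:
    # In a real pipeline this branch would exit non-zero to fail the stage.
    print("build failed:", [f["id"] for f in blocking])
```

In a real pipeline the same logic sits behind whatever scanner the team runs; the point is that the threshold check happens at build time, before anything reaches the registry.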


  • We recommend using a holistic solution. Automate secure SDLC practices as well as CI/CD. This forces vulnerability scanning of container images. Think about how to monitor and profile for best practices. Use security benchmarks to facilitate profiling and reduce exposure. Pay more attention to mistakes as a result of the speed of doing business today.
  • Our tools are a little further down the pipeline. It's a challenge to let developers create whatever container they like on a laptop and ship it to production. How do you know how the container was built and whether a patch rebuild/redeploy is needed? You need to automate the packaging and construction of the containers themselves and define it as code so you are able to rebuild from scratch if necessary. Ensure only trusted Docker containers are deployed into production. We only allow the deployment of Docker containers from our internal registry.
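The internal-registry rule above amounts to checking where an image reference points before it is deployed. A minimal sketch, assuming a placeholder registry hostname (`registry.internal.example.com` is invented for the example):

```python
# Sketch: allow deployment only for images from an internal registry.
# The registry hostname below is a placeholder, not from the article.

ALLOWED_REGISTRIES = {"registry.internal.example.com"}

def image_registry(image_ref):
    """Extract the registry host from a Docker image reference.

    Docker treats the first path component as a registry only when the
    reference contains a slash and that component has a dot, a colon,
    or is "localhost"; otherwise the image resolves to docker.io.
    """
    if "/" not in image_ref:
        return "docker.io"
    first = image_ref.split("/", 1)[0]
    if "." in first or ":" in first or first == "localhost":
        return first
    return "docker.io"

def deployment_allowed(image_ref):
    return image_registry(image_ref) in ALLOWED_REGISTRIES
```

With this in place, `deployment_allowed("ubuntu")` is rejected because a bare name resolves to the public registry, while fully qualified internal references pass.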

Best Practices

  • Although containerization makes things easier, it still needs to be secure. Be mindful of securing the host. Isolate the containers. This remains the number one consideration for development, deployment, and orchestration.
  • Follow best practices, keeping things lightweight and simple. Look at potential vulnerabilities – SSL is what we look at now. Companies are picking up cloud-native faster than cloud-native security services. 
  • Make sure the data is encrypted and secured from end-to-end. Integrate with existing key management systems. 
  • This is a gap for many organizations. We are certainly intrigued by companies such as Sysdig and Aqua Security, and there is a clear benefit to using these tools. You need to protect environments across development, build, and runtime to ensure compliance with whichever framework applies to your business. It goes beyond simply selecting a good tool. We recommend the “Shift-Left Security” approach, and we are starting to see that more in the marketplace. Teams need to avoid leaving security out of the discussion. Security experts should be brought in from the design phases all the way to production, rather than designing and building something, failing an audit before going live, and setting yourself back six weeks for remediation.
  • 1) Build vulnerability management and scanning into the CI/CD pipeline, and fail at build time. 2) When deploying, apply best practices and build best practices out of the box. 3) Follow configuration management and network policy: wind down privileges, resources, and users, and use network layers 3 and 7 for access control management. 4) Use runtime post-breach detection such as forensics, investigation, and automated feedback loops. There is value in additional context and in continuous hardening from build and deploy into runtime, with all of the policy logic and configuration living in the client’s infrastructure. Security tools become an extension of your infrastructure.
  • Depending on the environment, container images need to be signed. Secure data at rest and data in motion, role-based access and running as non-root are common security guidelines.
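The "running as non-root" guideline above is commonly enforced as a pre-deployment check. The sketch below mirrors the Kubernetes `securityContext` field names, but the checker itself is illustrative, not part of any real admission controller:

```python
# Illustrative check for the "run as non-root" guideline. The pod-spec
# dict mirrors Kubernetes securityContext fields; the checker is a
# sketch, not a real admission hook.

def runs_as_non_root(pod_spec):
    """True only if every container is explicitly non-root.

    A container passes when its own securityContext, or the pod-level
    one, sets runAsNonRoot to True or runAsUser to a non-zero UID.
    """
    pod_ctx = pod_spec.get("securityContext", {})
    for container in pod_spec.get("containers", []):
        ctx = {**pod_ctx, **container.get("securityContext", {})}
        if ctx.get("runAsNonRoot") is True:
            continue
        if ctx.get("runAsUser", 0) > 0:
            continue
        return False
    return True
```

Note the default-deny posture: a container with no security context at all fails the check, which matches the spirit of the guideline.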


  • Best practices for securing containers span multiple areas. For network communication between containers, TLS is highly recommended; ensure containers only communicate using TLS. Certificates and private keys should be passed to containers using secrets, and secrets should be stored in a safe place for the long term. A common tool for this is Vault. Kubernetes Network Policies also help with security by giving the user a way to describe secure boundaries between groups of containers. It also matters how you construct a Docker image: software from many sources ends up in a single image, which may contain hundreds of libraries, and a security vulnerability may be found in any one of them. It's important to ensure images are up to date and do not contain known vulnerabilities. Tools such as Twistlock and the open source Harbor project help DevOps detect images that are out of date or have vulnerabilities. Store images in a secure and trusted way, and scan their content for vulnerabilities.
  • Container images are created using the Linux package security update mechanism to ensure images include the latest security patches. Further, the container image is published to the Red Hat Container Catalog which requires these security measures to be applied as part of the publishing process. In addition, domain and database administrative commands are authenticated using TLS secure certificate authentication and LDAP, as well as domain meta-data, application SQL commands, and user data communications are all protected using the AES-256-CTR encryption cipher. 
  • TLS client or Candid-based authentication for driving it remotely. RBAC is on the roadmap. Unprivileged containers, by default, run in isolated mode (no UID correspondence, no overlap between UIDs), with SECCOMP/AppArmor, MAC isolation filtering, cgroups, and limits on CPU and bandwidth usage (both network and storage). By leaving zero visibility from the container to the host, there is no attack surface at all.
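The Kubernetes Network Policies mentioned above describe secure boundaries declaratively. A minimal sketch, with all names and labels as placeholders: only pods labeled `app: frontend` may reach pods labeled `app: backend`, and only over the TLS port.

```yaml
# Illustrative NetworkPolicy: restrict ingress to backend pods so that
# only frontend pods can connect, and only on port 443 (TLS).
# All names and labels are placeholders for the sketch.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 443
```

Once a pod is selected by any NetworkPolicy, traffic not explicitly allowed is dropped, which is what makes this a boundary rather than just a firewall rule.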


  • Out of the box, containers provide isolation between one another, and to secure them even further we use a policy-driven lockdown of each node on which the containers are running.
  • The most basic image inside of containers will have certificates in place. A savvy systems administrator will provide the image to be used and disable services that shouldn’t be running.
  • Security is one of the great opportunities in the container platform. K8s is a powerful yet complex platform. We ensure the user configures K8s' underlying infrastructure correctly and that applications are configured correctly. We also centralize multi-cluster management and policy management, and we are able to enforce which workloads run on which resources.
  • The microservice firewall is the core of what we do. We run policy-based cluster level scans that analyze deployment files, configuration and even access privileges to maximize governance and minimize risk. We also employ isolation at the microservice level to define rules on code that specifies the third party tools we use (such as Slack and Stackdriver). Our threat intelligence layer provides always-on early detection against malware, crypto mining and other types of attacks, detecting anomalies whether they are internal, or at the edge of external microservices.
  • Containers all run in virtual networks and have multiple security models on top of each service. We use Azure keys, API managers, and serverless functions with a job-based token-driven security model.
  • A bunch of people are creating and shipping a dependency tree. We walk through problems to understand the security of containers, realizing the need to manage dependencies. Think through the lifecycle, runtime, and the entire container platform, considering how they are configured and how they meet FIPS-mode compliance. There’s a typical set of criteria you need to think about. Don’t forget about the lifecycle of dependencies on the development side.
  • Launch containers via a private registry so images are secure and not pulled from a public repo. Make and test add-ons to scan containers before they are launched in production. Today our customers are able to install it themselves, making it easy to have all of the security out of the box.
  • The K8s environment is AKS so it’s behind the firewall. There is one point of access into a cluster. Everything is self-contained. This reduces threat vectors since everything is separated by the network within the container environment.
  • If you’re able to successfully manage container-based applications, security lends agility to the process. Containers have helped reduce the friction of handing someone a container to run tests. Containers are much more agile; it's easier to get these things earlier in the SDLC. Containers reduce friction for copies and tests, and help you avoid collisions, such as two teams trying to use the same environment at the same time. Containers enable you to set up multiple test environments rather than forcing people to get in line for access, so you don't lose time just waiting. They help you get rid of small snowflake environments and free release managers to meet the needs of the people in the organization.
  • It comes down to knowing what you’re doing and observing the principle of least privilege. Only install things that need to be there. Know why every port is open.
  • I think the challenge with container security, for us and across the board, is to build similar levels of security assurance as businesses have spent the last decade or so doing with virtual machines.
  • Currently, we provide visibility into Docker, K8s, AWS EKS, and AWS ECS. We’re constantly working to ensure our customers can deploy and leverage containers in a secure manner. We’re also keeping our eye on the horizon, monitoring myriad new technologies in the space to ensure our customers can adopt them as securely and quickly as possible.
  • Security is always top-of-mind for us and has been vastly improved in the last couple of years. When it comes to securing containers, the root issue has now been addressed in Docker and is not an issue with Singularity.
  • By their nature, containerized environments and microservices offer a sizeable attack surface, as well as dynamic internal container-to-container communications that can allow attacks to escalate if not detected and thwarted. Therefore, securing these environments means establishing effective container network security and host security, and carefully monitoring container traffic. Developers need to safeguard their environments along multiple vectors – specifically, they should leverage security technology that features layer 7 inspection to recognize potential issues at the application layer. Data loss prevention is also an increasingly critical container security topic, as production container environments handling personally identifiable information (PII) become more common and must comply with industry and governmental regulations that enforce proper handling of any sensitive data.
  • Build, run, respond. Make sure the full lifecycle of the container is taken care of. Offer solutions to scan software and container images during the build phase. During the run phase, the runtime agent is able to detect intrusions. When something happens, it is able to collect forensic information, including all the data needed to analyze the blast radius and get an idea of the situation.
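The principle of least privilege raised above ("only install things that need to be there, know why every port is open") can be expressed as a pre-deployment audit. The spec fields below loosely follow Docker/Kubernetes conventions, but the format and checker are assumptions made for the sketch:

```python
# Illustrative least-privilege audit of a container spec: flag privileged
# mode, added Linux capabilities, and ports outside an allowlist. The
# spec format is an assumption for this sketch.

def audit_least_privilege(spec, allowed_ports=frozenset({443})):
    """Return a list of human-readable least-privilege violations."""
    issues = []
    if spec.get("privileged"):
        issues.append("container runs privileged")
    for cap in spec.get("cap_add", []):
        issues.append(f"added capability: {cap}")
    for port in spec.get("ports", []):
        if port not in allowed_ports:
            issues.append(f"unexpected open port: {port}")
    return issues
```

An empty result means the spec stays within the declared allowlist; anything else is a question the team should be able to answer before deploying.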

Opinions expressed by DZone contributors are their own.
