To understand the current and future state of containers, we gathered insights from 33 IT executives who are actively using containers. We asked, "What do developers need to keep in mind when working on containers?"
Here's what they told us:
- Don’t be afraid of failure. Failure is the best teacher, and it’s OK to fail. Pick yourself up and develop and secure better. Be mindful of data security at rest and on the wire, at the host and application level, and of who has access to the containers. Failing will be painful, but learn from it and make things better.
- Security, storage, and networking are the three things to be concerned with when working with containers. When designing a containerized ecosystem, close security holes and make sure networking and compute can scale. Build to work seamlessly in the cloud and on-prem.
- 1) Developers should be more cognizant of the blast radius of what they are doing in containerized environments. 2) Developers should also adopt tools that give them more insight into what they are doing within a container-based workload by embracing security tools that perform continuous analysis on all activities within a container environment at a granular level. Having a pulse on all these activities will help harden your security posture to a level that doesn’t get you in the news.
- Learn Kubernetes (K8s) since that's where the code will run. Understand the security and privacy implications of the data you are manipulating.
- As a security practitioner, I would hope developers keep in mind that nobody is perfect. The rapid time to market that containers further enable must be used responsibly. Proper configuration, visibility, and deployment steps must be taken to ensure containers are adopted securely and trusted by customers.
- 1) When taking legacy applications to containers (or developing new applications), developers need to re-architect their applications to run in and take advantage of the highly distributed environment containers provide. Many legacy applications and software products were designed to run in a scale-up architecture and are delivered as a single stack of tightly coupled software (e.g. a single binary executable). In order to leverage the distributed nature and scale-out benefits containers offer, developers need to redesign and break apart individual components and services so that they can run, scale, and be upgraded independently. 2) Developers and DevOps deployment engineers will also need to consider how best to secure their containers. Containers share an OS kernel and require root-level authorization (in Linux environments) to run and perform such tasks as accessing persistent storage. Attackers have the potential to extend their threat beyond the container and into the underlying OS and other containers if container security is not implemented.
- As more companies rely on containers, developers need to be cognizant of the security of containers from the beginning. The more they can adopt frameworks and guardrails, the more they will help the security teams manage the security of containers.
- 1) Containers do not contain; each container should be treated as if it were an individual operating system, which means that, over time, they too will become susceptible to security vulnerabilities and other bugs related to the specific pieces of software running in each of these containers. Most container runtimes just share the kernel between each of the containers without providing any additional isolation, which means that if one container is exploited, the probability of other containers and the underlying host being exploited is quite high. 2) There are ways to mitigate these problems: CI/CD pipelines with automated tests can be used to rebuild, test, and re-deploy your application automatically with the latest fixes. Container runtimes like Kata can also help mitigate problems as they provide more isolation for containers by using the functionality of the underlying hardware (e.g. CPU virtualization instructions). Third-party products can help when it comes to security scanning, not just during deployment but at the development stage as well. 3) Container incompatibility: despite the nice portability of containers, in some circumstances there may be incompatibilities between your container and the underlying operating system or container PaaS you are using. For example, if your application within the container invokes or relies on kernel functionality that is not yet available in the kernel provided by the underlying host OS or PaaS, you will most likely see issues. 4) We recommend reviewing the twelve-factor application design documentation and reading a book on microservice design, such as Building Microservices: Designing Fine-Grained Systems by Sam Newman. Platforms like K8s are designed around these architectures, so gaining a good working knowledge of them is recommended.
- Don't always look for what is faster and easier. Visual Basic was cool when I was in college, but it died out. Now everyone is doing Node.js development, which is more complex. With open source making things so prolific, be careful to weigh hype and fashion against actual technical tradeoffs and engineering understanding. Bring in process engineering best practices and refine your processes. Start to understand the cost of process engineering. Strike a balance between the right tools and freedom, but make choices based on process engineering rather than fashion statements.
- Look for examples. Traditionally, for a particular programming language, there is always a “HelloWorld” tutorial. Look for a “HelloWorld” tutorial for K8s, Docker, and Docker Swarm. Learn how to organize an application to optimize for containers and orchestration guidelines like the 12-Factor application methodology. The more the application conforms to the guidelines, the easier it will run on the orchestration framework.
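As an illustration of what such a container “HelloWorld” looks like in practice, here is a minimal Dockerfile sketch (the base image, file name, and layout are illustrative assumptions, not from the interviews):

```dockerfile
# Minimal "HelloWorld" image: small base, one copied file, one foreground process
FROM python:3.12-slim
WORKDIR /app
COPY hello.py .
# A container should run a single process in the foreground
CMD ["python", "hello.py"]
```

Building and running it (`docker build -t hello . && docker run hello`) is the container equivalent of compiling and running a first program, and a good base from which to try the same exercise on K8s or Docker Swarm.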
- Understanding that containers are immutable is important, and the fact that the entire architecture needs to be twelve-factor. It’s not just "lift and shift" your applications and you’re done. Developers need to understand the increased complexity of configurations containers introduce.
- Know "The Twelve-Factor-App" by heart. Practice this methodology!
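To make one of the twelve factors concrete: factor III says configuration lives in the environment, not in code, so the same image can run unchanged in any environment. A minimal sketch in Python (the variable names and defaults here are illustrative assumptions):

```python
import os

def load_config():
    """Read service configuration from environment variables (12-factor, factor III).

    The defaults are illustrative; in a real deployment the container
    runtime or orchestrator would inject these values.
    """
    return {
        "database_url": os.environ.get("DATABASE_URL", "postgres://localhost/dev"),
        "log_level": os.environ.get("LOG_LEVEL", "INFO"),
        "port": int(os.environ.get("PORT", "8080")),
    }

config = load_config()
print(config)
```

Because nothing environment-specific is baked into the image, promoting a container from staging to production is a matter of changing environment variables, not rebuilding.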
- Have intermediary milestones. There is danger in trying to absorb an elaborate, intense multi-node system in one fell swoop. It doesn’t happen like that. Understand what it does and does not do. Pause and fully understand before you take the next step. Learn about how the system behaves and performs. Understand what happens with container and pod management at a larger scale so you can anticipate what is going to happen.
- Think about real-time and distributed logging and tracing to enable better debugging and monitoring in production environments.
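One common approach to this (a sketch of one option, not the only way) is to emit structured JSON log lines that carry a correlation/trace ID, so that logs from many containers can later be joined per request by a log aggregator; the field names below are illustrative assumptions:

```python
import json
import time
import uuid

def log_event(message, trace_id=None, **fields):
    """Emit one structured JSON log line to stdout for a log collector to pick up."""
    record = {
        "ts": time.time(),
        # The trace_id lets an aggregator stitch together one request's
        # logs across many containers and services.
        "trace_id": trace_id or str(uuid.uuid4()),
        "message": message,
        **fields,
    }
    print(json.dumps(record))
    return record

# Example: tag every log line for one request with the same trace_id
req_id = str(uuid.uuid4())
log_event("request received", trace_id=req_id, path="/orders")
log_event("db query finished", trace_id=req_id, duration_ms=12)
```

Logging to stdout (rather than to files inside the container) also matches how container platforms typically collect logs.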
- 70% of companies are fully on K8s, 25% are evaluating. Kubeless (FaaS) is a great way to get access to containers since it simplifies K8s even further and hides everything in the background. The continued success of CI/CD pipelines keeps things going.
- Containers are lightweight and purpose-built. A container should do one thing. Containers require a design philosophy similar to microservices.
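As a hedged sketch of that “one thing per container” philosophy, a small docker-compose fragment might split an application into single-purpose services (the service and image names here are hypothetical):

```yaml
# Illustrative only: each container has exactly one job
services:
  web:
    image: example/web:latest      # serves HTTP requests
    ports: ["8080:8080"]
  worker:
    image: example/worker:latest   # processes background jobs
  redis:
    image: redis:7-alpine          # shared cache/queue
```

Each piece can then be scaled, upgraded, or replaced independently, which is the microservices design point the quote is making.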
- If you are a coder working in an IDE like Eclipse or Visual Studio, you don’t have to worry about containers too much. Containers pick up in the build-and-release DevOps process. If you write Chef or Puppet deployment scripts, look at Docker and K8s. Docker is easy to learn. The K8s learning curve is a little steeper, but tools are coming out to make it easier.
- Think more broadly. Containers are necessary but not sufficient. Get the right match for the workload. Don’t use technology for technology's sake.
- Just like the tooling, look at which technologies work within containers with a smaller profile, like Go. Look for a more streamlined approach to move between environments.
- Most developers, when starting a new application, don’t need to start with a container, depending on the problem they are trying to solve. Technology is just an enabler. Developers want to get the idea out quickly to test and learn rather than adopt an architecture they don’t need yet. Be smart about what technology and tooling to employ based on your idea. A traditional VM may be fine until you reach a certain scale.
- Keep it simple. Watch what’s installed in the container. A container is supposed to be stateless. Have accurate error codes.
- 1) Read the documentation, test the performance, and then read the documentation again. 2) Break your container scheduler (K8s, Docker Swarm, etc.) and see what happens (e.g. turn off etcd: do your pods still work? Do your load balancers still do their job? What happens with autoscaling?)
- Challenge the status quo. Don’t package up old applications in containers and expect them to work substantially better. In some cases, they might, but it’s always important to question why you’re taking this approach. Could you improve customer experience or supportability using containers, but also by changing a part of your architecture? These types of questions are very important to ask while adopting containers, along with refactoring or modernizing applications.
- Developers have previously been inhibited by container adoption due to challenges with packaging applications and their dependencies—including definitions, configurations, metadata, security keys and more. As a result, customers have found themselves spending time configuring their environments for the software to work together. Being able to streamline managing jobs and automating the deployment of ready-to-run clusters on-premises or on a customer’s choice of public clouds is key to using and deploying containers in HPC environments. Likewise, having an effective workload management solution in place is critical to improving efficiency in HPC, analytics, and AI.
- Developers working on container-based applications need to approach design and architecture with microservices in mind. A best practice is ensuring a microservice and container have only a single function. It’s also important to make sure data is fully accessible within the distributed system (where microservices and containers scale up and down dynamically). And, because containers undergo rolling updates, it’s important communication between applications be as stateless as can be achieved.
- Demystify it; there are a lot of misconceptions about what containers are and how they work. Get hands-on experience with containers. Take a simple application and try packaging it with and without Docker. Get a greater understanding of the problems containers really solve.
- 1) Remember it’s people first. Orchestration and microservices are benefits, but switching to CI/CD starts with making sure the organization is set up properly to support service-level development. Start at the cultural level to be more successful. 2) Distributed systems are not easy. Clients have hundreds or thousands of services; debugging and securing such an infrastructure is not trivial. Automate, but make sure that when things go wrong you can understand what happened.
- As they design and develop their applications, developers need to keep in mind that resources are dynamic and can frequently change—storage and processing resources can be added in very granular amounts, making it crucial that applications are designed for simple, elastic scaling and that applications are written to be aware of the resources available to them at all times. In addition, the burden on application developers to monitor and react to infrastructure failures is reduced by modern container management solutions, provided that applications are designed to operate smoothly throughout the recovery process handled by container infrastructure.
Here’s who we spoke to:
- Tim Curless, Solutions Principal, AHEAD
- Gadi Naor, CTO and Co-founder, Alcide
- Carmine Rimi, Product Manager, Canonical
- Sanjay Challa, Director of Product Management, Datical
- OJ Ngo, CTO, DH2i
- Shiv Ramji, V.P. Product, DigitalOcean
- Antony Edwards, COO, Eggplant
- Anders Wallgren, CTO, Electric Cloud
- Armon Dadgar, Founder and CTO, HashiCorp
- Gaurav Yadav, Founding Engineer Product Manager, Hedvig
- Ben Bromhead, Chief Technology Officer, Instaclustr
- Jim Scott, Director, Enterprise Architecture, MapR
- Vesna Soraic, Senior Product Marketing Manager, ITOM, Micro Focus
- Fei Huang, CEO, NeuVector
- Ryan Duguid, Chief Evangelist, Nintex
- Ariff Kassam, VP of Products and Joe Leslie, Senior Product Manager, NuoDB
- Bich Le, Chief Architect, Platform9
- Anand Shah, Software Development Manager, Provenir
- Sheng Liang, Co-founder and CEO, and Shannon Williams, Co-founder, Rancher Labs
- Scott McCarty, Principal Product Manager - Containers, Red Hat
- Dave Blakey, CEO, Snapt
- Keith Kuchler, V.P. Engineering, SolarWinds
- Edmond Cullen, Practice Principal Architect, SPR
- Ali Golshan, CTO, StackRox
- Karthik Ramasamy, Co-Founder, Streamlio
- Loris Degioanni, CTO, Sysdig
- Todd Morneau, Director of Product Management, Threat Stack
- Rob Lalonde, VP and GM of Cloud, Univa
- Vincent Lussenburg, Director of DevOps Strategy; Andreas Prins, Vice President of Product Development; and Vincent Partington, Vice President Cloud Native Technology, XebiaLabs