The Future of Containers


Containers continue to mature: adoption is rising, complexity is falling, and serverless is emerging.


To understand the current and future state of containers, we gathered insights from 33 IT executives who are actively using containers. We asked, "What’s the future for containers from your point of view? Where do the greatest opportunities lie?"

Here's what they told us:


  • We expect to see containers used with more technologies like AI, AR, and VR. There will be an explosion of adoption and innovation as people easily develop, deploy, and manage containers with AI, and greater compute power will let them do things more quickly.
  • We foresee greater adoption. Containers are already well established in organizations; CNCF surveys show 60 to 70% deployment. But the percentage of the total computing workload running on Kubernetes (K8s) is much lower, so there is tremendous room for K8s to take on more of the workload.
  • More and more companies will discover the benefits of containers, not only for building new applications but for refactoring existing ones and making effective use of capabilities like the horizontal scalability offered by the underlying platforms. Use of containers will go mainstream, with companies moving from talking about the cloud and containers to using them in production. At the same time, thinking around security and compliance will adapt.
  • Containers will offer a better story around state management in container orchestration and scheduling environments, along with better execution times to support use cases like serverless.
  • Container adoption will continue to grow. The ability to facilitate the rapid deployment of new technologies simply can't be ignored. The fast pace of their deployment and management, together with their ephemeral nature, will set a rapid tempo for the development of new features. Companies will be forced to keep pace with the rapidly changing technology environment containers enable in order to stay relevant. Areas such as security, orchestration, and development are all rife with opportunities for disruption!
  • Going forward, containers will serve as a key layer of enterprise application deployment and management infrastructure. The technology will only become more stable, standardized, and portable as it matures. I expect mature container technology will lend itself to use cases such as application intelligence, performance correlation, and more.
  • Like all good technology, containers will become boring. Solution providers will get better at their packaging and distribution story. There will be more learning around how to build trust into containers, ensuring they are not malicious, and how to prevent bloat.
  • We are seeing standardization around K8s for orchestration. This will accelerate the growth of the open-source and commercial ecosystem and drive tool development. We'll also see the stack mature, with consistent offerings from cloud vendors; Microsoft, Amazon, and IBM all support K8s. In five years, enterprises not running K8s and Docker will be in the minority.


  • Containers will continue to disappear into the background like any good technology. Tools make it easier to leverage technology. There will be a greater simplification in the deployment and use of containers.
  • Containers are a mechanism to build cloud-like applications, whether on-prem or in the cloud. They will be instantiated and provisioned as dynamically as possible. Containers will become easier to handle and scale, with no single point of failure and no single vendor.
  • Containers are making things less complicated and becoming the new norm. Developers want to build all new apps in containers. People need to change how they approach building from scratch: start by profiling the application so that, once released, it can be monitored from build to production and architected to scale.
  • 1) Today, K8s is not built with the app developer as its core persona. We need to make K8s easy for a developer to get up and running quickly. 2) We're seeing a trend toward abstractions built on top of K8s, like Knative and OpenFaaS, with serverless functions deployed on top of K8s that hide its knobs. As more projects mature and run natively, the technology becomes accessible to more developers. Only 28% of applications are running on containers; it's still early, and there is an opportunity to make the technology more approachable and usable.
  • Containers are still too complicated. If you compare the amount of knowledge a developer needs today, it's far more than five or six years ago because of the added levels of indirection. Five years ago, if you wanted to build a Python application, there were well-known standards. Now developers have to learn those as well as how to produce a Docker image, how to deploy on an orchestration system, how to pass config to the container, and all the details of security. Ultimately, developers will not have to deal with containers as higher levels of abstraction are built on top of them.
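The layers of indirection described above can be made concrete. As a rough sketch (all names here are illustrative, not from any real deployment), running even a simple service on K8s means owning an image reference, a Deployment, and a mechanism for passing config into the container:

```yaml
# Hypothetical example of the extra layers a developer now owns just to
# run a simple service on Kubernetes: config is externalized in a
# ConfigMap and injected into the container as environment variables.
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-app-config
data:
  LOG_LEVEL: "info"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          # The container image the developer must also learn to build and publish
          image: registry.example.com/my-app:1.0
          envFrom:
            - configMapRef:
                name: my-app-config
```

Each of these objects is one of the "details" a Python developer never had to think about five years ago, which is exactly the complexity the higher-level abstractions aim to hide.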


  • There is rock-solid proof of the value containers provide in terms of developer experience and velocity. However, there will surely be improvements in container security. In the future, we envision a more secure variant of containers running sandboxed in micro-VMs, as with Kata Containers or AWS Firecracker. Serverless functions will take a big chunk of work from traditional API applications.
  • There are two huge opportunities: the Operator Framework and how we describe automation. K8s has become the standard, and fast scripting works. Operators have the potential to work across environments, which is really powerful. We're always looking for the 80/20 tool. Operators are the new Swiss Army chainsaw after K8s and will be powerful for the next 10 to 15 years. Standardized application automation, as you move into K8s with a standardized YAML language, gets to a powerful place where we can reach a real service catalog. Serverless FaaS is pretty exciting, too, as it allows you to focus on just the logic of your app.
  • Containers are making it easy for everyone to go serverless. There's no need to rely on a machine with a VM; that's going away. It's easier to spin up and go serverless. Containers are going to improve over time, with more options to run more applications inside them. They will continue to change, improve, become more stable, and recover from failure more quickly, while returning big cost savings.
  • Serverless and FaaS are on the way. They aren't here yet, and we're not sure how to manage them. Higher levels of abstraction are helping, but they also mean smaller components in the system. We need to figure out what machine to operate on and what each machine's function is. As the pieces get smaller and smaller, you have to figure out how to manage them and know what's running where. Istio is a service mesh that will help track all of the components.
  • 1) At KubeCon 2018 last year (December 2018, Seattle, WA), the notion of "serverless" computing was one of the big topics and buzz pointing to the future of container innovation — the idea of building and deploying virtually any type of application without provisioning or managing servers to run those applications. In addition, users will pay based on a usage model, only paying for the compute time consumed, with no charge when their applications are not running. 2) Containers will eventually replace virtual machines (VMs). Containers offer significant advantages over VMs (e.g., reduced deployment costs, significantly faster startup, reduced machine footprint, and ease of use). As more companies and IT organizations use containers, there will be a large-scale migration from running applications in VMs to containers. 3) Container use will grow well beyond Docker containers as the primary container type. Competitive offerings will become more widely accepted and used. Docker, the market leader, has strayed from developing a standard container technology and is focusing more on developing and marketing a full-scale application development platform. This diversion will result in other container products growing in popularity and use.
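The pay-per-use serverless model described above is already visible in container-native platforms such as Knative, which scales a service down to zero replicas when it receives no traffic. A minimal sketch (the service and image names are illustrative):

```yaml
# Hypothetical Knative Service: Knative builds the Deployment, routing,
# and autoscaling around the container, scaling to zero when idle so
# nothing runs (and nothing is billed) between requests.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
        - image: registry.example.com/hello:1.0
```

Compared with a plain Deployment, the developer supplies only the container image; the platform owns provisioning, scaling, and request routing, which is the "no server management" promise in container form.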


  • Machine learning (ML) and artificial intelligence (AI) (Apache Mesos / YARN / Spark) have been gaining a lot of traction in recent years, with data scientists having access to all sorts of data from different channels. Orchestration and container technology will prove to be a catalyst here thanks to their inherent ease of distribution and scaling.
  • Containers are a major step on the path to increasing software reusability, i.e., creating components in containers that can be easily reused in different systems. This is the vision that CORBA and a hundred subsequent projects had, made progress toward, but never fully delivered. Containers are the next big step and have the potential to take us a long way toward that vision.
  • It's no secret that containers favor stateless applications. It is certainly possible to persist state; however, doing so has interesting implications. As we start to understand and innovate with containers, there is a great opportunity for new software design patterns to emerge; Istio for service mesh is one example. What effects might new infrastructure patterns have on existing application patterns? What new problems can we solve, particularly around state management?
  • Containers are being embraced widely in HPC environments as companies look to use cloud bursting to dynamically add capacity or to shift workloads into public clouds to employ ML algorithms running on GPUs, for example. HPC environments are quickly emerging as one of the first examples of how multiple container engines that support a common standard for running container images will be deployed side by side over time. In the future, it's probable that container engines optimized for specific classes of applications will be deployed alongside more general-purpose engines such as Docker.
  • Everything that runs in user space will run as an encapsulation with fewer packages, tools, and credentials, making it more difficult to attack the host. K8s will become a cloud-native operating system. We're seeing a lot of companies building edge and IoT devices look to K8s to manage deployments. K8s becomes the new operating system for all things distributed; unlike traditional environments, it is the agnostic layer across all environments.
  • The greatest opportunity for containers lies in coordination between containers and applications. With collaboration between infrastructure teams and application developers, applications can use their understanding of workload demands to leverage dynamic resource allocation and scheduling, delivering better and more efficient performance and scalability to meet application SLAs.
  • 1) There are many legacy applications in big companies that have yet to be ported to containers or any kind of microservices architecture. The benefits of 12-factor application design and a microservices architecture should give companies enough reason to move old workloads: simplified patching, application design, and delivery, decreased time-to-market, and many other advantages will yield huge cost savings if executed correctly. 2) Being able to port an application from one container PaaS or platform to another is very powerful. This gives a company or developer the ability to target multiple platforms with a single container release and potentially move from one supplier to another if required, which avoids lock-in. For example, if you're using Docker Swarm, it should be relatively easy to move your workloads to a K8s cluster, as long as you are not reliant on any functionality or features specific to that implementation. We see a big future for this in edge computing, where each edge deployment has very different hardware and scale: the portability of Docker and Ubuntu will help address this problem. 3) Over the last few years, we have seen growth in GPU-enabled workloads and applications, even those running in Docker containers, especially with the advent of vGPU technology. However, it is still very expensive. We believe we will see more workloads enabled by new FPGA devices, which can be used for specific algorithms (such devices can now perform Monte Carlo simulations with good performance) or workloads that empower the developer, but this has yet to go mainstream with K8s and containers. 4) The scheduler in K8s is still very basic and cannot yet provide application placement with the same level of configurability as a workload scheduler like Slurm. We think there is an opportunity for improvement as we see more and more customers interested in and attempting to use containers for HPC workloads.
  • There are many areas in which container-based software deployment could change the experience and the market. In the future, we might install software on our PCs and phones as containers. There are many other use cases as well, such as network function virtualization.
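The state-management question raised above already has one concrete answer in Kubernetes: the StatefulSet pattern, where each replica gets a stable identity and its own persistent volume rather than being fully ephemeral. A minimal sketch (names and sizes are illustrative):

```yaml
# Hypothetical StatefulSet: each pod (db-0, db-1, db-2) keeps a stable
# network identity and its own PersistentVolumeClaim, so state survives
# pod rescheduling — one emerging design pattern for stateful containers.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: db
          image: registry.example.com/db:1.0
          volumeMounts:
            - name: data
              mountPath: /var/lib/db
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```

Patterns like this are exactly the kind of infrastructure-level innovation the contributors expect to feed back into application design.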



Opinions expressed by DZone contributors are their own.
