5 Evolving Docker Technologies
Since the announcement of Docker approximately 18 months ago there has been an explosion of new technology in this space. Although the list is becoming very long, here I will outline five evolving Docker-related technologies that are driving the direction that cloud technology is going.
1) Kubernetes
At DockerCon this summer, Google's VP of Infrastructure, Eric Brewer, announced Kubernetes, which provides a way to orchestrate a collection of Docker containers across a cluster of machines. It is essentially a scheduler: it handles running your containers and ensures uptime, even when machines are lost.
We have seen rapid adoption of, and interest in, Kubernetes that goes beyond the buzz of it being a Google cloud technology. There is a real need for orchestration at the operations level, which Kubernetes addresses well. A manifest describing a collection of Docker images can be created and pushed into the cluster, which automatically deploys and horizontally scales those containers. Kubernetes also provides a way to define a "service" that can be consumed by other applications running in the cluster.
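As a sketch of what such a manifest can look like, the following declares a replicated set of containers and a "service" in front of them. The names, labels, and image are hypothetical, and the exact schema has varied across Kubernetes releases, so treat this as illustrative rather than definitive:

```yaml
# Hypothetical replication controller: asks Kubernetes to keep
# three copies of the "web" container running across the cluster.
apiVersion: v1
kind: ReplicationController
metadata:
  name: web
spec:
  replicas: 3                  # horizontal scale: the scheduler keeps 3 alive
  selector:
    app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: example/web:1.0   # assumed image name
        ports:
        - containerPort: 8080
---
# A "service" that other applications in the cluster can consume,
# without needing to know where the individual containers run.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web                   # routes to any container labeled app=web
  ports:
  - port: 80
    targetPort: 8080
```

Pushing this manifest into the cluster is what triggers the automatic deployment and scaling described above.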
2) Docker Pods
Hand-in-hand with Kubernetes, Eric Brewer also talked about containers and introduced the concept of "pods". This is a key concept within Kubernetes. He said, "At Google we rarely deploy a single container." Instead, they group containers together. For instance, an application process often has several side-car processes for logging and other tasks outside of the concern of the application itself.
One issue he noted with Docker containers is the need for constant mapping of internal and external ports - between what the process inside the Docker container sees and what the external world sees. This is an additional layer of complexity that needs to be managed, stored and queried - even between the containers of a pod that has been deployed as a single unit. Therefore, at Google, they ensure that every pod of containers has its own IP address. This means the ports used can be the same inside and outside of a container, and can be baked in at design or build time, doing away with the additional layer of complexity of managing port mappings. Now, to find pods running a particular service, you only need the list of IP addresses of those pods.
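A pod in this model might be declared as in the following sketch (container names and images are hypothetical): an application container plus a logging side-car, deployed as one unit and sharing the pod's IP address, so ports are fixed and no external mapping is needed.

```yaml
# Hypothetical pod: an application container plus a logging side-car.
# Both containers share the pod's network namespace and IP address,
# so each port is the same inside and outside its container.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-logging
spec:
  containers:
  - name: app
    image: example/app:1.0            # assumed image name
    ports:
    - containerPort: 8080             # fixed at build/design time; no host mapping
  - name: log-forwarder
    image: example/log-forwarder:1.0  # assumed side-car image
    # The side-car can reach the application at localhost:8080
    # because the two containers share one network namespace.
```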
Google Compute Engine is currently the only cloud infrastructure service that facilitates assigning an IP subnet to a virtual machine - and hence an IP to each Docker pod within it.
3) CoreOS Flannel
CoreOS, who are actively involved with Kubernetes, have attempted to solve this problem with something they call Flannel (previously named Rudder). Flannel provides an overlay network on top of the provided network, which allows an IP subnet to be assigned to each machine. There is a performance cost to doing this, but they hope it can be engineered away as Flannel evolves.
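Flannel reads its network configuration from etcd; a minimal sketch of that configuration (the address ranges here are illustrative) looks like this:

```json
{
  "Network": "10.1.0.0/16",
  "SubnetLen": 24,
  "Backend": {
    "Type": "udp"
  }
}
```

This would typically be stored under a well-known etcd key (historically `/coreos.com/network/config`), so that the flannel daemon on each machine can lease its own /24 subnet out of the /16 overlay range and hand pod IPs out of it - which is how each pod ends up with its own address even on clouds that, unlike Google Compute Engine, do not assign subnets to virtual machines.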
4) Docker For Windows
Recently, Microsoft joined the Docker bandwagon, saying it intends to build a containerization solution for Windows and provide a Docker-compatible API on top of it. Although Docker images are unlikely to ever be portable between Linux and Windows containers, it does mean that the tooling being built above the Docker API layer will be usable across both operating systems.
Large enterprises are heavily invested in Windows, so this announcement is a major win for the IT departments wanting to steer their ship in the direction of Docker adoption.
5) Cloud Foundry Diego
A major focus of ActiveState is the Cloud Foundry open-source PaaS project. We were among the first adopters of this project when VMware announced it. While our solution, Stackato, already uses Docker under the hood, we think the Docker integrations arising from the Diego project are bringing the rest of the ecosystem in the right direction.
Like Kubernetes, Diego is a scheduler, but unlike Kubernetes it is runtime environment agnostic and it is built to work primarily with Cloud Foundry. Docker is only one integration for Diego, but it is the sole focus of the developers working on Diego at IBM. We are also seeing Windows .NET integration with Diego from folks such as Uhuru and when Docker for Windows becomes a reality, I am sure we will see these technologies coalesce.
The PaaS integrations with Docker are very important for the future of Docker. Developers have been a major driving force in the rise of Docker. The ability to easily express the environment their applications should run in has been very powerful for them, and knowing that anyone can pick up those Dockerized applications and immediately run them has filled a void that previously had no solution. PaaS is an extension of that with an even greater focus on developers, whereas we are seeing other technologies move Docker away from developers and more into the realm of operations teams.
This post covers only the tip of the iceberg of the technologies in this ecosystem. New solutions are being announced almost every week, and with each iteration existing technologies bend and adapt to the new landscape. The Docker ecosystem itself is evolving, and evolving rapidly.
Published at DZone with permission of Phil Whelan, DZone MVB. See the original article here.