A year ago, I got started with Docker containers. Two or three months down the road, I realized that I had, thankfully, picked up a revolutionary technology, for reasons such as the following:
- Quick learning: Docker containers let me spin up an exploratory learning environment on demand for any tool, framework, or programming language I wanted to learn.
- On-demand self-service environments: Docker containers were used to set up on-demand, self-service dev/test environments. This was a huge productivity booster for developers and test engineers.
- Automated deployments: With Docker containers, Jenkins, a repository, and so on, I saw automated deployments being created on dev/testing/UAT servers.
At the same time, there were areas where I felt the need for a tool to do the following:
- Manage an app cluster: Scale apps across multiple Docker containers to meet demand. Say there is a data pipeline composed of Flume, Kafka, and Spark containers. To process a larger dataset, the pipeline needs to scale, which could be achieved by running multiple containers for each of Flume, Kafka, and Spark. In other words, start these apps as clusters, such as a Flume cluster passing data to a Kafka cluster.
- Container orchestration: Start and stop the multiple containers running an app to meet on-demand service requirements. Take, for instance, starting and stopping a Jenkins cluster to run CI jobs as required.
- Component repackaging: Sometimes, I felt the need to repackage existing apps and start them together to test different app configurations.
Requirements such as those above could easily be fulfilled by Kubernetes.
As I dove deeper into the world of Kubernetes, I realized that it is one of the coolest tools I have come across in recent times. It is surely a feather in the cap for DevOps professionals working with Docker containers.
The following are some of the key building blocks of Kubernetes. They simplified the way I set up multiple containers together, maintained a specific number of replicas at any point in time, and exposed those containers as a service.
- Pods: A pod is a set of one or more colocated and co-managed containers sharing the same namespace and volumes. Each pod gets an IP address, which can be used to access the app running within that pod.
- Services: A service provides a higher-level abstraction over pods. Because pods come and go and their IP addresses change, pods that rely on other pods reach them through the "service" abstraction, which offers a stable endpoint. Imagine a Kafka cluster exposed via a Kubernetes service.
- Replication controllers: A replication controller maintains a specified number of pod replicas at any point in time. That essentially means that if one or more pods terminate for any reason, the controller starts replacement pods to restore the desired count.
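To make these building blocks concrete, here is a minimal sketch of how they fit together, assuming a hypothetical nginx-based web app (the names, labels, and image below are illustrative, not from a real deployment):

```yaml
# A replication controller that keeps three replicas of an nginx pod running.
apiVersion: v1
kind: ReplicationController
metadata:
  name: web-rc
spec:
  replicas: 3            # desired number of pod replicas at any point in time
  selector:
    app: web             # pods carrying this label are managed by the controller
  template:              # pod template used to start replacement pods
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx
        ports:
        - containerPort: 80
---
# A service exposing those pods behind one stable endpoint.
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  selector:
    app: web             # routes traffic to any pod with this label
  ports:
  - port: 80
    targetPort: 80
```

Applying a file like this with `kubectl create -f web.yaml` would ask Kubernetes to keep three replicas of the pod running and expose them as one service; if a pod terminates, the controller starts a replacement to restore the count.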
With the emerging trend of cloud-native apps, in which containers and microservices form the key components, Kubernetes has been identified as a critical component for taking cloud-native app management to a different level altogether. As a matter of fact, CNCF.io has adopted Kubernetes as its first project serving cloud-native app requirements. And with Docker being the most popular containerization technology at this point in time, the pairing of the two is only going to make each stronger, since cloud-native configurations require a container technology and a container orchestration tool to work in tandem.