What Is Kubernetes?: The Container Orchestration Tool
Check out this brief introduction and explanation of how Kubernetes works as a container service, and how it's practically applied.
We all know how important containers have become in today's fast-moving IT world. Almost every large organization has moved away from the traditional approach of using virtual machines and has started using containers for deployment. So, it's high time you understood what Kubernetes is.
If you want to read more about the advantages of containers and how companies are reshaping their deployment architecture with Docker, then click here.
Kubernetes is an open-source container management (orchestration) tool. Its container management responsibilities include container deployment, scaling and descaling of containers, and container load balancing.
Note: Kubernetes is not a containerization platform. It is a multi-container management solution.
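To make those responsibilities concrete, here is a minimal sketch of a Kubernetes Deployment manifest. It is not from the original article; the names (`web`) and the `nginx:1.25` image are illustrative assumptions. It asks Kubernetes to keep three replicas of a container running, which is the "deployment" and "scaling" part of the job description above:

```yaml
# Illustrative Deployment: Kubernetes keeps 3 identical pods running
# and replaces them if they fail. Names and image are examples only.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3              # desired number of container replicas
  selector:
    matchLabels:
      app: web             # which pods this Deployment manages
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25  # the containerized app (built with Docker or similar)
        ports:
        - containerPort: 80
```

Note how the manifest only declares the desired state; Kubernetes itself does the work of creating, monitoring, and replacing the containers.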
Going by the definition alone, you might feel Kubernetes is ordinary and unimportant. But trust me, this world needs Kubernetes for managing containers as much as it needs Docker for creating them. Let me tell you why!
Why Use Kubernetes?
Companies out there may be using Docker, rkt, or plain Linux containers to containerize their applications. Whichever they choose, they use it on a massive scale. They don't stop at one or two containers in production; they run tens or hundreds of containers to load balance traffic and ensure high availability.
Keep in mind that as traffic increases, they have to scale up the number of containers to service the requests coming in every second, and scale the containers back down when demand drops. Can all this be done natively?
Honestly, even if it can be, it takes loads of manual effort to manage those containers. So the real question is: is it really worth it? Wouldn't automation make life easier? Absolutely it would!
That is why the need for container management tools is so critical. Both Docker Swarm and Kubernetes are popular tools for container management and orchestration, but Kubernetes is the undisputed market leader, partly because it is Google's brainchild and partly because of its better functionality.
Logically speaking, Docker Swarm might seem like the better option because it runs right on top of Docker, right? If I were you, that would have been my first question too. So, if you're thinking the same, read this blog on the comparison between Kubernetes and Docker Swarm here.
If I had to pick between the two, it would have to be Kubernetes. The reason is simply its auto-scaling of containers based on traffic needs; Docker Swarm is not intelligent enough to do auto-scaling.
This is the right time to talk about Kubernetes' features:
1. Automatic Binpacking
2. Service Discovery & Load Balancing
3. Storage Orchestration
4. Self-Healing
5. Secret & Configuration Management
6. Batch Execution
7. Horizontal Scaling
8. Automatic Rollbacks & Rollouts
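As a sketch of the horizontal-scaling feature from the list above, here is an illustrative HorizontalPodAutoscaler manifest (not from the original article; the names `web-hpa` and `web` are assumptions matching a hypothetical Deployment). It tells Kubernetes to add or remove pod replicas automatically as CPU load changes, which is exactly the scale-up/scale-down problem described earlier:

```yaml
# Illustrative autoscaling config: Kubernetes grows the "web"
# Deployment from 2 up to 10 replicas when average CPU passes 70%,
# and shrinks it again when demand drops. Names are examples only.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web            # the workload to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # target average CPU across pods
```

This is the kind of traffic-driven auto-scaling that, as noted above, Docker Swarm does not provide out of the box.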
These were some of the notable features of Kubernetes. Let me delve into the attractive aspects of Kubernetes with a real-life implementation and how it solved a major industry concern.
Case Study: Kubernetes at the Center of Pokemon Go's Evolution
I'm pretty sure everyone reading this blog has played this famous smartphone game, or has at least heard of it. I'm so sure because this game smashed every record set by gaming applications on both the Android and iOS markets.
Pokemon Go was developed by Niantic Labs and was initially launched only in North America, Australia, and New Zealand. Within just a few weeks of its worldwide release, the game reached 500+ million downloads with an average of 20+ million daily active users. These stats surpassed those set by games like Candy Crush and Clash of Clans.
Pokemon Go: Game Backend with Kubernetes
The app backend was written in Java combined with libGDX. The program was hosted on Google Cloud with the Cloud Bigtable NoSQL database, and this architecture was built on top of Kubernetes, making Kubernetes their scaling strategy.
Rapid worldwide iteration on updates was possible thanks to MapReduce, and in particular Cloud Dataflow, which they used for combining data, performing efficient MapReduce shuffles, and scaling their infrastructure.
The Actual Challenge
For most big applications like this, the challenge is horizontal scaling: adding more servers to service the increasing number of requests from multiple players and playing environments. But for this game in particular, vertical scaling was also a major challenge because the players' environment changes in real time. Each change also has to be reflected to everyone playing nearby, because presenting the same game world to all players is how the game works. So each individual server's performance and specs had to be scaled simultaneously as well, and this was the ultimate challenge that Kubernetes needed to take care of.
Not only did Kubernetes help with the horizontal and vertical scaling of containers, it exceeded engineering expectations. Niantic planned their deployment for a basic estimate, and the servers were ready for a maximum of 5x traffic. However, the game's popularity rose so much that they had to scale up to 50x. Ask engineers from other companies, and most will respond with server meltdown stories and how their business came crashing down. But not at Niantic Labs.
Edward Wu, Director of Software Engineering at Niantic, said:
"We knew we had something special on hand when these were exceeded in hours."
"We believe that people are healthier when they go outside and have a reason to be connected to others."
Pokemon Go surpassed engineering expectations by 50x and has managed to keep running despite its early launch problems. It became an inspiration and a benchmark for modern augmented reality games, inspiring users to walk over 5.4 billion miles in a year. The implementation at Niantic Labs thus made this the largest Kubernetes deployment ever at the time.
So, now let me explain the working architecture of Kubernetes.
Since Kubernetes implements a cluster architecture, everything works from inside a cluster. The cluster has one node acting as the "master," while the other nodes act as workers that do the actual work of running containers. Below is a diagram showing the same.
The master controls the cluster and the nodes in it. It ensures that execution happens only on the nodes and coordinates their activity. The nodes host the containers; in fact, the containers are grouped logically to form pods. Each node can run multiple such pods, and each pod is a group of containers that interact with each other to serve a deployment.
The Replication Controller is the master's resource for ensuring that the requested number of pods is always running on the nodes. The Service is an object on the master that provides load balancing across a replicated group of pods.
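The two objects just described can be sketched as manifests. This is an illustrative example, not from the original article; the names (`web-rc`, `web-svc`) and the `nginx:1.25` image are assumptions. The Replication Controller keeps three pods alive, and the Service load balances across whichever pods carry the `app: web` label:

```yaml
# Illustrative ReplicationController: the master uses this to keep
# 3 pods with the label app=web running at all times.
apiVersion: v1
kind: ReplicationController
metadata:
  name: web-rc
spec:
  replicas: 3
  selector:
    app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
        ports:
        - containerPort: 80
---
# Illustrative Service: one stable address that load balances
# traffic across all pods matching the app=web selector.
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 80
```

Note that the Service finds its pods by label selector, not by name, so pods replaced by the Replication Controller are picked up automatically.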
So, that's the Kubernetes architecture in simple terms. You can expect more details on the architecture in my next blog. Better still, the next blog will also include a hands-on demonstration of installing a Kubernetes cluster and deploying an application.
On that note, let me conclude this "What is Kubernetes" blog. To learn more about Kubernetes, you can check out Edureka's Kubernetes Certification Training here. You can also reach our DevOps Certification Training here.
Published at DZone with permission of Vardhan S, DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.