K8s KnowHow — Running Deployment
Kubernetes Deployments ensure your audience doesn't experience downtime while you update your application.
This is the fourth article of the Kubernetes KnowHow series. In the first three articles, we learned how to use Pods, Services, and ReplicaSets in K8s.
A Deployment is a controller that provides declarative updates for Pods and ReplicaSets. It is a more sophisticated form of a ReplicaSet: on top of the ReplicaSet features, a Deployment gives us a huge benefit in the form of rolling updates that guarantee zero downtime, and if something goes wrong, it lets you perform elegant rollbacks. You may be dreading yet another YAML definition file and wondering how complicated it could be. You do not have to worry, though, as a Deployment is just an extension of the ReplicaSet definition. Let me first demonstrate why we need rolling updates.
At this point, let me introduce you to a new K8s term: "workload." K8s provides two basic constructs: pods and workloads. We have already seen pods. Workloads are objects that set deployment rules for pods; they provide the ability to define rules for application scheduling, scaling, and upgrades.
Let us go back to our Spring Boot "Hello World" example. Imagine that release 1 of it is currently running in production. I will start with the pod definition. Now let us say your customer wants to upgrade to the latest version. What would you do using pods alone? You would remove the existing pod and then deploy the new release, right?
Let’s see the current definition of pod and service.
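Here is a minimal sketch of what the release-1 pod and NodePort service definitions might look like. The names, labels, container port, and image tag are assumptions for illustration, not the exact files from this series.

```yaml
# Sketch of the release-1 definitions (names, labels, port, and image are assumed).
apiVersion: v1
kind: Pod
metadata:
  name: hello-world-webapp
  labels:
    app: hello-world-webapp
spec:
  containers:
    - name: hello-world-webapp
      image: helloworld-webapp:1.0     # assumed image name; release 1 of the Spring Boot app
      ports:
        - containerPort: 8080          # assumed Spring Boot port
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world-webapp-service
spec:
  type: NodePort                       # exposes the app on <MINIKUBE_HOST>:<SERVICE_NODEPORT>
  selector:
    app: hello-world-webapp
  ports:
    - port: 8080
      targetPort: 8080
```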
Now let's say you want to upgrade to the new release, so you change the image tag to 2.0. The pod definition changes as follows:
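A sketch of the updated pod definition, assuming the same names as above; only the image tag changes.

```yaml
# Same pod definition as the sketch above; only the image tag changes.
apiVersion: v1
kind: Pod
metadata:
  name: hello-world-webapp
  labels:
    app: hello-world-webapp
spec:
  containers:
    - name: hello-world-webapp
      image: helloworld-webapp:2.0     # the only change: the tag moves from 1.0 to 2.0
      ports:
        - containerPort: 8080
```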
Now you apply the changes, and right after firing the command, you check the welcome page. What happened? You got a 404 error, right? Let us see what is happening behind the scenes.
In the background, K8s kills the running container, pulls the new image, and deploys it. You can review the entire sequence by using the kubectl describe command.
After some time, the welcome page is available again. Does that mean everything is fine? It is not, because you just had downtime. In today's digital world, where constant availability is of the utmost importance, how can anyone tolerate such downtime? In the last article, we learned how to use a ReplicaSet. Let us see whether it addresses the zero-downtime issue.
In the case of a ReplicaSet, once you deploy it with a particular image, subsequent image changes do not take effect, because a ReplicaSet does not support rollouts. There is rolling-update support for ReplicationControllers, but it is deprecated; the recommended way to do a rolling upgrade is to use a Deployment.
So, let us now see the deployment definition. It is very simple: take the ReplicaSet definition and change the kind from ReplicaSet to Deployment.
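A minimal sketch of what workload.yaml could look like, using the same assumed names and image as above; apart from the kind, it mirrors a ReplicaSet definition, and the replica count of 2 matches the two pod instances described below.

```yaml
# workload.yaml - sketch of the Deployment (names and image are assumed, as above).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-webapp-deployment
spec:
  replicas: 2                          # two pod instances, as seen in the cluster listing below
  selector:
    matchLabels:
      app: hello-world-webapp
  template:                            # everything under "template" is just the pod definition
    metadata:
      labels:
        app: hello-world-webapp
    spec:
      containers:
        - name: hello-world-webapp
          image: helloworld-webapp:1.0
          ports:
            - containerPort: 8080
```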
If you observe it closely, the definition of a Deployment is similar to that of a ReplicaSet or a Pod. In fact, the entire section after "template" is nothing but the pod definition. This is how the K8s developers have maintained the resemblance between objects such as Pods, ReplicaSets, and Deployments. All of these objects are simply APIs in K8s.
Now apply the changes with the command kubectl apply -f workload.yaml. Do not forget to reinstate the service for the node port.
Let us see what has changed in a cluster.
It is fascinating to see how everything is managed by K8s. Start with the deployment section and observe the name "hello-world-webapp-deployment." Then scroll down a bit to the replica set section and look at the suffix "659df68b4f"; that is the instance of the ReplicaSet. Then go to the pod section. Under it, you will find two instances of the hello-world pod. Why two? Because of the number of replicas; and as there are two instances, they carry the suffixes "ptpzt" and "pz8dl." The pod names are getting longer and more complicated. Check whether the application is up and running by hitting the application URL.
So, what is the advantage of a Deployment? Now it's time to upgrade to version 2.0, so DevOps provides a new deployment definition with just one change: the image tag goes from 1.0 to 2.0.
We are using the same file, workload.yaml. That is the purpose of a workload file: it can include many pod definitions with different versions. Generally, the whole application is divided into logical groups, for instance frontend, backend, and business tier; each group can contain hundreds of pods, and all pod definitions can be part of a single file.
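For illustration, the relevant fragment of the pod template in workload.yaml after this change might look like the sketch below, with the same assumed names and image as before.

```yaml
# The only edit in workload.yaml: bump the container image tag (names assumed as before).
containers:
  - name: hello-world-webapp
    image: helloworld-webapp:2.0       # was 1.0; applying this triggers a rolling update
    ports:
      - containerPort: 8080
```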
Now apply the changes and immediately check whether the URL (http://<MINIKUBE_HOST>:<SERVICE_NODEPORT>/welcome) is accessible.
You will notice that the welcome page is accessible. For a little while, you will keep getting the message "This is version 1.0. Welcome to MSA World!"
After a few seconds you will suddenly see a message “This is version 2.0. Welcome to MSA World!” Simply brilliant, isn’t it?
Behind the scenes, K8s starts to create a new replica set as soon as the new deployment configuration is applied. The new version is not released until its replicas are created, up, and running. Meanwhile, the old replica set stays intact and keeps serving traffic, which means the service remains up and running. No downtime whatsoever! Once the new replica set is ready, K8s simply switches the old one for the new one. Pay closer attention to the replica set section: how many replica sets you see there depends on how many times you have applied a new deployment for the same application.
This is how K8s guarantees that there is no downtime.
For those who have suffered the pain of WebSphere/WebLogic or any other application server deployment, this should be awe-inspiring.
K8s also provides an option to control when a new deployment takes over. The attribute is minReadySeconds: the minimum number of seconds for which a newly created pod should be ready, without any of its containers crashing, for it to be considered available.
Another important attribute is the deployment strategy. The default strategy type is RollingUpdate (the other supported type is Recreate), and rollingUpdate comes with two options: maxSurge and maxUnavailable.
maxSurge is the maximum number of pods that can be scheduled above the desired number of pods. The value can be an absolute number (e.g., 5) or a percentage of the desired pods (e.g., 10%). It cannot be 0 if maxUnavailable is 0.
maxUnavailable is the maximum number of pods that can be unavailable during the update. The value can be an absolute number (e.g., 5) or a percentage of the desired pods (e.g., 10%). It cannot be 0 if maxSurge is 0.
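A sketch of where these attributes sit in the Deployment spec; the values shown are illustrative assumptions, not recommendations.

```yaml
# Sketch: where minReadySeconds and the rolling-update options live in the Deployment spec
# (values are illustrative assumptions, not recommendations).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-webapp-deployment
spec:
  replicas: 2
  minReadySeconds: 10              # pod must stay ready this long before it counts as available
  strategy:
    type: RollingUpdate            # default strategy type; the other supported type is Recreate
    rollingUpdate:
      maxSurge: 1                  # at most one extra pod above the desired count during rollout
      maxUnavailable: 0            # never drop below the desired count of available pods
  selector:
    matchLabels:
      app: hello-world-webapp
  template:
    metadata:
      labels:
        app: hello-world-webapp
    spec:
      containers:
        - name: hello-world-webapp
          image: helloworld-webapp:2.0
```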
That's it for now. In the next article, I will explain how to use the K8s config map.
The code is available at my GitHub.