
Kubernetes KnowHow - Working With ReplicaSet


Thanks to the replica set, you'll always be up and running.


This is the third article in the Kubernetes KnowHow series. In the first two articles, we learned how to use pods and services, the core elements of Kubernetes. However, in a production environment you hardly ever deal with pods directly; you are far more likely to work with a Deployment or a ReplicaSet. In this article, we will learn what a ReplicaSet is and how to use it. So, let's get started!

Pods can die at any time and may be short-lived. The reason could be anything: a pod consuming too many resources, the node crashing, or an out-of-memory error. If you deploy a pod directly, as we have been doing until now, then once the pod crashes, that's it. There is no self-healing process: K8s will not reinstate the service that the pod was providing, and it does not resurrect pods. You are responsible for the lifetime of a pod. I believe in a more pragmatic approach, so I will demonstrate it.

I am assuming that the "Hello World" pod and service are up and running. Here, I am referring to pod/hello-world-webapp and service/hello-world-webapp.
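If you want to double-check, list everything in the cluster; the exact output depends on your setup, but both objects should be present:

```shell
# List the pod and the service created in the previous articles.
kubectl get all
```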


Let's simulate a crash of the pod. The command to do so is  kubectl delete . We can use this command to delete any K8s object. A delete can be either forceful or graceful. A plain delete is graceful: K8s lets the pod complete its shutdown sequence, if it has one, before removing it.
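A graceful delete, assuming the pod name from our example, looks like this; the forceful variant is shown only for comparison:

```shell
# Graceful deletion: the pod gets its termination grace period
# (30 seconds by default) to shut down cleanly.
kubectl delete pod hello-world-webapp

# Forceful deletion, by contrast, skips the grace period:
# kubectl delete pod hello-world-webapp --grace-period=0 --force
```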


Now check the cluster by executing  kubectl get all . You will notice that the pod is no longer listed; however, the service is still present. If you now try to access the /welcome page, you will be disappointed: the page returns a 404. The pod has died, it is not coming back, and the significance of the service on its own is ZERO.
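In other words (output trimmed; the built-in kubernetes service will also appear in your listing):

```shell
kubectl get all
# The pod is gone; only service/hello-world-webapp remains,
# so requests to /welcome now fail with a 404.
```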


My favorite dino, the browser's offline error page, has reappeared after a long time.

Now imagine this crash happening in production at 4 a.m. What is going to happen? If you have worked with Docker Swarm, you might expect the pod to spring back to life, that K8s would do something majestic and resurrect it. However, once a pod is gone, it is not coming back.

In today's digital world, where 99.99% is considered the minimum acceptable availability, something like this is not tolerable at all. This is why we do not deploy pods directly; we deploy a ReplicaSet instead.

The ReplicaSet comes to our rescue. A ReplicaSet is nothing but an additional piece of configuration for K8s. We specify the number of replicas to be instantiated, and K8s manages that number at all times. Yes, this is the self-healing feature of K8s: K8s guarantees to maintain the desired state of the configuration. So, for instance, if you specify three replicas of our Hello World pod, K8s guarantees to keep three copies running at any time. You can also specify the number of replicas as one; then, if the pod dies for any reason, K8s will spin up a new pod.

As I have described earlier, a ReplicaSet is nothing but additional configuration, and the pod is wrapped inside the ReplicaSet. If you are thinking of having one definition for the pod and a separate definition for the ReplicaSet, stop thinking along those lines. K8s developers are smart! Instead of a separate ReplicaSet configuration, the pod definition is extended to incorporate the ReplicaSet configuration.

I will start with our pod definition.
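A minimal sketch, assuming the hello-world-webapp application from the earlier articles in this series; the image name is a placeholder, so substitute whatever image you built there:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-world-webapp
  labels:
    app: hello-world-webapp     # the label the service selector matches on
spec:
  containers:
    - name: hello-world-webapp
      image: hello-world-webapp:latest   # placeholder image name
```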


Here comes the ReplicaSet definition.
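A sketch of the ReplicaSet definition, again with a placeholder image name; notice how the entire pod definition reappears under spec: template:

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: hello-world-webapp
spec:
  replicas: 3                          # desired number of pod copies
  selector:
    matchLabels:
      app: hello-world-webapp          # must match the template labels below
  template:                            # this is the pod definition, inlined
    metadata:
      labels:
        app: hello-world-webapp
    spec:
      containers:
        - name: hello-world-webapp
          image: hello-world-webapp:latest   # placeholder image name
```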




| Tag | Type / Parent | Description |
|-----|---------------|-------------|
| apiVersion | String | Versioned schema of the represented object |
| kind | String | REST resource that the object represents (ReplicaSet in this example) |
| metadata | ObjectMeta | Metadata of the object |
| name | under metadata | Unique name within the namespace; it names the ReplicaSet |
| spec | ReplicaSetSpec | Specification of the desired behavior of the ReplicaSet |
| selector | under spec | A LabelSelector object: a label query over pods that should match the replica count. The label keys and values must match the pod template's labels for the pods to be controlled by this ReplicaSet |
| app | under spec: selector: matchLabels | A label key; its value here is hello-world-webapp |
| replicas | under spec | The number of desired replicas |
| template | under spec | A PodTemplateSpec object: describes the pod that will be created when insufficient replicas are detected. This section is nothing but the definition of the pod |
| metadata | under spec: template | Standard object metadata |
| labels | under spec: template: metadata | Key-value pairs |
| spec | under spec: template | A PodSpec object: specification of the desired behavior of the pod |
| containers | under spec: template: spec | The array of containers belonging to the pod |
| name | under spec: template: spec: containers | Name of the container, specified as a DNS_LABEL; it must be unique within the pod |
| image | under spec: template: spec: containers | The image that the container runs |

You can refer to K8s docs for ReplicaSet.

Now comes the fascinating part: running the replica set. Let us see how to do so.

I assume you have minikube running. Check the same using  minikube status  command.
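For reference:

```shell
minikube status
# The host, kubelet, and apiserver components should all report Running.
```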


Now apply this configuration in order to run the containers.
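Assuming the ReplicaSet definition is saved in a file named replicaset.yaml (the file name is my assumption):

```shell
kubectl apply -f replicaset.yaml
```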


Pay special attention to the output of  kubectl get all . What should we expect? The answer lies in the number of replicas we configured: yes, we should see three replicas of our hello-world-webapp.
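The output will look roughly like this; the pod-name suffixes are generated by K8s, so yours will differ:

```shell
kubectl get all
# NAME                           READY   STATUS    RESTARTS   AGE
# pod/hello-world-webapp-b6xwx   1/1     Running   0          1m
# pod/hello-world-webapp-4kdq7   1/1     Running   0          1m
# pod/hello-world-webapp-xv9tr   1/1     Running   0          1m
# ...
# NAME                                 DESIRED   CURRENT   READY   AGE
# replicaset.apps/hello-world-webapp   3         3         3       1m
```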


Check the status of the containers and their logs to ensure that everything went smoothly. You will observe that all three instances of the pod have similar logs.
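For example, for the first replica (substitute your own generated pod name, and repeat for the other two):

```shell
kubectl describe pod hello-world-webapp-b6xwx   # container status and events
kubectl logs hello-world-webapp-b6xwx           # application logs
```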


Now let us repeat our experiment and crash one of the pods. The expectation is that we will still see three pod instances. Let's see whether K8s fulfills its promise of maintaining the desired state of the configuration. Let us delete the first instance, pod/hello-world-webapp-b6xwx, and ensure that the pod is deleted successfully.
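Assuming the generated pod name from my run:

```shell
kubectl delete pod hello-world-webapp-b6xwx
kubectl get pods
# Still three pods: the ReplicaSet has already started a replacement
# with a new generated suffix and a much lower AGE.
```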


Check the first instance now: pod/hello-world-webapp-p7hcb. Look at the age of the pod and you will notice that it is 48 seconds, compared to the other two pods, whose age is 52 minutes. Simply amazing, isn't it? K8s manages this majestically.

Now it's time to apply the service, if it was not applied earlier. Use  kubectl apply -f services.yaml .


Now you will notice that our "Hello World" welcome console is working.


Now let me answer a few common questions pertaining to ReplicaSets.

FAQs:

What is the significance of the selector in a replica set definition?

In simple terms, it is used for matchmaking. Based solely on the selector's key-value pairs, K8s figures out which pods a ReplicaSet definition encompasses. It works in the same fashion as the selectors of pods and services. Therefore, one ReplicaSet definition can apply to many pods at the same time, provided the selector labels match.
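The matchmaking requirement boils down to this minimal sketch: the selector's matchLabels must match the labels on the pod template, otherwise K8s rejects the ReplicaSet.

```yaml
spec:
  selector:
    matchLabels:
      app: hello-world-webapp    # this label query...
  template:
    metadata:
      labels:
        app: hello-world-webapp  # ...must match these template labels
```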

What is the difference between the definition of a ReplicaSet and Pod?

If you have analyzed the ReplicaSet definition and the pod/service definitions thoroughly, you must have observed a striking difference. In the pod/service definitions, the first line is apiVersion: v1, whereas in the ReplicaSet definition, the first line is apiVersion: apps/v1. What is the story behind this?

This is because of the modular structure of K8s. apiVersion is important because it gives the K8s team the flexibility to add new features and release new versions. ReplicaSet was not originally a core part of K8s; it was considered an extension and an experimental feature, so it started out as extensions/v1beta1. Later it became a stable part of K8s, hence the v1. However, the ReplicaSet belongs to the apps group, while pods and services belong to the core group. The core group is the default, so it is not required to specify it explicitly for a pod or a service. It is absolutely required in the case of a ReplicaSet, though, because its group is different.
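Side by side, the two headers look like this:

```yaml
# Pod or Service: core API group, so only the version is written.
apiVersion: v1
kind: Pod
---
# ReplicaSet: apps group, so the group must be spelled out.
apiVersion: apps/v1
kind: ReplicaSet
```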

Topics:
kubernetes ,docker ,spring boot ,replica sets ,cloud ,tutorial

Opinions expressed by DZone contributors are their own.
