Journey of HTTP Request in Kubernetes

This article shows how to expose an application using the service type LoadBalancer.

By Sharad Regoti · Oct. 11, 22 · Tutorial

In the previous article, we learned about the journey of deployment creation in Kubernetes, which helped us understand almost every core system component of Kubernetes. However, it missed the following components:

  • Cloud Control Manager
  • Kube Proxy

We will use the same technique as in the previous article to understand the above components: we will take a Kubernetes feature and break down its implementation to see how it interacts with the Kubernetes system components.

The feature for this article is going to be:

Exposing an application using the service type load balancer.

I highly recommend you read the previous article; I’ll be referring to some concepts from that article.

Having an understanding of the Service resource of Kubernetes is a prerequisite for this article. If you don’t know what a Service resource is, no worries! Check out this article, which explains the what, why, and how of Kubernetes services.

Before starting, I want to let you know that this article is written to the best of my knowledge, and I’ll update it as I gain more insight.

Throughout this post, I’ll reference:

  • Cloud Controller Manager as “CCM.”
  • Kube Controller Manager as “KCM.”
  • Load Balancer as “LB.”
  • Kubernetes as “k8s.”

Let’s start our article with a question.

What Happens After Deployment?

So you have deployed your application on Kubernetes; that’s great!!!

But now, how will your users access this application? By access, I mean: when users type your domain name into the browser, how does that domain resolve to the IP address of the container residing in your Kubernetes cluster?

Don’t have the answer to it? No worries! That’s exactly what we are going to learn in this article.

For the domain-name-to-container-IP resolution to work, we need to expose the application to the outside world, which is where the Kubernetes service type LoadBalancer comes into the picture.

When you use this resource, it magically exposes your application on the internet. It provides you with a static external IP address that can be mapped to your domain name (using a registrar like GoDaddy).
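
For example, if the load balancer hands you the (hypothetical) external address 203.0.113.10, the mapping at your registrar is just a DNS A record pointing your domain at it:

www.example.com.   300   IN   A   203.0.113.10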

As per the below diagram, when your users type the domain name into the browser, it resolves to the external address provided by the LoadBalancer service, which then forwards the request to the IP address of the container residing in Kubernetes.

https://raw.githubusercontent.com/sharadregoti/try-out/master/01-kubernetes-the-hard-way/journey-of-http-request-load-balancer.drawio.svg

Yeah, I know the part from the load balancer to the container IP looks scary, but that’s exactly what we are going to demystify.

What Happens During Kubernetes Service Creation?

So, to make our deployment accessible over the internet, we expose our application by creating a service resource of type LoadBalancer in K8s using the below configuration.

 
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  type: LoadBalancer
  ports:
  - name: http
    port: 8080
    targetPort: 80


Run kubectl apply -f service.yaml to create the service type load balancer.
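
If you want to watch this happen, here is roughly what the apply and the follow-up checks look like (the IPs and the node port below are hypothetical values, not something your cluster is guaranteed to assign):

$ kubectl apply -f service.yaml
service/nginx created

$ kubectl get service nginx
NAME    TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
nginx   LoadBalancer   10.96.45.12   <pending>     8080:31234/TCP   5s

$ kubectl get service nginx   # a minute or two later, once the cloud provider is done
NAME    TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)          AGE
nginx   LoadBalancer   10.96.45.12   203.0.113.10   8080:31234/TCP   2m

The EXTERNAL-IP column stays <pending> until the load balancer is provisioned, and the 8080:31234/TCP entry shows the service port alongside the node port that was opened on the workers.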

https://raw.githubusercontent.com/sharadregoti/try-out/master/01-kubernetes-the-hard-way/journey-of-deployment-resource-generalize-service.drawio.svg

As you know from the previous article, any K8s resource that is created gets persisted in the etcd store via the api-server component. The api-server then notifies the other system components responsible for handling service resources.

The system components that get notified are:

  • Kube Controller Manager
  • Cloud Controller Manager
  • Kube Proxy

The above components decide the fate of our HTTP request. To simplify our technical journey, let’s break it into phases. I have represented the journey of our HTTP request in the below diagram; we have a source, a destination, and stopovers.


We will start by:

  1. Understanding the stopovers (highlighted in red).
  2. Then, understanding how the stopovers connect with each other.
  3. Finally, running through an HTTP request example.

Understanding the Stopovers

All the stopovers you saw in the above image are created when a service of type LoadBalancer is created.

What Does the Service Type Load Balancer Do?

  1. When we create a service of type LoadBalancer on any Kubernetes cluster running in the cloud, a load balancer is provisioned by the cloud provider, represented by stop-over-1.

    As we know, when a load balancer is created, the cloud provider also gives us an address (an IP or a domain name) that can be used to access our application.

  2. In K8s, the LoadBalancer service type is a superset of the NodePort and ClusterIP services; that means the LoadBalancer service contains the features of both.

    What are those features, you might ask:

    • The NodePort service opens up a port on every worker node for external traffic to enter the cluster, represented by stop-over-2 in the above diagram.
    • The ClusterIP service provides a static IP address that can be used inside K8s for communication, represented by stop-over-3 in the above diagram. (The sketch after this list shows how both features appear on our service object.)
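
To make the superset point concrete, here is a sketch of what our service object looks like once the cluster has filled in both features (the clusterIP and nodePort values are hypothetical, matching the ones used in the earlier output):

apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  type: LoadBalancer
  clusterIP: 10.96.45.12   # ClusterIP feature: a stable in-cluster IP (hypothetical value)
  ports:
  - name: http
    port: 8080             # port the clusterIP (and the LB) listens on
    targetPort: 80         # port of the container
    nodePort: 31234        # NodePort feature: opened on every worker node (hypothetical value)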

The Creation of Stop-Over-2 and Stop-Over-3

The kube controller manager is notified about the service resource, and as we know from our previous article, the KCM comprises many controllers. One of those controllers is the service controller, which takes appropriate actions on the service resource.

So here’s what the service controller does for service type load balancer:

  1. First, it gets the list of pods that match the selector field specified in the service resource (e.g., app: nginx); after that, it iterates over the pod list and notes down the IP address of each pod.

  2. Then it creates an internal static IP address called the clusterIP.

  3. Now it creates a new K8s resource called Endpoints, which basically maps a single IP to multiple IPs. The single IP here is the clusterIP, and the multiple IPs correspond to the pod IPs.

    Whenever pods get created or destroyed in K8s, it is the responsibility of the service controller to keep the pod IP addresses in the Endpoints resource up to date, so that the cluster IP always resolves to a live pod IP address.

  4. Finally, it exposes a random port (from the 30000-32767 range) on every worker node.

That’s it; this is what the service controller does for the LoadBalancer service type.
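
You can inspect the result of all this yourself. For our nginx service backed by two pods, the Endpoints object would look roughly like this (the pod IPs are hypothetical):

$ kubectl get endpoints nginx -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: nginx          # shares the name of the service
subsets:
- addresses:
  - ip: 10.244.1.7     # pod IP (hypothetical)
  - ip: 10.244.2.9     # pod IP (hypothetical)
  ports:
  - name: http
    port: 80           # the service's targetPort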

The Creation of Stop-Over-1

Now the question is: who provisions the load balancer on the cloud when a service of type LoadBalancer is created? It is certainly not the service controller.

Think about it: to provision a load balancer, you need to interact with the cloud provider's API, and no Kubernetes component talks to the cloud provider directly, right?

This is where the cloud controller manager comes into the picture. CCM is a pluggable component in the K8s architecture used to perform deep integration with the cloud provider.

A thing to note about CCM is that every cloud provider has its own implementation of CCM, so when you provision a managed K8s cluster in the cloud, the provider runs its own implementation of CCM.

The api-server notifies the CCM about the load balancer service; the CCM then kicks into action and starts provisioning a load balancer using the cloud provider's API.
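
Once the load balancer is up, the CCM records its address in the service's status field; that is where the EXTERNAL-IP column we saw earlier comes from. You can read it directly (the address is the same hypothetical one as before; some providers return a hostname instead of an IP):

$ kubectl get service nginx -o jsonpath='{.status.loadBalancer.ingress}'
[{"ip":"203.0.113.10"}]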

Difference Between KCM and CCM

Much of what we learned about KCM in our previous article also applies to CCM: the operational principles around the control loop apply, as does the leader election functionality.

CCM is also a collection of control loops; the difference is simply the concerns the controllers address.

KCM contains control loops that concern themselves with core K8s functionality.

The CCM will not be found in every cluster; its controllers are concerned with integration with the cloud provider, so you may not have this component running in a local cluster. The CCM reconciles the existing state with the cloud provider's infrastructure to fulfill the desired state; its controllers integrate with the cloud provider's API and often address vendor-specific concerns, such as provisioning load balancers, persistent storage, VMs, etc.

How Do the Stopovers Connect With Each Other?

OK, now we know all the stopovers. Let’s understand how the information flows through them.

https://raw.githubusercontent.com/sharadregoti/try-out/master/01-kubernetes-the-hard-way/journey-of-http-request-final.drawio.svg

The path from the LB (3) to the worker nodes (4) is configured by the CCM while provisioning the LB.

For this configuration, it requires two things:

  • IP address of the worker node.
  • Port on which worker nodes are listening for requests.

Every worker node has its own IP address; the CCM has access to these addresses because it can interact with the cloud provider's API. The port is obtained by reading the service object, which is updated by KCM to reflect the random NodePort assignment.
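
The CCM gets this information from the cloud API and the service object, but you can look up the same two things yourself (all addresses below are hypothetical, and the node listing is abridged):

$ kubectl get nodes -o wide
NAME       STATUS   INTERNAL-IP   EXTERNAL-IP
worker-1   Ready    10.0.1.4      198.51.100.21
worker-2   Ready    10.0.1.5      198.51.100.22

$ kubectl get service nginx -o jsonpath='{.spec.ports[0].nodePort}'
31234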

With this, the load balancer is configured in such a way that it forwards every request it receives to one of the configured worker nodes.

Data flow from the load balancer to the node IP is easy-peasy; it's handled entirely by the cloud provider.

Once the request reaches the worker node, it is the responsibility of the OS to send the request to the appropriate container. Request routing at the OS level is set up by configuring the OS's iptables rules.

And this iptables configuration is done by the kube-proxy component of K8s, which resides on every worker node.

Here's the thing: for the LoadBalancer service type, after KCM has done its job of assigning the node port, creating the clusterIP, and mapping the cluster IP to pod IPs in the Endpoints resource, kube-proxy takes all of this configuration and programs the iptables rules accordingly, such that any traffic received on the worker node port gets redirected to the container port.
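
If you are curious what those rules look like, here is a heavily abridged sketch from a node running kube-proxy in its default iptables mode (the chain suffixes, the node port, and the pod IPs are hypothetical, matching the values used earlier):

$ sudo iptables -t nat -L KUBE-NODEPORTS -n
Chain KUBE-NODEPORTS (1 references)
target               prot opt source     destination
KUBE-SVC-NGX123ABC   tcp  --  0.0.0.0/0  0.0.0.0/0   /* default/nginx:http */ tcp dpt:31234

$ sudo iptables -t nat -L KUBE-SVC-NGX123ABC -n
Chain KUBE-SVC-NGX123ABC (2 references)
target               prot opt source     destination
KUBE-SEP-POD1AAA     all  --  0.0.0.0/0  0.0.0.0/0   statistic mode random probability 0.50000
KUBE-SEP-POD2BBB     all  --  0.0.0.0/0  0.0.0.0/0

Each KUBE-SEP-* chain ends in a DNAT rule that rewrites the destination to one pod, e.g., 10.244.1.7:80, which is how a packet arriving on the node port finally lands on a container.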

And here, the journey of our HTTP request comes to an end.
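
As a final sanity check, hitting the load balancer's external address (the same hypothetical one as before) on the service port should now return the nginx welcome page:

$ curl http://203.0.113.10:8080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...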
