
A Primer on HTTP Load Balancing in Kubernetes Using Ingress on the Google Cloud Platform


Learn how Kubernetes's new Ingress feature works for external load balancing and how to use it for HTTP load balancing on the Google Cloud Platform.


Containerized applications and Kubernetes adoption in cloud environments are on the rise. One of the challenges of deploying applications in Kubernetes is exposing these containerized applications to the outside world. This article explores the different options through which applications can be accessed externally, with a focus on Ingress, a new Kubernetes feature that provides an external load balancer. It also includes a simple hands-on tutorial on the Google Cloud Platform (GCP).

Ingress is a new feature (currently in beta) of Kubernetes that aspires to be an application load balancer, simplifying the task of exposing your applications and services to the outside world. It can be configured to give services externally reachable URLs, load balance traffic, terminate SSL, offer name-based virtual hosting, and more. Before we dive into Ingress, let's look at some of the alternatives currently available for exposing your applications and their complexities and limitations, and then see how Ingress addresses these problems.

Current Ways to Expose Applications Externally

There are several ways to expose your applications externally. Let's look at each of them:

Expose Pod

You can expose your application directly from your pod by using a port on the node that is running your pod, mapping that port to a port exposed by your container, and using the combination HOST-IP:HOST-PORT to access your application externally. This is similar to what you would do when running Docker containers directly, without Kubernetes. In Kubernetes, you can use the hostPort setting in a pod specification, which does the same thing. Another approach is to set hostNetwork: true in the pod specification so that the pod uses the host's network interface directly.
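As a minimal sketch (the pod name, image, and port numbers here are illustrative), a pod spec using hostPort might look like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-hostport
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80   # port exposed by the container
      hostPort: 8080      # port opened on the node; reach the app at <node-ip>:8080
```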

Limitations:

  • In both scenarios, you have to take extra care to avoid port conflicts on the host, and you may run into issues with packet routing and name resolution.
  • You are limited to running only one replica of the pod per cluster node, since the hostPort you use can be bound by only one pod on each node.

Expose Services

Kubernetes services primarily exist to interconnect the different pods that constitute an application, and they make it easy to scale your application's pods. Services are not primarily intended for external access, but there are some accepted ways to expose them to the external world.

Basically, services provide a routing, load-balancing, and discovery mechanism for a pod's endpoints. Services target pods using selectors and can map container ports to service ports. A service can expose more than one port, although usually you will find only one defined.

A service can be exposed using three ServiceType choices:

  • ClusterIP: Exposes the service on a cluster-internal IP. Choosing this value makes the service only reachable from within the cluster. This is the default ServiceType.
  • NodePort: Exposes the service on each node's IP at a static port (the NodePort). A ClusterIP service, to which the NodePort service routes, is automatically created. You can reach the NodePort service from outside the cluster by requesting <NodeIP>:<NodePort>. Here, NodePort remains fixed, and NodeIP can be the IP of any node in your Kubernetes cluster.
  • LoadBalancer: Exposes the service externally using a cloud provider's load balancer (e.g., AWS ELB). The NodePort and ClusterIP services, to which the external load balancer routes, are automatically created.
  • ExternalName: Maps the service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record with its value. No proxying of any kind is set up. This requires version 1.7 or higher of kube-dns.
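For illustration (the service name, selector, and port numbers below are assumptions), a NodePort service definition might look like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort
  selector:
    app: my-app          # targets pods labeled app=my-app
  ports:
  - port: 80             # service (ClusterIP) port
    targetPort: 8080     # container port on the pods
    nodePort: 30080      # optional; auto-assigned from 30000-32767 if omitted
```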

Limitations:

  • If you choose NodePort to expose your services, Kubernetes will allocate ports corresponding to the ports of your pods in the range 30000-32767. You will need to add an external proxy layer that uses DNAT to expose friendlier ports. That external proxy layer will also have to take care of load balancing so that you leverage the power of your pod replicas. It is also not easy to add TLS or simple host-header routing rules in front of the external service.
  • ClusterIP and ExternalName, while easy to use, similarly do not let you add any routing or load-balancing rules.
  • Choosing LoadBalancer is probably the easiest of all the methods to get your service exposed to the internet. The problem is that there is no standard way to tell a Kubernetes service about the elements a balancer requires; again, TLS and host headers are left out. Another limitation is the reliance on an external load balancer (AWS's ELB, GCP's Cloud Load Balancer, etc.).

Endpoints

Endpoints are usually created automatically by services, unless you are using headless services and adding the endpoints manually. An endpoint is a host:port tuple registered with Kubernetes, and in the service context it is used to route traffic. The service keeps its endpoints up to date as pods matching its selector are created, deleted, and modified. On their own, endpoints are not useful for exposing services, since they are to some extent ephemeral objects.
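As a sketch of the manual case (the name, IP, and port are hypothetical), a selector-less service paired with a hand-written Endpoints object looks like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-db      # no selector, so Kubernetes creates no endpoints for it
spec:
  ports:
  - port: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-db      # must match the service name
subsets:
- addresses:
  - ip: 10.0.0.42        # traffic to the service is routed here
  ports:
  - port: 5432
```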

If you can rely on your cloud provider to correctly implement the LoadBalancer type in their API and to keep up to date with Kubernetes releases, and you are happy with their management interfaces for DNS and certificates, then setting up your services as type LoadBalancer is quite acceptable.

On the other hand, if you want to manage load-balancing systems manually and set up port mappings yourself, NodePort is a low-complexity solution. If you are using Endpoints directly to expose external traffic, you probably already know what you are doing (but consider that you might have made a mistake; there could be a better option).

Given that none of these mechanisms was originally designed to expose services to the internet, their functionality for this purpose can seem limited.

Understanding Ingress

Traditionally, you would create a LoadBalancer service for each public application you want to expose. Ingress gives you a way to route requests to services based on the request host or path, centralizing a number of services into a single entrypoint.

Ingress is split into two main pieces. The first is the Ingress resource, which defines how you want requests routed to the backing services; the second is the Ingress controller, which does the routing and keeps track of changes at the service level.

Ingress Resources

The Ingress resource is a set of rules that map to Kubernetes services. Ingress resources are defined purely within Kubernetes as an object that other entities can watch and respond to.

In its beta stage, Ingress supports defining the following rules:

  • Host header: forward traffic based on domain names.
  • Paths: look for a match at the beginning of the request path.
  • TLS: if TLS is configured on the Ingress, traffic is served over HTTPS using a certificate supplied through a Kubernetes secret.

When an Ingress includes no host-header rules, requests that do not match any other rule fall through to it and are mapped to its backend service. You will usually use this to send a 404 page for sites or paths that are not routed to the other services. Ingress tries to match requests to its rules and forwards them to backends, each of which is composed of a service and a port.
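Putting these rule types together (the hostname, secret name, and service names below are assumptions for illustration), an Ingress with a host rule and TLS might look like:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  tls:
  - secretName: example-tls      # secret holding the TLS certificate and key
  rules:
  - host: foo.example.com        # host-header rule
    http:
      paths:
      - path: /app               # path rule, matched at the start of the path
        backend:
          serviceName: app-svc
          servicePort: 80
```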

Ingress Controllers

The Ingress controller is the entity that grants (or removes) access based on changes to services, pods, and Ingress resources. The Ingress controller gets this state-change data by calling the Kubernetes API directly.

Ingress controllers are applications that watch Ingresses in the cluster and configure a balancer to apply their rules. You can configure a third-party balancer such as HAProxy, NGINX, Vulcand, or Traefik to create your own version of an Ingress controller. An Ingress controller should track changes in Ingress resources, services, and pods, and update the balancer's configuration accordingly.

Ingress controllers will usually track and communicate with the endpoints behind services instead of using the services directly. This avoids some network plumbing and also lets the balancing strategy be managed from the balancer itself. Several open source implementations of Ingress controllers are available.

Now, let's do an exercise of setting up an HTTP Load Balancer using Ingress on Google Cloud Platform (GCP), which has already integrated the ingress feature in its Container Engine (GKE) service.

Ingress-Based HTTP Load Balancer in Google Cloud Platform

This tutorial assumes that you have set up your GCP account and created a default project. We will first create a container cluster, then deploy an NGINX service and an echoserver service. Finally, we will set up an Ingress resource for both services, which will configure the HTTP load balancer provided by GCP.

Basic Setup

Get your project ID from the "Project info" section of your GCP dashboard. Start the Cloud Shell terminal, then set your project ID and the compute/zone in which you want to create your cluster.

$ gcloud config set project glassy-chalice-129514
$ gcloud config set compute/zone us-east1-d
# Create a 3 node cluster with name “loadbalancedcluster”
$ gcloud container clusters create loadbalancedcluster

Fetch the cluster credentials for the kubectl tool:

$ gcloud container clusters get-credentials loadbalancedcluster --zone us-east1-d --project glassy-chalice-129514

Step 1: Deploy an NGINX Server and Echoserver Service

$ kubectl run nginx --image=nginx --port=80
$ kubectl run echoserver --image=gcr.io/google_containers/echoserver:1.4 --port=8080
$ kubectl get deployments
NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
echoserver   1         1         1            1           15s
nginx        1         1         1            1           26m

Step 2: Expose Your NGINX and Echoserver Deployment as a Service Internally

Create a service resource to make the Nginx and Echoserver deployment reachable within your container cluster:

$ kubectl expose deployment nginx --target-port=80  --type=NodePort
$ kubectl expose deployment echoserver --target-port=8080 --type=NodePort

When you create a Service of type NodePort with this command, Container Engine makes your Service available on a randomly selected high port number (e.g., 30746) on all the nodes in your cluster. Verify that the Service was created and a node port was allocated:

$ kubectl get service nginx
NAME      CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
nginx     10.47.245.54   <nodes>       80:30746/TCP   20s

$ kubectl get service echoserver
NAME         CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
echoserver   10.47.251.9   <nodes>       8080:32301/TCP   33s

In the output above, the node port for the nginx Service is 30746 and for the echoserver Service is 32301. Also, note that no external IP is allocated for these Services. Since Container Engine nodes are not externally accessible by default, creating these Services does not make your applications accessible from the internet. To make your HTTP(S) web server applications publicly accessible, you need to create an Ingress resource.

Step 3: Create an Ingress Resource

On Container Engine, Ingress is implemented using Cloud Load Balancing. When you create an Ingress in your cluster, Container Engine creates an HTTP(S) load balancer and configures it to route traffic to your application. Container Engine has an internally defined Ingress controller, which takes the Ingress resource as input for setting up proxy rules and talks to the Kubernetes API to get the related service information.

The following config file defines an Ingress resource that directs traffic to your Nginx and Echoserver:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: fanout-ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: nginx
          servicePort: 80
      - path: /echo
        backend:
          serviceName: echoserver
          servicePort: 8080

Save the config above as basic-ingress.yaml, then deploy the Ingress resource from the Cloud Shell:

$ kubectl apply -f basic-ingress.yaml

Step 4: Access Your Application

Find the external IP address of the load balancer serving your application by running:

$ kubectl get ingress fanout-ingress
NAME             HOSTS     ADDRESS          PORTS     AGE
fanout-ingress   *         130.211.36.168   80        36s

Use http://<external-ip-address> and http://<external-ip-address>/echo to access Nginx and the Echoserver.


Ingresses are simple, very easy to deploy, and fun to play with. However, Ingress is currently in beta and is missing some features, which may restrict its use in production. Stay tuned for updates on the Kubernetes Ingress page and the project's GitHub repo.


Opinions expressed by DZone contributors are their own.
