Kubernetes Metallb Bare Metal LoadBalancer


You can now use LoadBalancer resources with Kubernetes MetalLB, so that bare metal deployments are no longer second-class.


Bare metal Kubernetes deployments are no longer second-class deployments. Now you, too, can use LoadBalancer resources, thanks to Kubernetes MetalLB.

Kubernetes is very flexible in how you can deploy it. You can deploy to cloud environments like Google Cloud, Microsoft Azure, and Amazon AWS. You can even deploy it to on-premises clouds like OpenStack. Lastly, you can deploy Kubernetes on bare metal using several popular operating systems like Ubuntu Linux, CentOS, or Red Hat Enterprise Linux.

Kubernetes Pod Connectivity

Deploying your containerized applications to Kubernetes creates what Kubernetes calls a pod. Pods are the smallest deployable unit in Kubernetes. Each pod is assigned a single IP address that is internal to the Kubernetes cluster, and other pods can communicate with your pod using this IP address. The IP address is good only until the pod dies.

Pods have a life cycle, and they are typically treated as cattle rather than pets: they are created, perform their function, and then die. If there is a problem with a pod, we simply destroy it and recreate it. We don't care much about any individual pod, hence the cattle reference.

The problem is the IP address that gets assigned to the pod. Every time a pod is recreated, it is assigned a new IP address, so any other pods that depend on it would break each time it is recreated. To remedy this, Kubernetes has services. A service is assigned its own IP and acts like a load balancer in front of matching pods. If a pod behind the service is recreated, the service learns the pod's new IP and keeps sending traffic to it, while the service IP itself remains the same.
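To make this concrete, here is a minimal Service manifest sketch. The name nginx-service and the run=nginx label are illustrative assumptions; adjust them to match your own deployment.

```yaml
# A Service that load balances across all pods labeled run=nginx.
# The service IP stays stable even as matching pods come and go.
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    run: nginx      # any pod carrying this label receives traffic
  ports:
  - port: 80        # stable port on the service IP
    targetPort: 80  # port the pod's container listens on
```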

Exposing Kubernetes Services

So we have solved the pod connectivity problem, but what if we want to expose our services to the outside world? Kubernetes services have internal cluster IPs assigned to them, which are not accessible from outside the cluster. Kubernetes provides several ways to expose these services: NodePort, host networking, Ingress, and LoadBalancer. (I won't cover ClusterIP here because it only gives you a cluster-local IP that isn't accessible outside the cluster.) The official Kubernetes documentation covers these in great detail, so I will summarize here.

NodePort
This lets you expose a service using the IP address of the Kubernetes node that your application is deployed to, but on a random (or static, if you configure it as such) TCP port between 30000 and 32767. As an example, let's say you had a web application deployed that accepts traffic on port 80. If you expose this application using a NodePort service, Kubernetes might assign it a random port such as 30176, and you would access your application by browsing to the Kubernetes node's IP on port 30176. Every time you redeploy the service, this port may change.
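As a sketch, a NodePort version of the service might look like this. The service name, label, and port number 30176 are illustrative assumptions, not values Kubernetes will necessarily pick.

```yaml
# A NodePort Service: reachable at <any-node-IP>:30176 from outside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport
spec:
  type: NodePort
  selector:
    run: nginx
  ports:
  - port: 80          # service port inside the cluster
    targetPort: 80    # container port
    nodePort: 30176   # static; omit to let Kubernetes pick from 30000-32767
```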

Host Networking
Services that use host networking reserve a static port on the Kubernetes node that your application is deployed to. If you deployed the example web application to a node, you could reserve port 80 on that node, and all traffic to the node's IP on port 80 would be routed to your web application. This is all well and good until you have multiple web applications deployed on the same node that all need port 80.
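A minimal sketch of this approach using hostPort, continuing the assumed nginx example:

```yaml
# A pod that claims port 80 on its node's IP. Only one pod per node can
# hold a given hostPort, which is the limitation described above.
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
      hostPort: 80   # traffic to <node-IP>:80 is routed to this container
```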

Ingress
Until recently, Ingress was the best option if you deployed a Kubernetes cluster on bare metal. Ingress lets you configure load balancing of HTTP or HTTPS traffic to your deployed services using software load balancers like NGINX or HAProxy, deployed as pods in your cluster, and it gives you Layer 7 routing of your applications as well. The problem is that it doesn't easily route TCP or UDP traffic. The best way to do that was a LoadBalancer type of service. However, if you deployed your Kubernetes cluster to bare metal, you didn't have the option of using a LoadBalancer; it was only available on cloud deployments, making bare metal deployments second-class.
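For comparison, a bare-bones Ingress resource for that HTTP routing might look like the sketch below. It assumes an ingress controller such as NGINX is already running in the cluster, that a service named nginx-service exists, and it uses the extensions/v1beta1 API that was current in this era of Kubernetes.

```yaml
# Layer 7 routing: HTTP requests for example.com are sent to nginx-service.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: example.com        # hostname to match (assumption for illustration)
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx-service
          servicePort: 80
```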

Until now.


There is a new project for Kubernetes called MetalLB that changes all that. It is still in alpha, but if you want the benefits of load balancing in your bare metal Kubernetes deployment, I recommend you give it a try.

Kubernetes Metallb

Getting Kubernetes MetalLB installed and configured is relatively easy in the basic deployment. All it really requires is a pool of IP addresses that it can assign to your load-balanced services.

Installing Kubernetes Metallb

Installation is a snap. Just run the following command to install Kubernetes MetalLB:

$ kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.6.2/manifests/metallb.yaml

After it is completed you just need to configure it.

Configuring Metallb

We need to tell MetalLB what IPs it can use. Create a new file called metallb.yml and add the following contents:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      # Example range; substitute 10 free IPs from your own network.
      - 192.168.1.241-192.168.1.250

Here we are telling MetalLB which pool of IP addresses it may use; in this case, a pool of 10 ending at .250. If your network makes use of DHCP, make sure whatever pool you give to MetalLB is excluded from the DHCP pool of available IPs that it can assign. Otherwise, you might get network address conflicts and things will break.

Finally, apply the configuration to your Kubernetes cluster.

$ kubectl create -f metallb.yml

That is all there is to it. You now have load balancing enabled on your Kubernetes cluster! Let's go ahead and test everything out.

Testing Bare Metal Load Balancing

To test everything out we will need to deploy an example application. Let's deploy a sample application like we normally would:

$ kubectl run nginx --image=nginx --port=80
$ kubectl expose deployment nginx --type=LoadBalancer --name=nginx-service
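The kubectl expose command above is roughly equivalent to applying a manifest like this sketch:

```yaml
# A LoadBalancer Service; on bare metal, MetalLB assigns it an
# external IP from the configured address pool.
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  labels:
    run: nginx
spec:
  type: LoadBalancer
  selector:
    run: nginx    # the label kubectl run put on the nginx deployment's pods
  ports:
  - port: 80
    targetPort: 80
```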

Our test application is now deployed. Let's get more information about our newly created service.

kubectl describe service nginx-service
Name:                     nginx-service
Namespace:                default
Labels:                   run=nginx
Selector:                 run=nginx
Type:                     LoadBalancer
LoadBalancer Ingress:
Port:                       80/TCP
TargetPort:               80/TCP
NodePort:                   32542/TCP
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type    Reason       Age   From                Message
  ----    ------       ----  ----                -------
  Normal  IPAllocated  11s   metallb-controller  Assigned IP ""

As you can see, MetalLB assigned an IP address from the pool to our service! Test it out by browsing to that address.

And since services keep the IPs that they are assigned, you can configure your internal DNS servers to resolve a hostname to the IP.


Thanks for reading this article. If you liked it, please comment below.



Published at DZone with permission of

