Kubernetes FAQ: How Can I Route Traffic for Kubernetes on Bare Metal?
When it comes to containers, not everyone is running them in the cloud. We take a look at how you can route traffic to Kubernetes on bare metal.
We recently conducted an unscientific poll among three hosts of the Kubernetes community call: Jorge Castro (@castrojo), Ilya Dmitrichenko (@errordeveloper), and Bob Killen (@mrbobbytables), asking them which Kubernetes topics come up again and again. The result is a list of the most frequently asked questions about running Kubernetes in production. Stay tuned for a deep dive into the answers to these questions, with the goal of giving you a good jumping-off point for your own research.
“A Kubernetes on bare metal question that comes up quite frequently is less about how to install it and more about configuration that is unique to a bare metal or on-premise installation of Kubernetes,” says Bob Killen. Two pain points trip up most people wanting to install a cluster on bare metal:
- Routing traffic into the cluster, for example, setting up a load balancer or a similar entry point through which external clients can reach your services.
- Configuring storage for the workloads you need to run on bare metal.
To start this off, let’s look into the best way to configure traffic routing for Kubernetes on bare metal. (We’ll address the storage question in a future blog.)
Routing Traffic to Kubernetes on Bare Metal
If you are using one of the public clouds like GCP or AWS, routing traffic to your Kubernetes cluster is relatively straightforward: you can easily add on one of their convenient load-balancing services. And if you are using one of the managed Kubernetes services like GKE, exposing a service is even easier because an ingress controller is built in.
One of the main problems is that most standard out-of-the-box load balancers can only be deployed to a public cloud provider and are not supported for on-premise installations.
There has been some movement toward better support for on-premise installs with recent projects like MetalLB, an on-premise load balancer. Other available options for on-premise traffic routing include NGINX, or you can manually configure TCP-level or round-robin-DNS-style load balancing.
Methods to Route Traffic - Pros and Cons
There are several ways to access your services within a cluster. Below we have listed the recommended methods and what the pros and cons of each are:
ClusterIP
The ClusterIP provides an internal IP to an individual service running on the cluster. On its own, this IP cannot be used to access the service from outside the cluster; however, when used with kubectl proxy, you can start a proxy server and access the service. This method should not be used in production.
Pros: Good for quick debugging. In fact, the only time you should use this method is when you are accessing an internal Kubernetes dashboard (or another internal service) or debugging your service from your laptop.
Cons: Since this method requires you to run kubectl as an authenticated user, it is recommended that you don't use it in production. Doing so would expose your service to the internet and could, therefore, risk the security of your entire cluster.
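As a rough sketch of what this looks like in practice, here is a minimal Service of the default ClusterIP type. The service name, selector label, and ports are illustrative placeholders, not from any real deployment:

```yaml
# Minimal Service of the default type, ClusterIP.
# "my-service" and the app=my-app selector are illustrative placeholders.
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: ClusterIP        # the default; shown explicitly for clarity
  selector:
    app: my-app          # forwards to Pods labeled app=my-app
  ports:
    - port: 8080         # cluster-internal port of the Service
      targetPort: 8080   # container port the traffic is sent to
```

With kubectl proxy running (it listens on localhost:8001 by default), such a service would be reachable at http://localhost:8001/api/v1/namespaces/default/services/my-service:8080/proxy/ (again, for debugging only).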
NodePort
This is the most rudimentary way to open up traffic to a service from the outside. It involves opening a specific port on your nodes; any traffic sent to that port is then forwarded to the service. If you use a NodePort and don't specify a particular port for it in the YAML file, Kubernetes will pick a random port from its allowed range. As a general rule, it's best to always let Kubernetes pick its own port.
Pros: Provides quick access to your service and is suitable for running a demo app or a service that is not in production.
Cons: There are many downsides to this method: you can only expose one service per port, only ports between 30000 and 32767 can be used, and if the IP of your machine changes, your services will be inaccessible.
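As a sketch, a NodePort Service might look like the following (the service name, selector, and ports are illustrative placeholders):

```yaml
# Illustrative NodePort Service; names and ports are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 8080          # cluster-internal port
      targetPort: 8080    # container port
      # nodePort: 30080   # optional; leave it out and Kubernetes picks a
      #                   # free port in the 30000-32767 range (recommended)
```

Once applied, the service is reachable at http://<any-node-ip>:<nodePort>.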
Ingress
An Ingress is a collection of rules that allow inbound connections to reach the cluster's services; it acts much like a router for incoming traffic. Ingress is HTTP(S)-only, but it can be configured to give services externally reachable URLs, load balance traffic, terminate SSL, offer name-based virtual hosting, and more. Kubernetes comes pre-configured with ingress support for some out-of-the-box load balancers like NGINX and ALB, but these, of course, will only work with public cloud providers. Your option for on-premise is to write your own controller that will work with a load balancer of your choice.
Pros: Flexible architecture that can be completely customized to suit your needs.
Cons: Supports HTTP rules on the standard ports 80 and 443 only. You will need to build your own ingress controller for your on-premise load balancing needs, which results in a lot of extra work that must be maintained.
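To make the idea concrete, here is a sketch of an Ingress resource using the networking.k8s.io/v1 API. The hostname, service name, and port are placeholder assumptions, and the rules only take effect if an ingress controller is running in the cluster:

```yaml
# Illustrative Ingress; hostname and backend names are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
    - host: app.example.com       # name-based virtual hosting
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service  # an existing Service in the same namespace
                port:
                  number: 8080
```

TLS termination can be added with a spec.tls section; other behavior (rewrites, timeouts, and so on) is typically set through controller-specific annotations.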
LoadBalancer
A load balancer can handle multiple requests and multiple addresses and can route and manage traffic into the cluster. This is the best way to handle traffic to a cluster, but most commercial load balancers can only be used with public cloud providers, which leaves those who want to install on-premise short of options.
Now, however, with the recently released MetalLB, it's possible to deploy a load balancer on-premise, or, by following the instructions from NGINX, you can set up a TCP or UDP round-robin method of load balancing.
Pros: Scales with your website by efficiently redistributing traffic as it increases and can handle multiple addresses and requests.
Cons: Out-of-the-box solutions for on-premise load balancing are in alpha and therefore often require a hand-forged and/or more complex setup.
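With MetalLB (or a cloud provider's controller) in place, exposing a service through a load balancer comes down to the Service type. A sketch, with placeholder names and ports:

```yaml
# Illustrative LoadBalancer Service; names and ports are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80          # externally exposed port
      targetPort: 8080  # container port
```

On a public cloud, the provider provisions an external load balancer and fills in the service's external IP; on bare metal with MetalLB installed, MetalLB assigns the external IP from an address pool you configure.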
Not highlighted here are HostNetwork and HostPort, both of which can also be used to access services. These are best left up to Kubernetes to manage. If you need to access a service for debugging purposes, the Kubernetes docs suggest you use NodePort:
“Don’t specify a hostPort for a Pod unless it is absolutely necessary. When you bind a Pod to a HostPort, it limits the number of places the Pod can be scheduled, because each <hostIP, hostPort, protocol> combination must be unique. If you don’t specify the HostIP and protocol explicitly, Kubernetes uses 0.0.0.0 as the default hostIP and TCP as the default protocol.”
(from the Kubernetes documentation)
Resources to consult:
- Accessing Kubernetes Pods from Outside of the Cluster
- NGINX for ingress (uses NodePort)
- MetalLB: a load balancer for bare metal Kubernetes clusters
- Open issue for load balance support on bare metal
- Understanding Kubernetes Networking: Ingress
Given the dynamic nature of Kubernetes, and for security reasons, it is generally best to use an ingress controller with a load balancer as the standard way to access your services. For bare metal, you may have to write your own ingress controller, depending on your load balancer, or you can check out MetalLB.
Published at DZone with permission of Anita Buehrle, DZone MVB. See the original article here.