Managing Kubernetes Ingresses
Between your clusters and the rest of the internet is the Kubernetes Ingress.
Kubernetes has built on Docker’s containerization success to further transform the way server environments are set up. Rather than relying on a single physical server or a cloud instance, web services and apps can now run across a multi-cloud environment. Cloud provider networking capabilities extend the flexibility offered by Kubernetes even further, allowing developers to place private and public instances and services in a proper network architecture.
When we have pods and services running in private subnets, though, they are not instantly accessible from the internet. Kubernetes adds an extra layer between your services and the internet: Ingress. To use those private cluster resources from the external world, we need to set up an ingress controller and ingress rules.
Ingress defines routing rules that connect external HTTP and HTTPS requests with internal services and individual pods. The ingress controller is essentially an edge router sitting in front of your services, load balancing requests and satisfying the defined ingress rules. As with other parts of Kubernetes, there are multiple tools and solutions available for managing Ingress and simplifying control.
The Default Ingress
Ingress works much like standard server routing: you define a series of rules that dictate how external traffic is directed to internal services within the cluster. To make sure that all traffic gets routed, the Ingress configuration also includes a default rule.
An Ingress with no rules, or requests that don’t match any configured rule, will automatically be sent to the default backend. The default backend is configured as part of the ingress controller rather than as a rule of its own; it acts as a failsafe for when no rules match. Of course, you can configure the default backend according to your specific needs.
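With the `networking.k8s.io/v1` API, a default backend can also be declared directly on an Ingress resource. The sketch below is illustrative; the service name is a placeholder:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: default-backend-example
spec:
  # Requests that match no rule fall through to this backend.
  defaultBackend:
    service:
      name: fallback-service   # hypothetical catch-all service
      port:
        number: 80
```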
More About the Rules
To direct traffic a certain way, you can define specific Ingress rules and add them to your Ingress resources. Each rule has three main elements that can be configured to achieve specific results. Those elements are:
- The host or IP address: You can define a host to match traffic addressed to a certain domain (e.g. app.yourdomain.com) rather than matching all traffic that reaches the Ingress IP address. Hosts let you be more specific about how incoming traffic is routed. They also allow microservices to run in their own pods and be reached only when specific URLs are accessed.
- Paths: Paths (e.g. /login) in an Ingress resource are used to direct traffic correctly. Each path also carries fields, serviceName and servicePort, that need to be defined in order for the traffic to be routed correctly. These rules are applied before additional functions such as load balancing, for better efficiency.
- Backends: This is where you determine where traffic matching the configured host and path must be routed. By specifying service and port names that match the host and path of the rule, you connect traffic with the correct services.
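Put together, a rule combining all three elements might look like the following sketch (the domain, service name, and port are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rule-example
spec:
  rules:
  - host: app.yourdomain.com     # the host element
    http:
      paths:
      - path: /login             # the path element
        pathType: Prefix
        backend:                 # the backend element
          service:
            name: login-service  # hypothetical service
            port:
              number: 8080
```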
It is easy to see how detailed Ingress rules can be. This allows for maximum flexibility, all while maintaining the robustness of the Ingress layer and the efficiency of the whole system.
Ingress in Implementation
The possibilities are nearly limitless when it comes to Ingress in Kubernetes. You can be very specific about how incoming traffic gets routed and how services react to it. Still, there are some common practices seen across multiple implementations.
The first one is known as Single Service Ingress. As the name suggests, the Ingress rule directs all traffic, from any host and IP address attached to the environment, to a single service. This is the common Ingress approach when you have the entire application running as a single service. It is also the default Ingress method when you create the resource with `kubectl create -f .`
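A single-service Ingress can be sketched as one rule that sends every request to the same backend; the service name here is assumed:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: single-service-ingress
spec:
  rules:
  - http:
      paths:
      - path: /            # every request, any host
        pathType: Prefix
        backend:
          service:
            name: my-app   # hypothetical single service
            port:
              number: 80
```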
Next, we have the fanout. A fanout Ingress routes traffic to the right services based on pre-defined parameters. This allows users to access different services using different paths; for example, `/login` can be routed to `service1`, while `/news` gets routed to `service2`. This too is a simple way to attach paths or hosts to a specific service.
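The fanout described above could be sketched like this, reusing the `service1` and `service2` names from the example (ports are assumed):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fanout-ingress
spec:
  rules:
  - http:
      paths:
      - path: /login           # routed to service1
        pathType: Prefix
        backend:
          service:
            name: service1
            port:
              number: 80
      - path: /news            # routed to service2
        pathType: Prefix
        backend:
          service:
            name: service2
            port:
              number: 80
```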
Ingress also supports name-based virtual hosting. This means traffic to the same IP address can be routed to multiple host names at that same address. What’s interesting is that Ingress can still match incoming traffic that specifies no host. This means you can set up a failsafe for the name-based virtual hosting and have a catch-all service handling all unmatched incoming traffic.
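A sketch of name-based virtual hosting with a hostless catch-all rule might look like this (all hosts and service names are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: name-based-vhost
spec:
  rules:
  - host: app.yourdomain.com    # first virtual host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service: {name: app-service, port: {number: 80}}
  - host: api.yourdomain.com    # second virtual host, same IP
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service: {name: api-service, port: {number: 80}}
  - http:                       # no host: catch-all for unmatched traffic
      paths:
      - path: /
        pathType: Prefix
        backend:
          service: {name: catchall-service, port: {number: 80}}
```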
Don’t forget that Ingress also supports TLS. Even when you add a load balancer layer to the Ingress configuration, Ingress can still protect data in transit from client to server using a valid certificate.
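TLS is configured by referencing a Secret that holds the certificate and private key. In this sketch, the host and Secret name are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tls-ingress
spec:
  tls:
  - hosts:
    - app.yourdomain.com
    secretName: app-tls-cert   # Secret of type kubernetes.io/tls
  rules:
  - host: app.yourdomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-service  # hypothetical service; TLS terminates at the Ingress
            port:
              number: 80
```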
Load Balancing Your Traffic
One of the reasons why Ingress is so popular is its support for load balancing. Ingress can route traffic across multiple backend pods for better performance. Once again, the service layer offers freedom when it comes to configuring how load balancing works for your application.
The choice of load balancing algorithm can really push the performance of your cluster. There is also a backend weight scheme that enables better load distribution when you have limited resources or resource-intensive services requiring more attention.
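As one illustration of a weight scheme, the community NGINX ingress controller supports weighted traffic splitting through its canary annotations. This sketch would send roughly 20% of requests for a host to a secondary backend (host and service names are assumptions):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: weighted-ingress
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "20"  # ~20% of requests
spec:
  rules:
  - host: app.yourdomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-service-v2   # hypothetical secondary backend
            port:
              number: 80
```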
With Kubernetes, the use of `LoadBalancer` services depends on the cloud infrastructure you use. Some cloud providers, like Google Cloud, already add a proprietary load balancing routine to the Ingress. Others support `LoadBalancer` as is, giving you complete control over how incoming traffic gets routed to individual nodes.
In fact, the load balancer can be configured as a stable endpoint for external traffic. When configured correctly, IP addresses attached to the load balancer become the gateway for Kubernetes Ingress.
Another benefit is cost: instead of exposing every service with its own load balancer (which can get expensive when each internet-facing service gets one), with Ingress and an ingress controller you have a single load balancer serving traffic to several services according to the Ingress rules.
Regardless of the way you configure Ingress for your Kubernetes environment, it is worth noting that Ingress is a continuing work-in-progress. It is a relatively new feature that continues to make using Kubernetes in development and production easier, and I’m sure we’ll see more advances in the future.
Published at DZone with permission of Juan Ignacio Giro.