How to Use the Open Source Tool Traefik to Direct Kubernetes Traffic
In this article, we discuss how to use the open source tool, Traefik, to better direct Kubernetes traffic.
This tutorial will show you how to ingress traffic from external sources into a Raspberry Pi Kubernetes cluster using the open source, cloud-native edge router Traefik. I’ll then detail how you can clean up the Kubernetes resources afterward.
First things first, you’ll need a k3s Raspberry Pi cluster like the one I've detailed here. Your cluster will also need internet connectivity in order to access online images. All of the sample configuration and HTML files used below are available for download from GitLab.
In this tutorial we’ll create a simple site, beginning with the YAML files required for a deployment configuration. An easy way to do this is to start with samples in the Kubernetes documentation and then modify them. That’s the case with the example below – it’s based on a sample Kubernetes deployment doc.
Make the file mysite.yaml and include this code:
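The original sample file is in the GitLab download linked above; a deployment section consistent with the description below looks like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysite-nginx
  labels:
    app: mysite-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysite-nginx
  template:
    metadata:
      labels:
        app: mysite-nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```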
This code names the deployment mysite-nginx and applies the same name as an app label. Setting replicas to 1 will create a single pod. There’s also just one container, named nginx, built from the nginx image.
When deployed, k3s downloads the nginx image from Docker Hub and uses it to create a pod. The containerPort: 80 setting tells the pod to listen on port 80 inside the container. It’s important to understand that the pod only listens inside the container, with access restricted to an internal network. This enables multiple pods to use this configuration, each listening on its own port 80 without any conflict.
A service in Kubernetes is an abstraction that enables access to a pod or set of pods. A service will provide routing to a single pod, or load balancing to multiple pod replicas.
We’ll add the following code to the mysite.yaml configuration file to define the service. (Note that configuration areas are separated with ---.)
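A reconstruction of that service section (again, the original file is in the GitLab samples) would be:

```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: mysite-nginx-service
spec:
  selector:
    app: mysite-nginx
  ports:
  - protocol: TCP
    port: 80
```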
This code names the service mysite-nginx-service. Whereas previously we had created the mysite-nginx app label for the container, now we include the selector app: mysite-nginx. This line tells the service how to find the application container. This code also defines the service protocol as TCP and tells the service to listen on port 80.
The following ingress configuration code controls how outside traffic reaches services inside the cluster. K3s includes Traefik as a pre-configured ingress controller. Here, we’ll add Traefik-specific ingress configuration code to mysite.yaml, again separated with ---.
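A reconstruction of that ingress section follows. The apiVersion shown assumes a pre-1.19 Kubernetes cluster, which matches the Traefik 1.x annotations used in this tutorial; on newer clusters, networking.k8s.io/v1 with its pathType and service backend fields would be required instead.

```yaml
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: mysite-nginx-ingress
  annotations:
    kubernetes.io/ingress.class: "traefik"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: mysite-nginx-service
          servicePort: 80
```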
This configuration names the ingress record mysite-nginx-ingress. The kubernetes.io/ingress.class annotation tells Kubernetes that Traefik is the expected ingress controller. Under rules:, the code specifies that any incoming HTTP traffic whose path begins with / is to be routed to the backend service mysite-nginx-service on port 80.
We’re now finished creating the configuration we need.
Creating a Site to Deploy
Next, we need to create a custom page to deploy (deploying now would simply give the default nginx page). Create a file called index.html and enter the following code:
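Any simple HTML will do here; the content below is just a placeholder, so feel free to write your own:

```html
<!DOCTYPE html>
<html>
<head>
  <title>My Website</title>
</head>
<body>
  <h1>Welcome to my website!</h1>
</body>
</html>
```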
For this relatively simple example, we’re going to store the file in a Kubernetes config map – this is NOT a proper Kubernetes storage mechanism and shouldn’t be used with actual sites. Run this command:
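From the directory containing index.html:

```shell
kubectl create configmap mysite-html --from-file index.html
```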
Here, we create a configmap resource called mysite-html from the local file index.html. This approach stores a file or set of files in a Kubernetes resource, which can be called out in configuration. Config map resources are intended for storing configuration files (again, we’re misusing it for the sake of this example).
Next, we’ll mount the config map in our nginx container. This is a two-step process: 1) we’ll name a volume that uses the config map, and 2) we’ll mount the volume in the nginx container.
This code completes the first step and should be added in the mysite.yaml file under the spec label and after containers:
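Assuming the config map created earlier, the volumes entry (indented to sit under the pod template’s spec:, as a sibling of containers:) looks like this:

```yaml
      volumes:
      - name: html-volume
        configMap:
          name: mysite-html
```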
Here, we’re telling Kubernetes to define a volume named html-volume, containing the contents of our config map named mysite-html.
Now, add the following code in the nginx container specification below ports:
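The mount, indented to sit inside the nginx container entry:

```yaml
        volumeMounts:
        - name: html-volume
          mountPath: /usr/share/nginx/html
```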
This code tells Kubernetes to mount the volume html-volume in the nginx container at the path /usr/share/nginx/html. We use this path because it’s the location from which the nginx image serves HTML. Mounting the volume here replaces the nginx default contents with our own.
Our configuration file’s deployment section now reads as follows:
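Putting the pieces together, the deployment section should read roughly like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysite-nginx
  labels:
    app: mysite-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysite-nginx
  template:
    metadata:
      labels:
        app: mysite-nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: html-volume
          mountPath: /usr/share/nginx/html
      volumes:
      - name: html-volume
        configMap:
          name: mysite-html
```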
Time to Deploy
The code is all set for deployment. Use this command:
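```shell
kubectl apply -f mysite.yaml
```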
In response, you’ll receive messages similar to this:
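The exact API group suffixes vary by Kubernetes version, but the output should resemble:

```
deployment.apps/mysite-nginx created
service/mysite-nginx-service created
ingress.networking.k8s.io/mysite-nginx-ingress created
```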
These messages confirm that Kubernetes has created the resources for our deployment, service, and ingress configurations.
To check the status of the pods, use this command:
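```shell
kubectl get pods
```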
If the status shows “ContainerCreating,” you’ll have to wait and run the command again. A delay on the first run is common, since k3s needs to download the nginx image in order to create the pod. When it is ready, you’ll receive the status “Running”.
Testing it Out
With the pod running, go ahead and open a browser and navigate to http://kmaster/ (the hostname of the cluster’s master node). If you see your custom page instead of the default nginx page, you’ve successfully deployed a site on your k3s cluster!
Adding Another Site
The entire k3s cluster is now running just one site. Ready to add another? For an example, let’s use a message that my dog wants to share with the world. I’ve helped her by placing this message into some HTML code, available in the samples zip file on GitLab.
We’ll repeat our technique of using a config map to store the site’s HTML. However, this time we’ll put the entire html directory in the config map, with this command:
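Assuming the site’s files were unpacked into a local directory named html, and naming the config map mydog-html to match the site, the command is:

```shell
kubectl create configmap mydog-html --from-file html
```

Pointing --from-file at a directory stores every file in that directory in the config map, keyed by filename.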
Next, we’ll create this site’s configuration file and name it mydog.yaml. This file is very similar to the previous mysite.yaml:
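The original file is in the GitLab samples; a reconstruction based on the changes described below would look like this (the ingress apiVersion again assumes a pre-1.19 cluster with Traefik 1.x):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydog-nginx
  labels:
    app: mydog-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mydog-nginx
  template:
    metadata:
      labels:
        app: mydog-nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: html-volume
          mountPath: /usr/share/nginx/html
      volumes:
      - name: html-volume
        configMap:
          name: mydog-html
---
apiVersion: v1
kind: Service
metadata:
  name: mydog-nginx-service
spec:
  selector:
    app: mydog-nginx
  ports:
  - protocol: TCP
    port: 80
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: mydog-nginx-ingress
  annotations:
    kubernetes.io/ingress.class: "traefik"
    traefik.frontend.rule.type: PathPrefixStrip
spec:
  rules:
  - http:
      paths:
      - path: /mydog
        backend:
          serviceName: mydog-nginx-service
          servicePort: 80
```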
For the most part, this file can be created by starting with mysite.yaml and replacing the term mysite with mydog. There are two other changes in the ingress section. The path is now /mydog, which tells Traefik to route all incoming requests with a path beginning with /mydog to the mydog-nginx-service. Any other paths will still receive routing to the mysite-nginx-service.
The other change is the additional annotation traefik.frontend.rule.type: PathPrefixStrip. This instructs Traefik to strip away the /mydog prefix before sending mydog-nginx-service the request. This is necessary because the mydog-nginx application doesn’t expect a prefix.
Now, we’re ready to deploy the configuration:
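```shell
kubectl apply -f mydog.yaml
```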
Doing so makes my dog’s important message to the world reachable at http://kmaster/mydog/.
The k3s cluster now hosts two sites, with Traefik directing requests to the right service based on path name. Hostname-based routing is also available as an alternative to this path-based routing. Note that these example sites are served over plain, unencrypted HTTP; adding support for SSL/TLS (HTTPS) sites on the k3s cluster would be an ideal next step.
Final Clean Up
As you move on to hosting your own sites with k3s and Traefik, you’ll likely want to remove these examples from your cluster. In most cases, you can do so by running the delete command against the same configuration files used to deploy the sites.
The following commands clean up the mysite and mydog sites:
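```shell
kubectl delete -f mysite.yaml
kubectl delete -f mydog.yaml
```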
We’ll also clean up the config maps we manually created, like this:
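Assuming the config map names used above:

```shell
kubectl delete configmap mysite-html
kubectl delete configmap mydog-html
```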
To test if your deletions succeeded, run a kubectl get pods command:
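```shell
kubectl get pods
```

With everything removed, this should report something like "No resources found in default namespace."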
This message means you have a clean canvas upon which to make your own sites with k3s and Traefik.
And there you have it – good luck!