
Setting Up an NGINX Ingress Controller on PMKFT


Take a look at how to set up and expose an NGINX Ingress Controller on Platform9, and how to install it using Helm.


The vast majority of Kubernetes clusters are used to host containers that process incoming requests, from microservices to full web applications. Having these incoming requests come into a central location, and then handing them out via services in Kubernetes, is the most secure way to configure a cluster. That central incoming point is an ingress controller.

The most common product used as an ingress controller for privately-hosted Kubernetes clusters is NGINX. NGINX has most of the features enterprises are looking for, and will work as an ingress controller for Kubernetes regardless of which cloud, virtualization platform, or Linux operating system Kubernetes is running on.

If you do not yet have a Platform9 Managed Kubernetes Free Tier (PMKFT) account, sign up here.

First Steps

The first step required to use NGINX as an Ingress controller on a Platform9 Managed Kubernetes cluster is to have a running Kubernetes cluster.

In this case, the cluster we will be using is called "ingress-test" and it is listed as healthy. It is a single-node cluster running on an Ubuntu 16.04 server.

Shell

% ssh root@64.227.56.189
Welcome to Ubuntu 16.04.6 LTS (GNU/Linux 4.4.0-173-generic x86_64)
root@pmkft:~# kubectl get nodes
NAME            STATUS   ROLES    AGE   VERSION
64.227.56.189   Ready    master   10h   v1.14.8
root@pmkft:~# kubectl get namespaces
NAME              STATUS   AGE
default           Active   11h
kube-node-lease   Active   11h
kube-public       Active   11h
kube-system       Active   11h


Running kubectl get nodes and kubectl get namespaces confirms that authentication is working, that the cluster nodes are ready, and that no NGINX Ingress controller is configured yet.
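If you want to double-check, listing pods across all namespaces and filtering for NGINX is a quick generic test; no output means no NGINX pods are running (the grep filter here is an illustration, not part of the original walkthrough):

Shell

root@pmkft:~# kubectl get pods --all-namespaces | grep -i nginx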

Mandatory Components for an NGINX Ingress Controller

Because an ingress controller is a core component of Kubernetes, deploying one requires configuring more moving parts of the cluster than just a pod and a route.

In the case of NGINX, its recommended configuration has three ConfigMaps:

  • Base Deployment
  • TCP configuration
  • UDP configuration
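Of these, the TCP and UDP ConfigMaps let the controller forward raw TCP or UDP traffic to services that do not speak HTTP. As a minimal sketch, assuming a hypothetical MySQL service named mysql in the default namespace, the tcp-services ConfigMap maps an external port to a namespace/service:port target:

YAML

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # expose external port 3306, forwarding to port 3306 of service "mysql" in namespace "default"
  "3306": "default/mysql:3306"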

A service account is needed to run the service within the cluster, and that service account will be assigned a couple of roles.

A cluster role is assigned to the service account, which allows it to get, list, and read the configuration of all services and events. This could be limited if you were to have multiple ingress controllers. But in most cases, that is overkill.

A namespace-specific role is assigned to the service account to read and update all the ConfigMaps and other items that are specific to the NGINX Ingress controller's own configuration.
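As an abridged sketch of those grants (the actual mandatory manifest contains additional rules covering endpoints, secrets, ingresses, and leader-election ConfigMaps; treat the file itself as authoritative):

YAML

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
rules:
  # read access to services across the cluster
  - apiGroups: [""]
    resources: ["services"]
    verbs: ["get", "list", "watch"]
  # write access to events so the controller can report what it is doing
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "patch"]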

The last piece is the actual pod deployment into its own namespace to make it easy to draw boundaries around it for security and resource quotas.

The deployment specifies which ConfigMaps will be referenced, the container image and command line that will be used, and any other specific information around how to run the actual NGINX Ingress controller.
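As an abridged excerpt of that wiring (the image tag and flag names match the 0.28.0 manifest referenced below, but the full pod spec also includes ports, probes, and a security context):

YAML

spec:
  containers:
    - name: nginx-ingress-controller
      image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.28.0
      args:
        - /nginx-ingress-controller
        # hand each of the three ConfigMaps to the controller
        - --configmap=$(POD_NAMESPACE)/nginx-configuration
        - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
        - --udp-services-configmap=$(POD_NAMESPACE)/udp-services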

NGINX maintains a single file in GitHub, linked to from the Kubernetes documentation, that has all of this configuration spelled out in YAML and ready to deploy.

To apply this configuration, the command to run is:

Shell

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.28.0/deploy/static/mandatory.yaml


This will generate the following output:

Shell

namespace/ingress-nginx created
configmap/nginx-configuration created
configmap/tcp-services created
configmap/udp-services created
serviceaccount/nginx-ingress-serviceaccount created
clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole created
role.rbac.authorization.k8s.io/nginx-ingress-role created
rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding created
deployment.apps/nginx-ingress-controller created
limitrange/ingress-nginx created


Exposing the NGINX Ingress Controller

Once the base configuration is in place, the next step is to expose the NGINX Ingress Controller to the outside world so it can start receiving connections. On AWS, GCP, or Azure, this could be done through a load balancer. When deploying on your own infrastructure, or on a cloud provider with fewer capabilities, the other option is to create a service with a NodePort to allow access to the Ingress Controller.

The NGINX-provided service-nodeport.yaml file, located in GitHub, defines a service that runs on ports 80 and 443. It can be applied with a single command, as before.
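Abridged, the service that file defines looks roughly like the sketch below (port names and selector labels are as in the upstream file at the 0.28.0 tag; check the linked file for the authoritative version):

YAML

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx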

Shell

root@pmkft:~# kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.28.0/deploy/static/provider/baremetal/service-nodeport.yaml
service/ingress-nginx created


The final step is to make sure the Ingress controller is running.
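A quick way to verify is to list the pods and the service in the ingress-nginx namespace; the controller pod should be Running, and the service should show its NodePorts mapped to ports 80 and 443 (exact output will vary by cluster):

Shell

root@pmkft:~# kubectl get pods --namespace ingress-nginx
root@pmkft:~# kubectl get services --namespace ingress-nginx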

Install via Helm

Platform9 supports Helm 3, and that deployment method is available to anyone who wants to use it; it is often much easier to manage.

To install an NGINX Ingress controller using Helm, use the chart stable/nginx-ingress, which is available in the official repository. To install the chart with the release name ingress-nginx: 

Shell

helm install ingress-nginx stable/nginx-ingress


If the Kubernetes cluster has RBAC enabled, then run: 

Shell

helm install ingress-nginx stable/nginx-ingress --set rbac.create=true
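Either way, Helm can confirm the release afterward (standard Helm 3 commands, shown here as a quick check rather than part of the original walkthrough):

Shell

helm list
helm status ingress-nginx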


Exposing Services Using NGINX Ingress Controller

Now that an ingress controller is running in the cluster, you will need to create services that leverage it, using host-based mapping, URI mapping, or both.

Here is what a host-based service mapping through an ingress controller looks like, using the resource type "Ingress".
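This is a minimal sketch, assuming a hypothetical backend service named web-service listening on port 80 and the host app.example.com (the networking.k8s.io/v1beta1 API version matches the v1.14 cluster used here):

YAML

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    # tell this NGINX controller to handle the resource
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: web-service
              servicePort: 80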

Using a URI involves the same basic layout, but with more details specified in the "paths" section of the YAML file. When TLS encryption is required, you will need to have certificates stored as secrets inside Kubernetes. This can be done manually or with an open-source tool like cert-manager. The YAML file needs a little extra information to enable TLS; mapping from port 443 to port 80 is done in the ingress controller.
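A sketch combining the URI paths and the TLS block, again with hypothetical names: two path rules fan requests out to separate backends, and the tls block references a certificate stored in a secret named app-example-tls (created manually or by cert-manager):

YAML

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-ingress-tls
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  tls:
    - hosts:
        - app.example.com
      # secret holding the TLS certificate and key
      secretName: app-example-tls
  rules:
    - host: app.example.com
      http:
        paths:
          # requests to /api go to one backend, everything else to another
          - path: /api
            backend:
              serviceName: api-service
              servicePort: 80
          - path: /
            backend:
              serviceName: web-service
              servicePort: 80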

Next Steps

With a fully functioning cluster and ingress controller, even a single-node one, you are ready to start building and testing applications just as you would in your production environment, with the same ability to test your configuration files and application traffic routing. You will just have some capacity limitations that true multi-node clusters won't have.


Published at DZone with permission of Kamesh Pemmeraju, DZone MVB. See the original article here.

