Ingress Controllers for Kubernetes

This post explains several different ways to expose your microservices-based containerized apps and provides detailed instructions for implementing an ingress controller.

By Matt Jarvis · Oct. 24, 18 · Tutorial

Kubernetes and DC/OS are a powerful combination for serving the apps that create great customer experiences. Once your microservices-based containerized apps are up and running, you’ll need to expose them to the outside world. This blog post explains several different ways to do this and provides detailed instructions for implementing an ingress controller.

Ingress Options Abound

Kubernetes handles East-West connectivity for services internally within our Kubernetes cluster by assigning a cluster-internal IP which can be reached by all services within the cluster. When it comes to external, or North-South, connectivity, there are a number of different methods we can use to achieve this.
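
(For reference, the cluster-internal case described above is just the default Service type, ClusterIP. A minimal sketch, using placeholder names that aren't part of this walkthrough, might look like the following.)

apiVersion: v1
kind: Service
metadata:
  name: my-internal-service   # hypothetical name, for illustration only
spec:
  type: ClusterIP             # the default type; reachable only from within the cluster
  selector:
    app: my-app               # placeholder label selecting the backing pods
  ports:
  - port: 80                  # cluster-internal port
    targetPort: 8080          # container port on the pods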

Firstly, we can define a NodePort Service type. This exposes the service on a static port at each node’s IP. Any traffic sent to that port on a node is forwarded to the relevant service.
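
As a rough sketch (the names here are placeholders, not from this walkthrough), a NodePort service might look like the following; if nodePort is omitted, Kubernetes allocates one from its node port range (30000-32767 by default):

apiVersion: v1
kind: Service
metadata:
  name: my-app-nodeport        # hypothetical name, for illustration only
spec:
  type: NodePort
  selector:
    app: my-app                # placeholder label
  ports:
  - port: 80                   # cluster-internal service port
    targetPort: 8080           # container port on the pods
    nodePort: 30080            # optional; omit to let Kubernetes allocate one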

For platforms that support it, such as AWS or GCP, we can define a LoadBalancer service, which will configure an external load balancer specifically for our service. Whilst this works well on cloud platforms, it can also be very expensive, as we may end up with many load balancer services.
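
For comparison, a LoadBalancer service is declared the same way with a different type; on a supporting cloud platform, the provider provisions an external load balancer and publishes its address in the service's status. Again, the names below are placeholders:

apiVersion: v1
kind: Service
metadata:
  name: my-app-lb              # hypothetical name, for illustration only
spec:
  type: LoadBalancer           # cloud provider provisions an external load balancer
  selector:
    app: my-app                # placeholder label
  ports:
  - port: 80
    targetPort: 8080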

The most flexible option, although often the most confusing for new users, is the Ingress Controller. An Ingress Controller can sit in front of many services within our cluster, routing traffic to them and, depending on the implementation, adding functionality like SSL termination, path rewrites, or name-based virtual hosts. From an architecture perspective, ingress controllers are generally an automation layer integrated with a backend proxy service, and can generally operate at both layer 4 and layer 7.

There is a growing ecosystem of ingress controllers, some leveraging well-known load balancers and proxies, and some new cloud-native implementations. There are ingress controllers for most of the familiar tools in this space, like HAProxy and Nginx, alongside new Kubernetes native implementations like Ambassador and Contour, both of which leverage the Envoy proxy. There are also implementations to use other cloud-native proxy tools like Traefik and Kong, along with controllers which integrate with hardware load balancers such as those from F5.

Exposing Services on Kubernetes With the Nginx Ingress Controller

The ingress type is relatively new and the space is developing very rapidly, so, for the purposes of this post, we’re going to look at one of the most mature implementations, the Nginx ingress controller. It’s worth noting there are a couple of different implementations of an Nginx ingress controller, with the two most noteworthy being Nginx’s own implementation and the Kubernetes community implementation. Here’s a discussion of the differences between them.

We’re going to be using the community implementation, and here’s the code for it.

The first step is to deploy a default backend service. This is used by the Nginx ingress controller to serve requests that don't match any ingress rule, as well as when backend services are unavailable. The default backend image needs to satisfy two requirements:

  1. Serve a 404 page at /.
  2. Respond with a 200 for requests to /healthz. 

The ingress-nginx project provides us with a template yaml to use for this purpose, and the only change we’ve made from the defaults is to remove the namespace.

---

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app: default-http-backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: default-http-backend
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        # Any image is permissible as long as:
        # 1. It serves a 404 page at /
        # 2. It serves 200 on a /healthz endpoint
        image: gcr.io/google_containers/defaultbackend:1.4
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
---

apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  labels:
    app: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: default-http-backend

As we can see, this is going to deploy a single replica of the default backend container, which we are getting from gcr.io, and our service will route port 80 requests to port 8080 on the target.

Mattbook-Pro:ingress matt$ kubectl create -f default-backend.yaml 
deployment "default-http-backend" created
service "default-http-backend" created

Now that we have our default backend, we can go ahead and deploy our Nginx ingress controller. This will deploy both Nginx and the additional code for Kubernetes to control it. In the interests of clarity, we're going to deploy a very simple version of this configuration; it's worth reading the docs to understand the full scope of the functionality.

apiVersion: extensions/v1beta1
kind: Deployment
metadata: 
  name: nginx-ingress-controller
spec: 
  replicas: 1
  revisionHistoryLimit: 3
  template: 
    metadata: 
      labels: 
        k8s-app: ingress-nginx
    spec: 
      containers: 
        - args: 
            - /nginx-ingress-controller
            - "--default-backend-service=$(POD_NAMESPACE)/default-http-backend"
          env: 
            - name: POD_NAME
              valueFrom: 
                fieldRef: 
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom: 
                fieldRef: 
                  fieldPath: metadata.namespace
          image: "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.15.0"
          imagePullPolicy: Always
          livenessProbe: 
            httpGet: 
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            timeoutSeconds: 5
          name: nginx-ingress-controller
          ports: 
            - containerPort: 80
              hostPort: 80
              name: http
              protocol: TCP
      terminationGracePeriodSeconds: 60
      nodeSelector:
          kubernetes.dcos.io/node-type: public
      tolerations:
        - key: "node-type.kubernetes.dcos.io/public"
          operator: "Exists"
          effect: "NoSchedule"
---

Mattbook-Pro:ingress matt$ kubectl create -f nginx-controller-simple.yaml 
deployment "nginx-ingress-controller" created

There are a few things worth noting in this configuration, which are specific to deploying this in DC/OS. Firstly, we need to ensure that this is deployed to our public agents, which is defined by:

  nodeSelector:
          kubernetes.dcos.io/node-type: public

We have also defined a toleration matching the taint that DC/OS places on its public nodes. Public nodes carry a NoSchedule taint, so without this toleration the ingress controller could not be scheduled onto them; combined with the nodeSelector above, this ensures the deployment runs only on a public node.

tolerations:
        - key: "node-type.kubernetes.dcos.io/public"
          operator: "Exists"
          effect: "NoSchedule"

We’ve also bound the deployment container directly to port 80 on the host it is deployed onto:

          ports: 
            - containerPort: 80
              hostPort: 80
              name: http
              protocol: TCP

We could define a service and then have the ports allocated automatically by Kubernetes using a NodePort type, but DC/OS already exposes port 80 on public agents, so this simplifies our configuration. In a production environment this may not be the best option, since we need to manually ensure that no other processes are using port 80 on any of our DC/OS public agents. This includes any scenario where you are running marathon-lb in its default configuration, since marathon-lb binds to port 80 on public agents and will prevent this ingress configuration from working correctly.

If we wanted to use the NodePort type, we would remove the hostPort line from the above configuration, and then define a service, which would look something like:

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      protocol: TCP
  selector:
      k8s-app: ingress-nginx

In the NodePort configuration, we would need to find the ports the ingress controller is actually using by running:

Mattbook-Pro:ingress matt$ kubectl describe svc ingress-nginx | grep NodePort
Type:                     NodePort
NodePort:                 http  31585/TCP

Once we have the port, we would need to ensure we can connect externally to those ports, by checking firewall rules and access controls. We would then connect to our endpoint using:

Mattbook-Pro:ingress matt$ curl 54.171.202.99:31585

Now back to our example implementation. Once our ingress controller is running, we can test connectivity to our public IP address. In order to find the public IPs of your DC/OS cluster, you can refer to the relevant documentation. If you’re running in AWS as I am, you can also use the handy DC/OS CLI extension for doing exactly this. I only have a single public agent in my cluster, so using the CLI extension only returns me one result.

Mattbook-Pro:ingress matt$ dcos dcos-aws-cli publicIPs
54.171.202.99

Once we have our public IP, we can use curl to connect to port 80 on this host.

Mattbook-Pro:ingress matt$ curl 54.171.202.99
default backend - 404

We can see from the response that we're hitting our default backend, which is exactly the expected behavior, since we don't yet have any ingress rules configured.

Let’s go ahead and deploy a simple service we can use for our first test of the ingress controller. We’re going to use:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
  labels:
    app: hello-world
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: echo-server
        image: hashicorp/http-echo
        args:
        - -listen=:80
        - -text="Hello from Kubernetes!"
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  selector:
    app: hello-world
  ports:
  - port: 80
    targetPort: 80

Let’s go ahead and deploy that:

Mattbook-Pro:ingress matt$ kubectl create -f helloworld.yaml
deployment "hello-world" created
service "hello-world" created

We now need to define an Ingress for the ingress controller to configure external access to our test application.

Mattbook-Pro:ingress matt$ cat helloworld-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-world-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: hello-world
          servicePort: 80

In this configuration, we use an annotation to define which ingress class Kubernetes should use, we define the backend to route to, in this case our hello-world service, and we define the host, path, and port the ingress rule should apply to.

Mattbook-Pro:ingress matt$ kubectl create -f helloworld-ingress.yaml
ingress "hello-world-ingress" created

In order to test this, we need to do some local configuration on our kubectl host to map api.example.com to the public IP of our DC/OS cluster. This will depend on your operating system, but on macOS and Linux you will want to add an entry to your /etc/hosts file as below:

Mattbook-Pro:ingress matt$ cat /etc/hosts
##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting. Do not change this entry.
##
127.0.0.1 localhost
255.255.255.255 broadcasthost
::1 localhost
54.171.202.99 api.example.com

With that entry in our /etc/hosts file, we can now resolve api.example.com to the public IP of our DC/OS cluster, so we can use curl once again to test out the ingress rule.

Mattbook-Pro:ingress matt$ curl api.example.com
"Hello from Kubernetes!"

Success! We see that our request has been routed to the correct service. We can also test that if we use the IP address instead of the hostname, then we hit the default backend with a 404 response, as expected.

Mattbook-Pro:ingress matt$ curl 54.171.202.99
default backend - 404

Let’s add a second service, which does something slightly different:

Mattbook-Pro:ingress matt$ cat holamundo.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hola-mundo
  labels:
    app: hola-mundo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hola-mundo
  template:
    metadata:
      labels:
        app: hola-mundo
    spec:
      containers:
      - name: echo-server
        image: hashicorp/http-echo
        args:
        - -listen=:80
        - -text="Hola de Kubernetes!"
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hola-mundo
spec:
  selector:
    app: hola-mundo
  ports:
  - port: 80
    targetPort: 80

Mattbook-Pro:ingress matt$ kubectl create -f holamundo.yaml
deployment "hola-mundo" created
service "hola-mundo" created

And now let’s configure ingress to this service on a different path of the same host:

Mattbook-Pro:ingress matt$ cat holamundo-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: holamundo-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /hola
        backend:
          serviceName: hola-mundo
          servicePort: 80

Mattbook-Pro:ingress matt$ kubectl create -f holamundo-ingress.yaml
ingress "holamundo-ingress" created

Now when we curl our endpoints, we can see that on the root we have our first service, and on the /hola endpoint we have our new service.

Mattbook-Pro:ingress matt$ curl api.example.com
"Hello from Kubernetes!"
Mattbook-Pro:ingress matt$ curl api.example.com/hola
"Hola de Kubernetes!"

Let’s try something different. First, let’s add another couple of host entries to our /etc/hosts file:

Mattbook-Pro:ingress matt$ cat /etc/hosts
##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting. Do not change this entry.
##
127.0.0.1 localhost
255.255.255.255 broadcasthost
::1 localhost
54.171.202.99 api.example.com
54.171.202.99 es.example.com
54.171.202.99 gb.example.com

And now let’s add some host-based routing to our ingress configuration:

Mattbook-Pro:ingress matt$ cat hosts-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hosts-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: gb.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: hello-world
          servicePort: 80
  - host: es.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: hola-mundo
          servicePort: 80

Mattbook-Pro:ingress matt$ kubectl create -f hosts-ingress.yaml
ingress "hosts-ingress" created

With this configuration, we basically have name-based virtual hosts: depending on the host requested, we route the traffic differently. Here we are just using the root path, but we can also combine host rules with path rules for a ton of flexibility (see the sketch after the output below).

Mattbook-Pro:ingress matt$ curl gb.example.com
"Hello from Kubernetes!"
Mattbook-Pro:ingress matt$ curl es.example.com
"Hola de Kubernetes!"

We can even do path rewrites in our ingress rules:

Mattbook-Pro:ingress matt$ cat rewrite.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: rewrite
  annotations:
    ingress.kubernetes.io/rewrite-target: /old
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: "api.example.com"
    http:
      paths:
      - path: /new
        backend:
          serviceName: hello-world
          servicePort: 80

Using the ingress.kubernetes.io/rewrite-target annotation, we can ensure that any request to /new is rewritten to /old before being passed to the backend service.

I’ve only covered a small subset of the functionality that the Nginx ingress controller provides; it also supports features such as sticky sessions, TLS termination, and proxy protocol, so it’s well worth digging into the docs for further reading.
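
As one example of the features not covered in the walkthrough above, TLS termination can be declared on the Ingress resource itself. A minimal, hedged sketch, assuming a certificate and key have already been stored in a Kubernetes TLS secret named example-tls (a hypothetical name):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-world-tls           # hypothetical name, for illustration only
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - hosts:
    - api.example.com
    secretName: example-tls       # assumed pre-created secret containing tls.crt and tls.key
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: hello-world
          servicePort: 80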

Ingress Controllers Expose Apps Running in Kubernetes On DC/OS

Hopefully, we can now see that ingress controllers provide us with a powerful resource to control access to our applications running in Kubernetes on DC/OS. This is a space that’s moving very fast, so we’ll likely see more innovation around ingress over the coming months, and I’ll be trying to cover more of the options for ingress into DC/OS in future blog posts.


Published at DZone with permission of Matt Jarvis, DZone MVB. See the original article here.

