How To Expose a Kubernetes Service Using an Ingress Resource

In this post, learn what a Kubernetes Ingress resource is, its uses in an application context, and get a big-picture look at Kubernetes Ingress and Ingress Controller.

By Saurabh Dashora

Jul. 21, 23 · Tutorial

Ingress means the act of going in or entering; also, a means or place of entry, an entryway.

That's exactly the job of a Kubernetes Ingress resource.

The primitive approach to exposing a service to the outside world involves a Kubernetes NodePort service.

For reference, a NodePort service is a special type of service in Kubernetes.

For this service type, each cluster node opens a port on the node itself. Any incoming traffic received on that port is directed to the underlying service and the associated pods.

But what makes the Kubernetes Ingress resource special?

The answer: Ingress does a lot of heavy lifting in terms of features such as:

  • Load balancing
  • SSL termination
  • Name-based hosting
  • Operating at the application layer of the network
  • Support for multiple services with a single IP address
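As a sketch of name-based hosting, a single Ingress can route two different hostnames to two different services behind one IP address. The hostnames and the service names app1-svc and app2-svc below are assumptions purely for illustration:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: name-based-demo
spec:
  rules:
  - host: app1.example.com       # requests with this Host header...
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app1-svc       # ...are routed to this service
            port:
              number: 80
  - host: app2.example.com       # a second hostname, served from the same IP
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app2-svc
            port:
              number: 80
```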

Here’s the big-picture view of the Kubernetes Ingress resource.

Kubernetes Ingress

As you can see, one Ingress resource can act as the gatekeeper for multiple services. Each of the services can be backed by multiple pods running on their own nodes. For a client, none of these details matters as it will only be communicating to the Ingress resource.

Let’s look at setting one up as a demo and see it in action.

1. The Role of the Ingress Controller

The Ingress resource doesn’t work on its own.

You need an Ingress controller running within your cluster. Think of this controller as the brains behind the whole Ingress magic.

Now, the Ingress controller isn’t a straightforward matter, either. Different Kubernetes environments provided by vendors use different implementations of the controller. Some don’t even provide a default controller at all.

Anyway, that's beyond the scope of this post. For our demo purposes, you can check whether an Ingress controller is already present by listing all the pods in the cluster.

Shell
 
$ kubectl get po --all-namespaces


You'd be looking for something like this:

Shell
 
ingress-nginx   ingress-nginx-controller-555596df87-4p46d   1/1     Running     1 (159m ago)     2d2h


If nothing is there, don’t fret. You can always install it as an add-on.

Here’s the command to install the ingress-nginx controller. It’s a particular implementation of the Ingress controller that works well in most cases.

Shell
 
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.0/deploy/static/provider/cloud/deploy.yaml


When you execute the above command, a bunch of resources get created.

Shell
 
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
serviceaccount/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
configmap/ingress-nginx-controller created
service/ingress-nginx-controller created
service/ingress-nginx-controller-admission created
deployment.apps/ingress-nginx-controller created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
ingressclass.networking.k8s.io/nginx created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created


Wow, that’s a lot of things to make Ingress work. But I promise it’ll be worth it.

Check for the ingress-nginx pod again and give it a moment to start running.
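Rather than polling by hand, you can block until the controller pod is ready. The label selector below matches what the upstream ingress-nginx manifests use, so treat it as an assumption if your installation differs:

```shell
# Block until the ingress-nginx controller pod reports Ready (up to 120s)
kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=120s
```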

With the setup out of the way, it’s time to create the Ingress resource.

2. Creating the Kubernetes Ingress Resource

Below is the YAML manifest for a brand-new Ingress resource:

YAML
 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-demo
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    kubernetes.io/ingress.class: "nginx" 
spec:
  rules:
  - host: kubernetes.docker.internal
    http:
      paths:
      - path: /demo
        pathType: Prefix
        backend:
          service:
            name: nodeport-demo
            port:
              number: 80


Here's a breakdown of what's going on here:

  • The apiVersion and kind are pretty self-explanatory: we are telling Kubernetes what type of resource we want to create.
  • Next up is the metadata section. It has a name field for the Ingress resource, followed by the annotations section.
  • The annotation nginx.ingress.kubernetes.io/rewrite-target specifies the target URI to which the incoming traffic must be redirected. It's an important property, so don't miss it.
  • The next annotation, kubernetes.io/ingress.class, links the Ingress resource with the Ingress controller; hence the value 'nginx'. Note that on recent Kubernetes versions this annotation is deprecated in favor of the spec.ingressClassName field.
  • Moving on, we have the spec section. This is where the Ingress magic happens, as we specify the rules that govern the routing.
  • Within the rules section, you have the host. I have used kubernetes.docker.internal as this is available out of the box with Docker Desktop. Within the host, we have the http section that contains a list of paths.
  • For each path, you need to specify the path value, its type, and the corresponding backend service name and port.
  • In the above example, I'm pointing the /demo path to a service named nodeport-demo available on port 80.

And that’s basically all that is needed.
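To try it out, save the manifest to a file and apply it, then confirm the resource exists. The filename ingress-demo.yaml is just an assumption:

```shell
# Apply the Ingress manifest and confirm it was created
kubectl apply -f ingress-demo.yaml
kubectl get ingress ingress-demo
```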

Note that if you use any other host such as demo.example.com, you need to make appropriate changes to the DNS so that it resolves the domain name to the IP of the Ingress controller. If you are trying this out locally on something like Docker Desktop, you can directly use kubernetes.docker.internal as the hostname.

An important point to remember is that we can also use the same Ingress to expose multiple services.
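For instance, the rules section could fan out different paths on the same host to different services. The second path and its service name, another-svc, are hypothetical:

```yaml
spec:
  rules:
  - host: kubernetes.docker.internal
    http:
      paths:
      - path: /demo
        pathType: Prefix
        backend:
          service:
            name: nodeport-demo
            port:
              number: 80
      - path: /other           # a second path on the same host...
        pathType: Prefix
        backend:
          service:
            name: another-svc  # ...routed to a different (hypothetical) service
            port:
              number: 80
```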

Also, in case you are looking for the definition of the NodePort service, here's the YAML for that as well:

YAML
 
apiVersion: v1
kind: Service
metadata:
  name: nodeport-demo
spec:
  type: NodePort 
  ports:
  - port: 80
    targetPort: 3000
    nodePort: 30100
  selector:
    app: hello-service
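The service above selects pods labeled app: hello-service and forwards traffic to container port 3000. A minimal backing Deployment might look like the sketch below; the image name is a placeholder assumption standing in for any application that serves HTTP on port 3000:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-service
  template:
    metadata:
      labels:
        app: hello-service        # must match the service's selector
    spec:
      containers:
      - name: hello
        image: my-hello-image:latest  # placeholder: any app listening on port 3000
        ports:
        - containerPort: 3000
```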


Once you have applied the resources and made changes to the DNS if needed, you can actually see the Ingress in action.

You can go to your browser and visit the URL http://kubernetes.docker.internal/demo and if there is a proper backing application, you’ll see the response.
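If you'd rather test from the command line, a curl against the controller with an explicit Host header works too. This assumes the controller is reachable on localhost, as it typically is with Docker Desktop:

```shell
# Send a request through the Ingress, setting the Host header explicitly
curl -H "Host: kubernetes.docker.internal" http://localhost/demo
```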

3. How the Kubernetes Ingress Actually Works

Though things may be working fine, it’s also important to understand how the wheel actually turns.

And there are a few interesting things about how the Kubernetes Ingress actually works.

Here’s an illustration showing the same.

How the Kubernetes Ingress Actually Works

The below steps can help you figure it out:

  • The client first performs a DNS lookup of the hostname from the DNS server and gets the IP address of the Ingress controller.
  • Then, the client sends an HTTP request to the Ingress controller with the hostname in the Host header.
  • The controller determines the correct service based on the hostname, checks the Kubernetes Endpoints object for the service, and forwards the client’s request to one of the pods.
  • Note that the Ingress controller doesn’t forward the request to the service. It only uses the service to select a particular pod.
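You can see the pod IPs the controller picks from by inspecting the Endpoints object behind the service:

```shell
# List the pod IP:port pairs backing the service; the Ingress controller
# forwards directly to one of these rather than to the service's cluster IP
kubectl get endpoints nodeport-demo
```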

Conclusion

That’s all for this post. 

But don’t think that Kubernetes Ingress is done and dusted. 

There are a lot of other use cases such as exposing multiple services or enabling TLS support.
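As a taste of the TLS use case, the Ingress spec has a tls section that references a Kubernetes Secret holding the certificate. The secret name demo-tls below is hypothetical and would need to be created separately:

```yaml
spec:
  tls:
  - hosts:
    - kubernetes.docker.internal
    secretName: demo-tls         # hypothetical Secret holding tls.crt and tls.key
  rules:
  - host: kubernetes.docker.internal
    http:
      paths:
      - path: /demo
        pathType: Prefix
        backend:
          service:
            name: nodeport-demo
            port:
              number: 80
```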


Published at DZone with permission of Saurabh Dashora. See the original article here.

Opinions expressed by DZone contributors are their own.

