
Exposing Microservices in Kubernetes Clusters


What is just below the service?



Kubernetes (k8s) natively facilitates service discovery through the DNS service that comes right out of the box. No extra setup is required to access the microservices within the cluster, which lets the architect focus on the service definitions and dependencies without worrying about the network setup between the microservices. Still, we might find ourselves in situations where we need to expose the microservices in non-traditional ways, and below I present options for handling these more advanced setups.
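To illustrate the built-in discovery, consider a minimal ClusterIP Service sketch (the service name backend, the default namespace, and the ports are hypothetical): any Pod in the cluster can resolve it at backend.default.svc.cluster.local without any extra configuration.

apiVersion: v1
kind: Service
metadata:
  name: backend             # hypothetical service in the "default" namespace
spec:
  selector:
    app: backend            # matches the labels on the backing Pods
  ports:
    - port: 80              # reachable in-cluster at backend.default.svc.cluster.local:80
      targetPort: 8080      # port the Pods actually listen on
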
You may also like: K8s KnowHow: Using A Service

NodePort

The typical use case for NodePort involves accessing a microservice at a particular port across all the nodes in a cluster. The Kubernetes master allocates a port from the range 30000 to 32767, and each Kubernetes node proxies that port into the service. This is a relatively hacky approach and should generally only be considered if the traditional options don't work out.

Why would someone want to create a NodePort service type? There are times when clients outside the cluster want to access a particular service that is not intended for public access. Setting up a NodePort and using the IP of the nodes to access the service does the trick. Sometimes it’s a stopgap solution in an infrastructure that is evolving.

Example NodePort service-config.yaml
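A minimal sketch of a NodePort Service, using hypothetical names and ports, might look like this:

apiVersion: v1
kind: Service
metadata:
  name: my-service          # hypothetical name
spec:
  type: NodePort
  selector:
    app: my-app             # matches the labels on the backing Pods
  ports:
    - port: 80              # port exposed inside the cluster
      targetPort: 8080      # port the Pods listen on
      nodePort: 30080       # optional; omit to let Kubernetes pick one from 30000-32767

Once applied, clients outside the cluster can reach the service at <any-node-ip>:30080.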


Your Service configuration file must contain the following:

  • The type NodePort and port fields
  • Additionally, you can set the spec.ports[*].nodePort field if you want a specific port number. If spec.ports[*].nodePort is not specified, a port is assigned at random from the range configured by the kube-apiserver's --service-node-port-range flag (30000-32767 by default).

Internal Load Balancer

An internal load balancer is useful when we want to expose a microservice within the Kubernetes cluster and to compute resources within the same virtual private cloud (VPC). We run into this use case when the infrastructure is a combination of microservices and non-microservice systems (i.e., mixed), or when we are in the process of migrating the system design to a microservices-based architecture.

Internal LoadBalancer Illustration


In this example, we have a service deployed in k8s cluster B for which we have set up an internal load balancer. This makes the service available to the entire region (us-central in this case), so the service can be accessed from k8s cluster A. On the other hand, the load balancer would not resolve for compute instances in k8s cluster C, since it is in a different region, even though all the clusters are part of the same VPC.

Example Internal LoadBalancer service-config.yaml
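As a sketch, assuming a GKE cluster (the name, IP address, and ports below are hypothetical), the configuration might look like this:

apiVersion: v1
kind: Service
metadata:
  name: internal-service                              # hypothetical name
  annotations:
    cloud.google.com/load-balancer-type: "Internal"   # request an internal (VPC-only) load balancer
spec:
  type: LoadBalancer
  loadBalancerIP: 10.128.0.50                         # optional; must be an unused address in the subnet
  selector:
    app: my-app
  ports:
    - port: 80               # port exposed to clients in the VPC
      targetPort: 8080       # port the Pods listen on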


Your Service configuration file must contain the following:

  • The type LoadBalancer and port fields.
  • An annotation, cloud.google.com/load-balancer-type: "Internal", which specifies that an internal load balancer is to be configured.
  • The loadBalancerIP field enables you to choose a specific IP address within the subnet. The IP address must not be in use by another internal load balancer or service. If the loadBalancerIP field is not specified, an ephemeral IP is assigned to the LoadBalancer.

Some of the caveats for this service type are:

  • K8s Master nodes must be running version 1.7.2 or higher.
  • Internal load balancers are only accessible from other services within the same network and region.
  • A maximum of 50 internal load balancer forwarding rules is allowed per network.
  • Internal load balancers cannot be set up if there is already an existing ingress of type UTILIZATION pointing to the services in the cluster.

External Load Balancer

Public-facing interfaces (e.g., the frontend of a web application) can be exposed with the external load balancer option. Setting one up allocates an external IP address to the service, making it publicly available.

External LoadBalancer Illustration


In this example, we have a service deployed across multiple regions on different clusters, with an external load balancer configured to route traffic based on the user's region and availability. Requests from the us-central region are directed to the us-central k8s clusters, and in case of node failure, traffic is diverted to the other available regions.

Example External LoadBalancer service-config.yaml
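A minimal sketch of such a Service, with hypothetical names, ports, and IP address, might look like this:

apiVersion: v1
kind: Service
metadata:
  name: frontend-service          # hypothetical name
spec:
  type: LoadBalancer
  loadBalancerIP: 203.0.113.10    # optional; omit to receive an ephemeral external IP
  selector:
    app: frontend                 # matches the labels on the backing Pods
  ports:
    - port: 80                    # public port on the load balancer
      targetPort: 8080            # port the Pods listen on

The cloud provider provisions the load balancer and publishes the assigned external address in the Service's status.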


Your Service configuration file must contain the following:

  • The type LoadBalancer and port fields.
  • The loadBalancerIP field enables you to choose a specific IP address. If the loadBalancerIP field is not specified, an ephemeral IP is assigned to the LoadBalancer.

References

Kubernetes Documentation: https://kubernetes.io/docs/concepts/services-networking/service


Further Reading

A Beginner's Guide to Kubernetes

DZone Refcard: Monitoring Kubernetes
