Working With Microservices and Kubernetes

Microservices work great in containers; learn how to set them up with Kubernetes.

Juan Ignacio Giro · Dec. 20, 18 · Tutorial


The fundamental concept behind a microservices-based architecture is also its biggest advantage. Rather than having a single monolithic app running everything internally, it is possible to set up multiple microservices – each handling its own functions and specific tasks – and have that big lump of an app divided into small, manageable, independently deployable services. This approach makes the app itself more reliable: when built right, a failure in one service doesn't bring down the entire system.

Microservices run great in containers, and Kubernetes offers one of the best environments of them all for container orchestration. Once again, it is the way Kubernetes is designed that makes its ecosystem a natural fit for apps built from many microservices.

Microservices in Kubernetes

Before we can explore how the two work so well together, we need to take a closer look at Kubernetes itself. In a more conventional container setup, you use containers to separate certain functions within a single server or environment.

Using LAMP as an example, you can put MySQL in one container, and the web server and PHP in another. The two containers can then communicate with each other, creating a usable web server capable of running PHP-based apps. With this setup, however, production-grade features like scaling and fault tolerance become incredibly complex.
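In a plain Docker setup, that LAMP split might look like the following Compose file; a minimal sketch, with the image names and password chosen purely for illustration:

```yaml
# docker-compose.yml -- a hypothetical two-container LAMP-style setup
version: "3"
services:
  web:
    image: php:8-apache        # web server + PHP in one container
    ports:
      - "8080:80"
    depends_on:
      - db
  db:
    image: mysql:8             # MySQL in its own container
    environment:
      MYSQL_ROOT_PASSWORD: example
```

Everything here runs on one host; spreading these containers across machines, restarting them on failure, or scaling them out is left entirely to you.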

Kubernetes simplifies the whole thing. In Kubernetes, Pods function the way standard containers do, each running as an independent unit. What's different is that you can run Pods across multiple servers in a cluster. Other than that, Pods are as easy to set up as standard containers.

It is then up to Deployments to govern how each Pod instance behaves. To top it all off, you can add Kubernetes Services to control traffic to groups of Pods, all without having to worry about where the Pods are located and how many Pods you need to control. Services handle things such as port management and load balancing.

So, how does the way Kubernetes functions help microservices orchestration?

With an environment made for microservices, the possibilities are endless. You can have functions such as authentication, API gateways, data management, and other small services (well, microservices) all run from their own Pods. Implementing a complex structure of microservices becomes straightforward.

Here's another big advantage of pairing microservices with Kubernetes: unrivaled flexibility. Each Pod and the services in it can be developed, tested, and maintained by separate teams remotely. They can be scaled up or down independently. You can even set up the API gateway to accommodate changes to requests and formats.

Getting Started With Microservices and Kubernetes

A system that uses microservices is never simple, so setting one up using containers can seem equally complex. That said, there are some basic steps you can take to get the right environment for your microservices up and running in no time.

You can start by setting up and running Kubernetes on your own system; Minikube is the quickest way to go. It makes it easy to run a single-node Kubernetes cluster locally and is ideal for users looking to try out Kubernetes or develop with it day to day.

There are several ways to deploy Kubernetes clusters in a cloud-based environment:

  • With native cloud-provider tools (e.g., GKE, AKS, and EKS)
  • Or using third-party tools (like Canonical conjure-up or Kops)

Once we have a Kubernetes cluster up and running, it's time to deploy the microservices (each one encapsulated in its own container) on top of it. Through YAML files, we describe the desired state of every Kubernetes object in the cluster (Pods, Deployments, Services, etc.). Kubernetes then works to make the current state of those objects match the desired state we defined, performing a variety of automated tasks to achieve it.

Kube Objects

Some of the main Kube objects we need to be aware of when working with microservices are:

> Pods: The smallest deployable unit on a node. A Pod is a group of one or more containers that must run together; it usually contains just one container. This object represents your microservice running on K8s.
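As a sketch, a single-container Pod manifest might look like this; the name auth-service and the image path are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: auth-service          # hypothetical microservice name
  labels:
    app: auth                 # label other objects can select on
spec:
  containers:
    - name: auth
      image: example.com/auth-service:1.0   # placeholder image
      ports:
        - containerPort: 8080
```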

> ReplicaSet: Controls how many identical copies of a pod should be running somewhere on the cluster.

> Deployment: An object that can represent an application module running on your cluster. When we create a Deployment object, we set specifications like the container image to run (Pod), the number of replicas (ReplicaSet), and the deployment strategy to use when adding or removing Pods.
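A minimal Deployment manifest tying those pieces together might be sketched as follows; the names and image are hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-deployment
spec:
  replicas: 3                 # the ReplicaSet keeps three identical Pods running
  selector:
    matchLabels:
      app: auth
  strategy:
    type: RollingUpdate       # replace Pods gradually on updates
  template:                   # the Pod spec this Deployment manages
    metadata:
      labels:
        app: auth
    spec:
      containers:
        - name: auth
          image: example.com/auth-service:1.0   # placeholder image
          ports:
            - containerPort: 8080
```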

> Services: Pods are ephemeral. They can be launched or killed at any time (e.g., when scaling up or down) and are constantly being assigned new internal IPs. Service objects offer a well-known endpoint for a group of Pods, also acting as a load balancer. For example, Pods that compose a frontend interact with the backend through the backend Service, meaning the frontend Pods don't need to be aware of which specific backend Pod they are talking to. The Service abstraction enables this decoupling.

There are several Service types, the most common being:

  • LoadBalancer: Creates a load balancer in the cloud provider, useful for exposing the Service's Pods to components outside the cluster
  • ClusterIP: Exposes the Service on a cluster-internal IP, making it reachable only from within the cluster
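A ClusterIP Service for a hypothetical backend might be sketched like this; the selector label and ports are assumptions:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend               # well-known name frontend Pods can resolve
spec:
  type: ClusterIP             # reachable only from inside the cluster
  selector:
    app: auth                 # routes traffic to Pods carrying this label
  ports:
    - port: 80                # port the Service listens on
      targetPort: 8080        # port the selected Pods listen on
```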

> ConfigMaps: Objects which allow us to decouple configuration artifacts from the container image content to keep containerized applications portable. We can pass that configuration to Pods as config files they read or as environment variables.
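As a sketch, a ConfigMap and the way a container might consume one of its keys as an environment variable (the names are hypothetical):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: auth-config
data:
  LOG_LEVEL: info             # plain, non-sensitive configuration
---
# In the Pod's container spec, the value can be consumed as an env var:
# env:
#   - name: LOG_LEVEL
#     valueFrom:
#       configMapKeyRef:
#         name: auth-config
#         key: LOG_LEVEL
```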

> Secrets: Which are intended to hold sensitive data, such as passwords or keys. Putting this information in a secret is safer and more flexible than holding it in a container image.
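A minimal Secret sketch; stringData lets you write plain values, which Kubernetes stores base64-encoded (the name and password are placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:
  DB_PASSWORD: change-me      # placeholder; supply the real value out of band
```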

Benefits in Every Use Case

There are many situations where microservices and Kubernetes are the perfect combination. The setup brings many benefits that conventional infrastructures don't. At the top of that list, we have speed.

Microservices and Kubernetes offer the kind of velocity in both development and deployment that is difficult to match. The way Kubernetes is set up allows for better updates, faster deployments, and rapid iterations, all without taking the entire server down. You even have the option to update individual Pods or commit changes to your Deployments.

Kubernetes also handles iterations better. Rather than applying updates on top of the entire environment, you take a more immutable approach to the system. When an update needs to be deployed, you create a new container image with a distinct tag, push it to the corresponding container registry, and update the Deployment definition by editing the container tag in the Pod specification. Kubernetes then automatically adjusts all ReplicaSets according to the deployment strategy, making it possible to perform updates without affecting application availability. If the update doesn't work as expected, switching back to the old version is easy.
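That update flow boils down to editing one field in the Deployment manifest. A sketch of the relevant fragment, assuming a hypothetical auth container and a RollingUpdate strategy:

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0       # keep full capacity during the rollout
      maxSurge: 1             # add one new Pod at a time
  template:
    spec:
      containers:
        - name: auth
          image: example.com/auth-service:1.1   # bumping the tag triggers the rollout
```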

And then there is the fact that the whole system is completely scalable. You can decouple components, scale individual parts of the system based on specific needs (e.g., expanding the system's database capabilities without changing the rest), and further use Services to boost the flexibility of the entire system.
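Scaling individual parts independently can even be automated with a HorizontalPodAutoscaler; a sketch targeting a hypothetical Deployment, with the thresholds chosen purely for illustration:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: auth-hpa
spec:
  scaleTargetRef:             # which object to scale
    apiVersion: apps/v1
    kind: Deployment
    name: auth-deployment     # hypothetical Deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add Pods when average CPU exceeds 70%
```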

Resources for Developers

Kubernetes comes with its own set of tools for developers, and there are now a lot of third-party tools that make working with Kubernetes even easier; we've compiled a list here. You can even run the Kubernetes Dashboard if you prefer a GUI for managing your cluster.

And let's not forget about Helm Charts. What are they, and how can they be used? Let's save that discussion for another article, shall we?


Published at DZone with permission of Juan Ignacio Giro. See the original article here.

Opinions expressed by DZone contributors are their own.
