Tools to Help Manage Microservices


A dev discusses the many issues that can arise when switching from monolithic to microservices-based architectures and some of the tools out there to help.


By 2018, microservices architecture, a variant of service-oriented architecture, had established itself as the leading choice for developing enterprise applications. It’s not just jargon anymore; in fact, it’s today’s reality for many products. We are well aware of the driving forces behind the microservices architecture: tremendous agility, team autonomy, having a database per service, and so on.

I have designed microservices architectures from scratch and migrated enormous monoliths to microservices. Over the last few years, I have witnessed the journey of the microservices technology stack. I have evaluated Netflix OSS, Spring Cloud, and Java’s native microservices stacks. However, this world is now dominated by Kubernetes. In reality, Kubernetes was not conceived for microservices; however, the field has evolved in such a way that the Kubernetes ecosystem provides a comprehensive platform for them. Over the past few years, a state of the art has been established for microservices. This state of the art is governed by Kubernetes, Docker, Kafka, REST + JSON, gRPC + Protocol Buffers, and the cherry on the cake is Golang.

We are in the age of infrastructure as code, a step towards making the infrastructure fully autonomous. We have Docker for packaging the application. When you have hundreds of microservices or containers, you require an orchestration engine such as K8s. You are only required to provide the desired state to Kubernetes, and it guarantees to manage the rest for you. Then comes Kafka to manage asynchronous communication between the microservices. In many products, there is a clear-cut boundary between core APIs and product APIs. A combination of gRPC plus Protocol Buffers is often used for core APIs, and the de facto standard of REST plus JSON is often used for product APIs.
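To make "providing the desired state" concrete, here is a minimal sketch of a Deployment manifest. The service name, image, port, and probe path are all hypothetical placeholders — the point is only that you declare what you want (three replicas of this container) and Kubernetes reconciles reality towards it, restarting any pod that fails its liveness probe:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-service            # hypothetical microservice name
spec:
  replicas: 3                     # desired state: K8s keeps 3 pods running
  selector:
    matchLabels:
      app: orders-service
  template:
    metadata:
      labels:
        app: orders-service
    spec:
      containers:
        - name: orders-service
          image: example.com/orders-service:1.0.0   # your Docker image
          ports:
            - containerPort: 8080
          livenessProbe:          # K8s restarts the container if this fails
            httpGet:
              path: /healthz
              port: 8080
```

If a node dies or a container crashes, the Deployment controller notices the drift from the declared state and schedules replacement pods — that is the "it guarantees to manage the rest for you" part.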

Even though this microservices stack has become the norm now, something is missing from the scene. When you migrate from monoliths to microservices, things do not remain the same. In a microservices world, the organization and communication structures of your team greatly influence the design of your systems. When you actually start implementing these architectures, very soon you find that you are knee-deep in distributed systems. Distributed systems are fancy and cool; however, they are terribly hard to get right. When you have hundreds of microservices, you have a spider web of connections. The thought process of designing monoliths and microservices is not the same. There is no typical n-tier system anymore, and that’s why one has to prepare in advance to address the concerns that arise in a microservices architecture. These cross-cutting concerns directly affect the NFRs (non-functional requirements) of your system. So, what are these concerns?

You need a place to store configuration files and secrets, i.e. sensitive values such as credentials. That’s the K8s ConfigMap and Secret (note that Secrets are only base64-encoded by default; enable encryption at rest for real protection). When you have different nodes hosting multiple instances of individual microservices, you require service discovery. This is provided by the Service and Ingress resources offered by K8s. After service discovery comes load balancing, which is also handled by K8s Services and Ingress. One of the crucial aspects of the microservices world is distributed tracing, which can provide a holistic view of the system. Centralized logging helps you store the logs in a central place; in my scheme of things, it’s done by EFK, i.e. Elasticsearch, Fluentd, and Kibana. Centralized logging works in tandem with distributed tracing: as a specific API request can be handled by tens or hundreds of microservices, you need some common context in the logs generated by individual microservices.
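As a sketch of the configuration and discovery pieces, here are minimal hypothetical manifests (all names and values are placeholders). The ConfigMap and Secret hold the service's settings, and the Service gives every pod a stable DNS name with built-in load balancing across replicas:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: orders-config
data:
  LOG_LEVEL: "info"             # non-sensitive configuration
---
apiVersion: v1
kind: Secret
metadata:
  name: orders-secrets
type: Opaque
stringData:                     # base64-encoded at rest, not encrypted by default
  DB_PASSWORD: "change-me"
---
apiVersion: v1
kind: Service
metadata:
  name: orders-service          # other pods reach this via DNS: orders-service
spec:
  selector:
    app: orders-service         # traffic is spread across all matching pods
  ports:
    - port: 80
      targetPort: 8080
```

A consuming microservice never needs to know which node or pod it is talking to — it just calls `orders-service`, and the Service resource handles both discovery and load balancing.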

Another vital aspect is collecting metrics and monitoring the state of your microservices and your Kubernetes cluster. This is done by a trio of Heapster, Prometheus, and Grafana. Then comes resiliency and fault tolerance; in today’s era, this has to be at the core of your system architecture. In K8s, it’s done by Services and health checks.

Finally, you have to deal with scaling your microservices, i.e. your pods. For that, you have the HPA (Horizontal Pod Autoscaler), which scales your microservice instances based on different policies. On a few occasions, you may need to scale out beyond the K8s cluster itself, and for that there is the CA (Cluster Autoscaler). Now, I am not saying that these cross-cutting concerns do not exist in a monolith. However, the attention you have to pay to them in a monolith is much less than in microservices. You really have to take these factors into consideration before starting the design of your microservices architecture.
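Here is a hypothetical HPA manifest (names and thresholds are placeholders) as a sketch of one such scaling policy — scale between 2 and 10 replicas, targeting 70% average CPU utilization:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: orders-hpa
spec:
  scaleTargetRef:               # which workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: orders-service
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods above this, remove below it
```

When the HPA wants more pods than the cluster's nodes can hold, that is where the Cluster Autoscaler takes over and adds nodes.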

So, we have figured out a set of components that address the cross-cutting concerns. The big question is how to install these diverse components. With Helm charts, you can automate the installation, but then what about the maintenance and necessary upgrades of these components? Management is a burden. On top of these issues, what about swapping out the component that addresses a specific cross-cutting concern? For instance, you may want to replace the EFK stack or Grafana with some other tool. If you decide to own these components, then you will have to manage everything.

Well, the problems are not over. There are many other things to solve:

  • Request routing
  • Service observability
  • Rate limiting (incoming and outgoing)
  • Authentication
  • Failure management and fault injection
  • Circuit breakers
  • Rolling upgrades
  • Telemetry

With this context, we are now aware of the problem. What is the solution? Changing the code or configuration of every microservice for each of these issues is unimaginable, which is why externalizing them comes into the picture.

People started with ZooKeeper, and it did solve the problems to a certain extent. Then came the sidecar pattern, which did not provide a complete solution. Then Ambassador arrived on the scene, which actually solved many of the issues; however, it was still not a comprehensive solution. So, is there any single solution that fits all these issues? Yes: we now have the service mesh, which addresses everything. I have evaluated different service meshes such as Istio, Consul, Conduit, and Linkerd, and finally I settled on Istio.

Istio was launched in 2017 by Google. In 2018 it became pretty stable. Along with Google, IBM, Lyft, Pivotal, Cisco, and Red Hat are also involved. Let’s understand how Istio works, in a nutshell.

Istio has two planes, a control plane and a data plane. The data plane is composed of a set of intelligent proxies (Envoy) deployed as sidecars. These proxies mediate and control all network communication between microservices along with Mixer, a general-purpose policy and telemetry hub.

The control plane manages and configures the proxies to route traffic. Additionally, the control plane configures Mixers to enforce policies and collect telemetry.

Citadel handles authentication and certificate management, while Pilot provides service discovery for the Envoy sidecars along with traffic management capabilities for intelligent routing and resiliency.

Envoy is an intelligent proxy that acts as a sidecar and provides:

  • Dynamic service discovery
  • Load balancing
  • TLS termination
  • HTTP/2 and gRPC proxying
  • Circuit breakers
  • Health checks
  • Staged rollouts with %-based traffic split
  • Fault injection
  • Rich metrics
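As a taste of how that percentage-based traffic split is expressed, here is a hypothetical Istio VirtualService manifest (the service name and subsets are placeholders, and it assumes a separate DestinationRule defining the `v1` and `v2` subsets) that sends 10% of traffic to a canary version:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: orders-route
spec:
  hosts:
    - orders-service            # the service whose traffic we are routing
  http:
    - route:
        - destination:
            host: orders-service
            subset: v1
          weight: 90            # 90% of requests stay on the stable version
        - destination:
            host: orders-service
            subset: v2
          weight: 10            # 10% canary traffic to the new version
```

Notice that this is pure configuration applied to the mesh — no application code changes, which is exactly the externalization of cross-cutting concerns discussed above.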

I do not believe in just theoretical knowledge, so here is a demo of how to use the Istio service mesh to address all of these cross-cutting concerns.

I spoke about this same topic at a recent dev conference. You can find my presentation on YouTube, and the Istio service mesh demo is on YouTube as well.

Finally, I would like to emphasize that the microservices architecture journey is not as easy as one may think. Many organizations start the journey; however, they encounter the different issues I highlighted above, and their lack of architectural planning costs them a lot. Many settle on a mix of monolith and microservices as a workaround, with an anti-corruption layer design pattern in place. My state of the art is completed by Istio, and that’s the journey towards awesomeness when it comes to microservices architectures.

microservices, kubernetes, istio, service mesh, golang, microservices architecture, monolithic architecture

Opinions expressed by DZone contributors are their own.
