Avoid Clustering of Microservice Instances
A look at deploying microservices without clustering their instances, detailing which tools to use and which methods to apply.
We finally have really great tools that simplify the deployment of our applications. With them, we can deploy our microservices quickly and scale them to as many instances as we need to handle our application load. This was a great step forward, and the adoption of those tools is growing every year.
With this new deployment model, we also need to change our applications and the way we design them so that we can benefit from it as much as possible. Unfortunately, a lot of applications are still designed with old approaches and become a burden every time a new version is deployed. Let's go through one real example I came across.
What Are the Advantages of Application Schedulers
As we mentioned above, application schedulers (Mesos/Marathon, Nomad, Kubernetes, ...) have significantly changed the thinking about application development. We need to react to it and change our application design.
What Are the Main Advantages If We Move Our Applications to the Schedulers?
- Very quick deployment of a new application or an upgrade of an existing one, and the ability to quickly increase the number of instances to withstand a higher load.
- Resource cost reduction - we have a way to use our resources/servers efficiently and exploit their full potential
- We have a backbone/framework that keeps our deployments as similar as possible
How the Design of Our Applications Changed
First of all, schedulers and microservices are not always a great solution for your system. We always need to consider the overhead of maintaining plenty of services and the infrastructure of the scheduler itself. Very often, it's simply easier to write one application and deploy it on a virtual machine.
However, if we find that the microservices design is a suitable design for our system, then we very likely want to go with some scheduler as well.
What Design Possibilities Do the Schedulers Offer to Our Applications?
- We can write small applications dedicated to only one purpose. Using this principle, we can reason about the application much more easily and see the consequences of future changes much more quickly.
- We can follow a functional design and have only one input to our application and one output from it. For example, we read from an external queue (RabbitMQ) and write into a database or send a message to another microservice (a small sketch of this shape follows the list).
- We are not forced to combine multiple programming models in one application, which was very common in monolithic applications. By programming models, I mean having a REST API for handling messages in real time, batch processing by reading from external queues, and scheduled jobs doing some cleanup. Combining those models makes our applications much more complex and harder to reason about.
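To make the second point concrete, here is a minimal sketch of the one-input, one-output shape. The event types and handler names are made up purely for illustration; in a real service, the queue consumer and the database writer would just be thin adapters around this single function.

```java
// Hypothetical event types for illustration only (requires Java 16+ for records).
record PaymentReceived(String customerId, long amountCents) { }
record PaymentRecorded(String customerId, long amountCents) { }

// The whole microservice is conceptually one function: one input type in,
// one output type out. Everything else (queue consumer, DB writer) is an adapter.
@FunctionalInterface
interface PaymentHandler {
    PaymentRecorded handle(PaymentReceived event);
}

public class PaymentService {
    public static void main(String[] args) {
        PaymentHandler handler = event -> {
            // ... validate, persist to the database, or forward to another service ...
            return new PaymentRecorded(event.customerId(), event.amountCents());
        };

        PaymentRecorded result = handler.handle(new PaymentReceived("customer-42", 1999));
        System.out.println("Recorded payment for " + result.customerId());
    }
}
```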
How to Easily Lose All the Advantages
We have discussed the advantages of microservices and schedulers so far, but there are also pitfalls we need to take into account. One of them is that the design and architecture of the applications are more difficult than in the case of a monolith. Let's look at one example of how to easily screw it up.
Clustering the Instances of Your Microservice
As we mentioned above, we very likely want to write applications that are as small as possible and move the complexity to the infrastructure; however, sometimes our business logic needs some cooperation between instances, for example:
- We need to ensure that only one message of a certain customer is being processed at a time. One instance pulls the message from an external queue and needs to notify the other instances.
- We need to be able to create a distributed lock to protect a critical path across multiple instances.
- We need to keep data in a distributed cache to ensure that one instance sees that the previous dependent message has already been processed by a different instance.
We introduced in-memory Hazelcast, which creates a cluster at startup and provides a way to communicate across all the instances.
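As a rough illustration (not the original application's code), such an embedded setup can look like the following minimal sketch. Imports match Hazelcast 5.x; the map name and customer ID are made up. Each instance starts its own member, the members form a cluster, and a distributed map with per-key locks coordinates the work:

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;

public class ClusteredWorker {
    public static void main(String[] args) {
        // Each application instance starts an embedded Hazelcast member;
        // the members discover each other and form one cluster.
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // A distributed map shared by all instances, e.g. "which customer is in process".
        IMap<String, String> inFlight = hz.getMap("customers-in-flight");

        String customerId = "customer-42"; // illustrative only
        inFlight.lock(customerId);         // distributed per-key lock across all instances
        try {
            inFlight.put(customerId, "processing");
            // ... process the customer's message ...
        } finally {
            inFlight.remove(customerId);
            inFlight.unlock(customerId);
        }

        hz.shutdown();
    }
}
```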
There Are a Lot of Pitfalls That Come With Clustering Across Instances
- A slower startup: every starting instance needs to connect to the cluster.
- Application complexity: a lot of threads in the application need to handle all the features/algorithms of distributed processing: health checking, consensus, replication, and so on.
- Coordination of the Hazelcast cluster with the rest of the application. For example, if you read from an external queue and start stopping the instance, you can run into problems because your messages are still hitting the already-closed Hazelcast node.
- Stopping the instances takes more time because the Hazelcast node needs to replicate data.
- It's harder to reason about application resources: more threads that don't belong to the business logic but consume CPU, and data structures that consume memory.
How to Solve It Using Tools for Distributed Processing
The biggest problem is not the application itself; it just needs to reflect the environment in which it lives.
Consume Only One Message per Customer at a Time Across All Instances
To solve this problem, we need to consider the whole distributed environment. Why is RabbitMQ not the right tool for this case? RabbitMQ is a really great tool for independent messages and is able to scale very easily; we just need to provide more threads as consumers in one application. However, it is not the right tool for dependent messages or for keeping the ordering of messages, because we cannot enforce that all messages belonging to one customer are processed by only one instance.
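As a rough sketch of what "more threads as consumers" means with the plain Java client (the queue name, host, and consumer count are illustrative assumptions): each consumer channel receives whatever the broker dispatches to it, so there is no guarantee that all messages of one customer end up on the same consumer.

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;
import java.nio.charset.StandardCharsets;

public class ParallelQueueConsumer {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");          // assumed local broker
        Connection connection = factory.newConnection();

        // Scaling independent messages is easy: just add more consumer channels.
        // RabbitMQ dispatches messages across them, with no per-customer affinity.
        int consumers = 4;
        for (int i = 0; i < consumers; i++) {
            Channel channel = connection.createChannel();
            channel.queueDeclare("customer-events", true, false, false, null);
            channel.basicQos(10); // limit unacknowledged messages per consumer

            DeliverCallback onMessage = (tag, delivery) -> {
                String body = new String(delivery.getBody(), StandardCharsets.UTF_8);
                // ... business logic for an independent message ...
                channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
            };
            channel.basicConsume("customer-events", false, onMessage, tag -> { });
        }
    }
}
```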
Use Apache Kafka for Dependent Messages and Ensuring the Ordering
Kafka is able to keep the ordering of messages within a single partition. We just need to ensure that all messages belonging to one customer end up in the same partition. After that, we can remove all the code handling coordination between instances, and we are left with an application that contains only business logic again.
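One way to achieve that is to use the customer ID as the record key, so Kafka's default partitioner routes all of a customer's messages to the same partition. A minimal producer sketch follows; the topic name, broker address, and payload are made up for illustration.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class CustomerEventProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");             // assumed broker address
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Using the customer ID as the record key: the default partitioner hashes
            // the key, so all messages of one customer land in the same partition and
            // are therefore consumed in order by a single instance.
            String customerId = "customer-42";                         // illustrative only
            ProducerRecord<String, String> record =
                new ProducerRecord<>("customer-events", customerId, "{\"type\":\"order-created\"}");
            producer.send(record);
        }
    }
}
```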
Kafka Streams
Using Kafka Streams, we can handle a couple of major use cases and avoid clustering in our microservices.
KStream: solves the problem of processing dependent messages and keeping their ordering. All messages belonging to one customer are consumed by one instance.
KTable: solves the problem of keeping/caching data from previous dependent messages. We can cache the data belonging to one customer in one instance; we don't need a distributed cache to replicate the customer's data to all instances. A sketch of both is shown below.
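A minimal Kafka Streams sketch combining both ideas. The application ID, topic names, and the per-customer count aggregation are illustrative assumptions, not the original application's logic:

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Produced;

public class CustomerStreamApp {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "customer-stream-app"); // assumed app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // assumed broker
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();

        // KStream: records keyed by customer ID arrive in order per partition,
        // so one instance handles all dependent messages of a given customer.
        KStream<String, String> events = builder.stream("customer-events");

        // KTable: state per customer (here simply an event count) lives in the local
        // state store of the instance that owns the partition, so no distributed
        // cache is needed to share it between instances.
        KTable<String, Long> eventCounts = events.groupByKey().count();

        eventCounts.toStream().to("customer-event-counts",
                Produced.with(Serdes.String(), Serdes.Long()));

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```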
Summary
The path to clean applications sometimes requires broader changes in your distributed environment and choosing the right tools for your use cases. It's always up to you to figure out whether it's worthwhile to introduce new tools that require additional infrastructure overhead, or to tackle the problems in your code.
Thank you for reading my article and please leave comments below. If you would like to be notified about new posts, then start following me on Twitter!