Microservices Architecture: Introduction to Auto Scaling
In this article, we look at auto scaling and the important parts of implementing auto scaling in a microservices architecture.
In this article, we focus our attention on dynamic scaling, also known as auto scaling, and why we need applications that can auto scale.
You Will Learn
- What auto or dynamic scaling is.
- Why dynamic scaling is important in a microservices context.
- How you can implement dynamic scaling in the cloud.
Cloud and Microservices Terminology
This is the fifth article in a series of six articles on terminology used with cloud and microservices. The previous four can be found here:
- Microservices Architecture: What Is Service Discovery?
- Microservices Architecture: Centralized Configuration and Config Server
- Microservices Architecture: Introduction to API Gateways
- Microservices Architecture: The Importance of Centralized Logging
The Load on Applications Varies
The load on your applications varies depending on the time of day, the day of the month, or the month of the year.
Take, for instance, amazon.com. It sees very high loads during Thanksgiving, up to 20 times the normal load. However, during major sports events such as the Super Bowl or a FIFA World Cup, the traffic could be considerably lower, because everybody is busy watching the event.
How can you set up infrastructure for applications to manage varying loads?
It is quite possible that the infrastructure needs to handle 10x the normal load.
If you have on-premise infrastructure, you need a large infrastructure in place to handle peak load.
During periods with less load, a lot of infrastructure would be sitting idle.
Cloud to the Rescue
That's where cloud comes into the picture. With cloud, you can request more resources when the load is high and give them back to the cloud when you have less load.
This is called Scale Out (creating more instances as the load increases) and Scale In (reducing instances as the load goes down).
How do you build applications that are cloud-enabled, i.e., applications that work well in the cloud?
That's where a microservices architecture comes into the picture.
Introducing Auto Scaling
Building your application using microservices enables you to increase the number of microservice instances during high load, and reduce them during times with less load.
Consider the following example of a CurrencyConversionService:
The CurrencyConversionService talks to the ForexService. The ForexService is concerned with exchange rates: for example, how many INR 1 USD converts to, or how many INR 1 EUR converts to.
The CurrencyConversionService takes a bag of currencies and amounts and produces the total amount in a currency of your choice. For example, it will tell you the total worth in INR of 10 EUR and 25 USD.
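To make the example concrete, here is a minimal, hypothetical sketch of the two services in plain Java. The class shapes and the exchange rates are illustrative assumptions (not real market data or the article author's actual code); in a real deployment each service would run as its own process behind an HTTP API.

```java
import java.util.Map;

// Hypothetical sketch: the ForexService exposes rates into INR, and the
// CurrencyConversionService totals a bag of currency amounts.
class ForexService {
    // Assumed illustrative rates, not real market data.
    private static final Map<String, Double> RATES_TO_INR = Map.of(
        "USD", 80.0,  // assumed: 1 USD = 80 INR
        "EUR", 90.0,  // assumed: 1 EUR = 90 INR
        "INR", 1.0
    );

    double rateToInr(String currency) {
        return RATES_TO_INR.get(currency);
    }
}

class CurrencyConversionService {
    private final ForexService forex = new ForexService();

    // Total worth, in INR, of a bag of (currency, amount) pairs.
    double totalInInr(Map<String, Double> bag) {
        return bag.entrySet().stream()
                  .mapToDouble(e -> e.getValue() * forex.rateToInr(e.getKey()))
                  .sum();
    }
}

public class ConversionDemo {
    public static void main(String[] args) {
        CurrencyConversionService service = new CurrencyConversionService();
        // 10 EUR + 25 USD, using the assumed rates above.
        double total = service.totalInInr(Map.of("EUR", 10.0, "USD", 25.0));
        System.out.println(total);  // 10*90 + 25*80 = 2900.0
    }
}
```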
The ForexService might also be consumed by a number of other microservices.
Scaling Infrastructure to Match Load
The load on the ForexService might be different from the load on the CurrencyConversionService. You might need to have a different number of instances of the CurrencyConversionService and ForexService. For example, there may be two instances of the CurrencyConversionService, and five instances of the ForexService:
At a later point in time, the load on the CurrencyConversionService could be low, needing just two instances. On the other hand, a much higher load on the ForexService could need 50 instances. The requests coming in from the two instances of CurrencyConversionService are distributed across the 50 instances of the ForexService.
That, in essence, is the requirement for auto scaling — a dynamically changing number of microservice instances, and evenly distributing the load across them.
Implementing Auto Scaling
There are a few important concepts involved in implementing auto scaling. The following sections discuss them in some detail.
A naming server enables something called location transparency. Every microservice registers with the naming server. Any microservice that needs to talk to another microservice asks the naming server for its location.
Whenever a new instance of CurrencyConversionService or ForexService comes up, it registers with the naming server.
When CurrencyConversionService wants to talk to ForexService, it asks the naming server for available instances.
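The register-then-lookup flow above can be sketched with a simple in-memory registry. This is an illustrative assumption of how a naming server behaves, not the API of a real one; production registries such as Eureka add heartbeats, health checks, and replication on top of this idea.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal sketch of a naming server as an in-memory map from service
// name to the URLs of its registered instances.
class NamingServer {
    private final Map<String, List<String>> registry = new HashMap<>();

    // A new instance registers its URL under its service name on startup.
    void register(String serviceName, String instanceUrl) {
        registry.computeIfAbsent(serviceName, k -> new ArrayList<>())
                .add(instanceUrl);
    }

    // A caller asks for all known instances of a service.
    List<String> lookup(String serviceName) {
        return registry.getOrDefault(serviceName, List.of());
    }
}

public class RegistryDemo {
    public static void main(String[] args) {
        NamingServer namingServer = new NamingServer();
        // Each ForexService instance registers itself as it comes up.
        namingServer.register("forex-service", "http://host1:8000");
        namingServer.register("forex-service", "http://host2:8000");
        // CurrencyConversionService asks for available ForexService instances.
        System.out.println(namingServer.lookup("forex-service"));
    }
}
```

Because callers always go through the lookup, instances can come and go freely: that is the location transparency the article describes.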
Implementing Location Transparency
CurrencyConversionService knows that there are five instances of the ForexService.
How does it distribute the load among all these instances?
That's where a load balancer comes into the picture.
A popular client side load balancing framework is Ribbon.
Let's look at a diagram to understand what's happening:
As soon as any instance of CurrencyConversionService or ForexService comes up, it registers itself with the naming server. If CCSInstance2 wants to know the URL of ForexService instances, it again talks to the naming server. The naming server responds with a list of all instances of the ForexService — FSInstance1 and FSInstance2 — and their corresponding URLs.
The Ribbon load balancer does a round-robin among the ForexService instances to balance out the load among the instances.
Ribbon offers a wide variety of load balancing algorithms to choose from.
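The round-robin strategy described above can be sketched in a few lines of plain Java. This is a simplified illustration of the idea, not Ribbon's actual implementation: each call picks the next instance in the list, wrapping around, so requests are spread evenly.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of client-side round-robin load balancing: successive calls
// cycle through the known instances of a service.
class RoundRobinBalancer {
    private final List<String> instances;
    private final AtomicInteger next = new AtomicInteger(0);

    RoundRobinBalancer(List<String> instances) {
        this.instances = instances;
    }

    // Pick the next instance, wrapping around at the end of the list.
    String choose() {
        int index = Math.floorMod(next.getAndIncrement(), instances.size());
        return instances.get(index);
    }
}

public class BalancerDemo {
    public static void main(String[] args) {
        // Hypothetical instance URLs returned by the naming server.
        RoundRobinBalancer balancer = new RoundRobinBalancer(
            List.of("http://fs1:8000", "http://fs2:8000"));
        System.out.println(balancer.choose()); // http://fs1:8000
        System.out.println(balancer.choose()); // http://fs2:8000
        System.out.println(balancer.choose()); // http://fs1:8000
    }
}
```

Since the instance list comes from the naming server on each refresh, the same round-robin keeps working as instances are added or removed.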
When to Increase and Decrease Microservices Instances
There is one question we did not really talk about: how do we know when to increase or decrease the number of instances of a microservice?
That is where application monitoring and container management, using Docker and an orchestrator such as Kubernetes, come into the picture.
An application needs to be monitored to find out how much load it has. For this, the application has to expose metrics for us to track the load.
You can containerize each microservice using Docker and create an image.
Kubernetes has the capability to manage containers and can be configured to auto scale based on the load: it can identify the application instances, monitor their loads, and automatically scale up and down.
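The scaling decision itself can be sketched as a simple rule: compare the measured load per instance against a target, compute how many instances would bring the load back to the target, and clamp the result to configured bounds. The metric and target values below are illustrative assumptions; Kubernetes' Horizontal Pod Autoscaler uses a rule of the same shape.

```java
// Hedged sketch of an orchestrator's scaling decision, assuming a single
// averaged load metric (e.g. CPU percent) and a configured target.
public class ScalingDecision {
    static int desiredInstances(int currentInstances, double avgLoadPerInstance,
                                double targetLoadPerInstance, int min, int max) {
        // desired = ceil(current * currentMetric / targetMetric),
        // clamped to the configured [min, max] range.
        int desired = (int) Math.ceil(
            currentInstances * avgLoadPerInstance / targetLoadPerInstance);
        return Math.max(min, Math.min(max, desired));
    }

    public static void main(String[] args) {
        // 5 instances each at 90% CPU with a 60% target: scale out to 8.
        System.out.println(desiredInstances(5, 90.0, 60.0, 2, 50)); // 8
        // 5 instances each at 20% CPU with a 60% target: scale in to 2.
        System.out.println(desiredInstances(5, 20.0, 60.0, 2, 50)); // 2
    }
}
```

An orchestrator re-evaluates a rule like this periodically, which is why the application must expose load metrics in the first place.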
In this article, we talked about auto scaling. We looked at important parts of implementing auto scaling — naming server, load balancer, containers (Docker), and container orchestration (Kubernetes).
Published at DZone with permission of Ranga Karanam, DZone MVB. See the original article here.