Understanding Azure Load Balancing Solutions
Let's learn about the various load balancing solutions available in Azure to see which is the best for your application or project.
In this article, we will look at the various load balancing solutions available in Azure and which one should be used in which scenario.

Load balancers are essential components when it comes to creating highly available web applications. We have all used load balancers with traditional on-premises servers, where our application runs on n instances and a load balancer sits in front of these servers, distributing load to them based on some predefined algorithm and connection affinity settings.

Moving to the cloud, we need to understand how we can achieve the same load balancing using Azure components. Load balancing in cloud applications requires much more thought than placing a simple load balancer in front of some servers: we could have services hosted on PaaS, services running on separate instances for separate tenants, and applications running on multiple servers that are geographically distributed across the world.

For this very reason, there are multiple components available in Azure for load balancing. Each of these components has a different purpose, and we need to choose the right component for the scenario to achieve the optimal application architecture.
Azure Load Balancing Solutions
There are mainly three load balancing components available in Azure:

- Azure Load Balancer
- Azure Application Gateway
- Azure Traffic Manager

Let's go through these components one by one and try to understand when to use each of them effectively.
Azure Load Balancer
Azure Load Balancer is a load balancer in the more classical sense, as it can balance load for VMs in much the same way we used traditional load balancers with our on-premises servers. Since Azure Load Balancer is designed for cloud applications, it can also balance load to containers and PaaS applications along with VMs.

But the similarity with traditional load balancers ends there, mainly because Azure Load Balancer actually works at the transport layer (layer 4 of the OSI model). This means it distributes network traffic within the same Azure data center, but it cannot provide the features traditional load balancers offered at the session and application layers, since those are layer 7 constructs of the OSI model.

The load balancer is configured with load balancing rules, and these rules work at the port level. A rule maps a source (frontend) port to a destination (backend) port, so that whenever a request arrives on the frontend port, it is forwarded on the backend port to one virtual machine from the group of virtual machines (or applications in the VNet) attached to the load balancer.
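As a toy illustration of how such a rule behaves, the sketch below models a frontend-to-backend port mapping with hash-based backend selection. Azure Load Balancer does use a five-tuple hash by default, but the VM names, ports, and hashing details here are all invented for the example and are not the service's actual implementation.

```python
import hashlib

# Illustrative only: a toy model of a layer-4 load balancing rule.
# The backend names and ports below are made up for this sketch.
BACKEND_POOL = ["vm-0", "vm-1", "vm-2"]
RULE = {"frontend_port": 80, "backend_port": 8080, "protocol": "TCP"}

def pick_backend(src_ip: str, src_port: int, dst_ip: str) -> tuple[str, int]:
    """Return (backend VM, backend port) for an incoming connection.

    The backend is chosen by hashing the connection's five-tuple, so all
    packets of one flow consistently reach the same VM (flow affinity).
    """
    five_tuple = f"{src_ip}:{src_port}->{dst_ip}:{RULE['frontend_port']}/{RULE['protocol']}"
    digest = hashlib.sha256(five_tuple.encode()).digest()
    index = int.from_bytes(digest[:4], "big") % len(BACKEND_POOL)
    return BACKEND_POOL[index], RULE["backend_port"]

# The same five-tuple always lands on the same backend:
vm_a, port = pick_backend("203.0.113.7", 51000, "20.0.0.1")
vm_b, _ = pick_backend("203.0.113.7", 51000, "20.0.0.1")
assert vm_a == vm_b and port == 8080
```

Note how the rule knows nothing about HTTP sessions or cookies; it only sees IPs, ports, and protocol, which is exactly the layer 4 limitation discussed above.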
Azure Load Balancer can be used in two configuration modes:

- External — public load balancing
- Internal — internal load balancing
External — Public Load Balancing
In this mode, the load balancer is assigned a public IP address so that it can accept requests coming in from the internet. Client applications and services call the load balancer from the internet, and based on the configured rules it distributes the incoming traffic over VMs, containers, or apps.
Internal — Internal Load Balancing
The internal load balancer is essentially the same as the external one, but it uses a private IP address, so it can be called only from applications within the virtual network to which it is attached.

Azure Load Balancer helps us design high availability at the infrastructure level. However, some scenarios demand more advanced features from the load balancing component, such as connection affinity, security, and SSL termination. Azure Load Balancer cannot provide these; to get them we need a solution that can handle the layer 7 constructs of the OSI model, i.e., the application and session layers. Let's look at how we can achieve this in the next section.
Azure Application Gateway
Azure Application Gateway is a layer 7 load balancer, so it has access to the application and session payload, which makes it possible for the Application Gateway to provide much more feature-packed load balancing, such as sticky sessions and connection affinity. Since Application Gateway has more information than Azure Load Balancer, more complex routing and load balancing can be configured. Application Gateway acts as a reverse proxy service: it terminates the client connection and forwards requests to backend endpoints.

In my opinion, if we are working at an application level where the load balancer should be available publicly, there are more use cases in which Application Gateway makes much more sense than Load Balancer.

Application Gateway can be thought of as "Load Balancer++": it runs at layer 7 and provides more features than Load Balancer. Application Gateway can also route traffic based on the URL, which is very useful when developing multi-tenant applications where each tenant has separate VM instances and the tenant identifier comes in the URL.
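As a rough sketch of that URL-based routing, the toy router below picks a backend pool from the request path prefix, similar in spirit to Application Gateway's path-based routing rules. The tenant paths and pool names are made up for the example.

```python
# Illustrative only: a toy model of layer 7, path-based routing for a
# multi-tenant application. All paths and pool names are invented.
from urllib.parse import urlparse

PATH_RULES = [
    ("/tenant-a/", "pool-tenant-a"),
    ("/tenant-b/", "pool-tenant-b"),
]
DEFAULT_POOL = "pool-default"

def route(url: str) -> str:
    """Pick a backend pool from the request URL's path prefix."""
    path = urlparse(url).path
    for prefix, pool in PATH_RULES:
        if path.startswith(prefix):
            return pool
    return DEFAULT_POOL

assert route("https://app.example.com/tenant-a/orders") == "pool-tenant-a"
assert route("https://app.example.com/health") == "pool-default"
```

A layer 4 load balancer could not make this decision at all, because the URL only exists in the application payload it never inspects.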
Azure Traffic Manager
So far, we have seen load balancing solutions that cater to load balancing within a data center. Load Balancer and Application Gateway are the components to use to achieve high availability within a data center. But with the cloud, we can also architect our applications so that they are geographically distributed. How do we balance the load across geographies then?

Azure Traffic Manager exists for exactly this purpose. Azure Traffic Manager uses DNS to redirect requests to the endpoint in the most appropriate geographical location. Traffic Manager does not see the traffic passing between the client and the service; it simply redirects the request to the most appropriate endpoint. Geographical location endpoints are internet-facing, reachable public URLs.

Azure Traffic Manager works at the DNS level, i.e., it distributes the load over multiple regions and data centers using rules configured at the DNS level. The client makes a DNS request and, based on the location of the client's DNS resolver, Azure Traffic Manager finds the nearest region and sends its endpoint back to the client in the DNS response.
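That DNS-level behavior can be sketched as a toy resolver that answers with the lowest-latency region's endpoint, in the spirit of Traffic Manager's performance routing method. The region names, hostnames, and latency figures below are invented; the real service derives proximity from its own network measurements.

```python
# Illustrative only: a toy model of latency-based (performance) DNS
# routing. Endpoints and latency numbers are invented for this sketch.
ENDPOINTS = {
    "westeurope": "app-weu.example.com",
    "eastus": "app-eus.example.com",
    "southeastasia": "app-sea.example.com",
}

# Hypothetical latency (ms) from each client location to each region.
LATENCY = {
    "Amsterdam": {"westeurope": 8, "eastus": 85, "southeastasia": 160},
    "Boston": {"westeurope": 80, "eastus": 10, "southeastasia": 210},
}

def resolve(client_location: str) -> str:
    """Answer the DNS query with the lowest-latency region's endpoint."""
    region = min(LATENCY[client_location], key=LATENCY[client_location].get)
    return ENDPOINTS[region]

assert resolve("Amsterdam") == "app-weu.example.com"
assert resolve("Boston") == "app-eus.example.com"
```

Once the client has the answer, it talks to that regional endpoint directly; the resolver (like Traffic Manager) never sits in the data path.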
Designing Highly Available Apps
When we are designing large-scale applications that are highly available, we need to use all of these components together. Consider a scenario where the application is geographically distributed: it uses Azure Traffic Manager to choose the nearest region, then Azure Application Gateway to choose which application server to fetch the response from, and finally an Azure Load Balancer for infrastructure-level load balancing for the database servers.
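To make the composition concrete, here is a self-contained toy flow in which a DNS step picks the region, a URL path rule picks the tenant's web pool, and a layer 4 step picks a database server. Every region, pool, and server name in it is invented for the sketch.

```python
# Illustrative only: the three load balancing tiers composed end to end.
REGIONS = {"Amsterdam": "westeurope", "Boston": "eastus"}   # DNS tier (Traffic Manager role)
TENANT_POOLS = {"/tenant-a/": ["web-a-0", "web-a-1"]}       # layer 7 tier (Application Gateway role)
DB_POOL = ["db-0", "db-1"]                                  # layer 4 tier (internal Load Balancer role)

def handle(client_city: str, path: str, conn_id: int) -> tuple[str, str, str]:
    """Return (region, web server, db server) chosen for one request."""
    region = REGIONS[client_city]                   # 1. DNS: nearest region
    web_pool = next(pool for prefix, pool in TENANT_POOLS.items()
                    if path.startswith(prefix))     # 2. URL path: tenant's web pool
    web = web_pool[conn_id % len(web_pool)]         #    pick a web server from the pool
    db = DB_POOL[conn_id % len(DB_POOL)]            # 3. layer 4: pick a database server
    return region, web, db

assert handle("Amsterdam", "/tenant-a/orders", 7) == ("westeurope", "web-a-1", "db-1")
```

Each tier narrows the choice made by the previous one, which is exactly why the components complement rather than replace each other.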
Point of Interest
In this article, we looked at how each of the various Azure load balancing solutions is designed with a specific purpose in mind, and how using the right one for the right scenario (or a combination of them) can help us create highly available applications.
Published at DZone with permission of Rahul Rajat Singh, DZone MVB. See the original article here.