
Load Balancing—The Missing Piece of the Container World (Part 2)

Learn how a service, a group of containers, serves as the basic building block of load balancing, and what role load balancing plays as the missing piece of the container world.



In my previous blog post, I described how easy it is to run a load balancer using the tutum/haproxy image. However, real-world use cases require more control over how the load balancer behaves. I am going to cover some advanced topics in this article, but before starting, I would like to introduce the concept of a "service", which serves as the basic building block of our load balancer, tutum/haproxy.

Service vs Container

What is a Service?

A service is a group of containers that run from the same image with the same parameters. For example, if you run docker run -d tutum/hello-world three times, the three containers created belong to the same service.
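As a minimal sketch, such a service could be created by hand like this (the container names hello-1, hello-2, and hello-3 are just illustrative):

docker run -d --name hello-1 tutum/hello-world
docker run -d --name hello-2 tutum/hello-world
docker run -d --name hello-3 tutum/hello-world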

Why Service?

The concept of a service matches the function of a load balancer perfectly: a load balancer dispatches requests among the containers of the same service, which play the role of application servers in the Docker world. For instance, if we link service A (containing 3 containers) and service B (containing 2 containers) to a load balancer, the load balancer will balance traffic across 3 containers when service A is accessed and across 2 containers when service B is accessed.

How to Set Up Services?

  1. Just as with tutum/haproxy, services are the basic building block of Tutum, too. This means that if you run your application on Tutum, the service for your application has already been set up by Tutum natively.
  2. If you run tutum/haproxy outside of Tutum, say using Docker only, the link alias of your application container matters. Any link aliases that share the same prefix followed by "-" and an integer are considered to be from the same service. For instance, web-1 and web-2 are from the service web, but web1 and web2 are from two different services, web1 and web2 (see the sketch below).
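For example, a sketch of a two-container service named web linked to the load balancer, with <your_app> as a placeholder image:

docker run -d --name web-1 <your_app>
docker run -d --name web-2 <your_app>
docker run -d --link web-1:web-1 --link web-2:web-2 -p 80:80 tutum/haproxy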

Virtual Host and Virtual Path

Virtual Host

When you link multiple web application services to tutum/haproxy, you can specify the environment variable VIRTUAL_HOST in your web application services, so that requests reaching the load balancer with different host names are routed to the corresponding services. Here is an example:

docker run -d --name web1-1 -e VIRTUAL_HOST="www.example.com" <your_app_1>
docker run -d --name web1-2 -e VIRTUAL_HOST="www.example.com" <your_app_1>
docker run -d --name web2 -e VIRTUAL_HOST="app.example.com" <your_app_2>
docker run -d --link web1-1:web1-1 --link web1-2:web1-2 --link web2:web2 -p 80:80 tutum/haproxy

When you access http://www.example.com, tutum/haproxy takes you to your first application, balanced across two instances; when you access app.example.com, you are taken to your second web application.
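If you want to verify the routing before pointing real DNS records at the load balancer, one quick sketch (assuming the load balancer is published on port 80 of localhost) is to override the Host header with curl:

curl -H "Host: www.example.com" http://localhost/
curl -H "Host: app.example.com" http://localhost/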

Virtual Path

Apart from the domain name, you can also tell HAProxy to select services based on the path of the URL you are accessing. For example, if your application is set with -e VIRTUAL_HOST="*/static/, */static/*", all the URLs whose path starts with /static/ will go to that service. Similarly, if you specify -e VIRTUAL_HOST="*/*.php", all the requests to a URL ending with .php will be directed to your PHP application service.
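A sketch of what this could look like, with <static_app> and <php_app> as placeholder images:

docker run -d --name static -e VIRTUAL_HOST="*/static/, */static/*" <static_app>
docker run -d --name php -e VIRTUAL_HOST="*/*.php" <php_app>
docker run -d --link static:static --link php:php -p 80:80 tutum/haproxy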

For more information on the usage of VIRTUAL_HOST, please see GitHub: tutum/haproxy.

Affinity and Session Stickiness

There are three environment variables you can use to set affinity and session stickiness in your application services: BALANCE, APPSESSION, and COOKIE (see the sketch after this list):

  1. Set BALANCE=source. When it is set, HAProxy hashes the IP address of the visitor, which makes sure that visitors with the same IP address are always dispatched to the same application container. It works in both tcp mode and http mode.
  2. Set APPSESSION=<appsession>. HAProxy uses the application session to determine which application container a visitor should be directed to. It works only for http mode. A possible value of <appsession> could be JSESSIONID len 52 timeout 3h.
  3. Set COOKIE=<cookie>. Similar to appsession, it uses cookies to determine which application container a visitor should connect to. A possible value of <cookie> could be SRV insert indirect nocache.
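For example, a sketch of cookie-based stickiness for a two-container service (<your_app> is a placeholder; swap COOKIE for BALANCE=source or APPSESSION=... to try the other two approaches):

docker run -d --name web-1 -e COOKIE="SRV insert indirect nocache" <your_app>
docker run -d --name web-2 -e COOKIE="SRV insert indirect nocache" <your_app>
docker run -d --link web-1:web-1 --link web-2:web-2 -p 80:80 tutum/haproxy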

Check HAProxy:appsession and HAProxy:cookie for more information.

Multiple SSL Certs Termination

As mentioned in the previous article, you can activate SSL termination simply by adding SSL_CERT to tutum/haproxy. But in many cases, you may have multiple SSL certs bound to different domains. For example, you have cert A with common name prod.example.com and cert B with common name staging.example.com. What you expect is that when a user accesses prod.example.com, HAProxy terminates SSL with cert A, and SSL for staging.example.com is terminated with cert B. To achieve this, you only need to set the two environment variables SSL_CERT and VIRTUAL_HOST on your application services:

docker run -d --name prod -e SSL_CERT="<cert_A>" -e VIRTUAL_HOST="https://prod.example.com" <prod_app>
docker run -d --name staging -e SSL_CERT="<cert_B>" -e VIRTUAL_HOST="https://staging.example.com" <staging_app>
docker run -d --link prod:prod --link staging:staging -p 443:443 tutum/haproxy
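One detail worth noting: SSL_CERT is expected to contain the PEM content itself (private key followed by certificate), not a file path. A common sketch for passing a local pem file on the command line, assuming the combined key and cert live in cert_A.pem, is:

docker run -d --name prod -e SSL_CERT="$(awk 1 ORS='\\n' cert_A.pem)" -e VIRTUAL_HOST="https://prod.example.com" <prod_app>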

TCP Load Balancing

tutum/haproxy runs in http mode by default, but it can also load balance TCP connections when the TCP_PORTS environment variable is set in your application service. Below is an example:

docker run -d --name web -e VIRTUAL_HOST=www.example.com --expose 80 <web_app>
docker run -d --name git -e VIRTUAL_HOST="https://git.example.com" -e SSL_CERT="<cert>" -e TCP_PORTS=22 --expose 443 --expose 22 <git_app>
docker run -d --link web:web --link git:git -p 443:443 -p 22:22 -p 80:80 tutum/haproxy

In the example above, when you access http://www.example.com, you will reach <web_app>; when you access https://git.example.com, you will reach <git_app> with SSL termination. In addition, port 22 is load balanced as a plain TCP connection.

tutum/haproxy also supports SSL termination on TCP. To enable it, instead of setting TCP_PORTS=22, simply set TCP_PORTS=22/ssl together with an SSL_CERT.
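For example, a sketch adjusting the <git_app> service from the previous example:

docker run -d --name git -e VIRTUAL_HOST="https://git.example.com" -e SSL_CERT="<cert>" -e TCP_PORTS=22/ssl --expose 443 --expose 22 <git_app>
docker run -d --link git:git -p 443:443 -p 22:22 tutum/haproxy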

Summary

In the above sections, we introduced some basic examples of the advanced functions of tutum/haproxy. Using these functions in combination with one another can be very powerful.
