
Clustering a Java Web Application With Amazon Elastic Load Balancing

I will show you how to cluster a Java web application with the help of Elastic Load Balancing. Amazon recently introduced three great services:

  • Elastic Load Balancing: automatically distributes incoming application traffic across multiple Amazon EC2 instances.
  • Auto Scaling: automatically scales your Amazon EC2 capacity up or down according to conditions you define.
  • CloudWatch: a web service that provides monitoring for AWS cloud resources; it is what enables Auto Scaling.

Let's say you have a tremendous business idea, like selling binoculars on the web. You wouldn't believe it (I didn't either), but a colleague of mine told me a story about his friend making a living by selling telescopes on the web. So you develop a simple web application, test it on your laptop, and want to go public. At first you don't want to invest too much in hardware and licenses, so you just create an Amazon Web Services account and start up a single small instance running Jetty or Tomcat, for a reasonable 0.10 USD/hour fee. This is why AWS is selling like hot cakes: no upfront costs, pay as you go.

After a couple of weeks you realize that binoculars sell like hot cakes too, and your lonely instance can't serve all the requests. You have two choices: either move to a bigger instance (a large instance costs 0.40 USD/hour), or start up a second or third small instance. Let's say you go the second way (a large instance would be saturated soon, so you will need more instances anyway) and you face a couple of problems:

  • You need to distribute the load between the web servers.
  • During peak hours you need three instances, while during the night one server is sufficient to handle the requests; you don't want to babysit the application, starting up and shutting down instances according to the load.

To distribute the load between several web servers, you'll need some sort of load balancer. Hardware load balancers are out of scope as they are quite expensive, and anyway, you have decided to use Amazon's virtual environment. You could use round-robin DNS (setting multiple IP addresses for the same DNS name), but it gets tricky when you scale up or down: you have to refresh the DNS entries (A-records), and you have to choose a reasonable TTL (time to live) value, which influences how quickly your changes propagate across the net.

Most probably you would go with the software load balancing approach and end up choosing Apache with mod_proxy_balancer. Then you face another decision: if you co-locate Apache with your Java web server, you increase the load on the web server, and you still have the problem of maintaining a changing number of servers, or a changed IP (after a restart of Apache) in the DNS entry. If you use dedicated instances for Apache instead, you almost double the costs: you pay 2-3 x 10 cents hourly for the web server instances, plus 2 x 10 cents for the Apaches (if you want to eliminate a single point of failure).

This is where you can introduce Elastic Load Balancing, which costs just 2.5 cents/hour. (When I talk about costs, it's just a rough estimate, as I count only the box usage and not the network traffic; but let's say the network traffic is about the same for the different scenarios described above.)
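To make the comparison concrete, here is the back-of-the-envelope arithmetic for the two-web-server case, box usage only, using the prices above:

```shell
# rough hourly cost of the two setups, box usage only (prices from above)
awk 'BEGIN {
  apache = 2 * 0.10 + 2 * 0.10   # 2 web instances + 2 dedicated Apache instances
  elb    = 2 * 0.10 + 0.025      # 2 web instances + Elastic Load Balancing
  printf "apache: $%.3f/hour  elb: $%.3f/hour\n", apache, elb
}'
# prints: apache: $0.400/hour  elb: $0.225/hour
```

The ELB setup costs roughly half as much, and the gap only grows as you add more web instances, since the 2.5 cents is a flat fee rather than a per-backend cost.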

Required tools
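The elb-* commands used below come from Amazon's Elastic Load Balancing API Tools, and the ec2* commands from the EC2 API Tools; both are Java-based command-line packages. A typical setup might look like the following sketch; the install paths are placeholders, and you should check the variable names against the README shipped with your version of the tools:

```shell
# example environment for the EC2 and ELB command-line tools (paths are placeholders)
export JAVA_HOME=/usr/lib/jvm/java-6-sun
export EC2_HOME=/opt/ec2-api-tools
export AWS_ELB_HOME=/opt/elb-api-tools
export EC2_PRIVATE_KEY=~/.ec2/pk-XXXX.pem   # your X.509 private key
export EC2_CERT=~/.ec2/cert-XXXX.pem        # your X.509 certificate
export PATH=$PATH:$EC2_HOME/bin:$AWS_ELB_HOME/bin
```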

Creating an Elastic Load Balancer

To create an elastic load balancer, you issue the following command:

# elb-create-lb binoculars-elb --availability-zones us-east-1a --listener "protocol=http, lb-port=80, instance-port=8080"
DNS-NAME  binoculars-elb-825878936.us-east-1.elb.amazonaws.com

The meaning of these parameters is:

  • the name of the load balancer: "binoculars-elb"
  • availability zone: "us-east-1a". It could be a list, in which case traffic would be distributed equally across the zones.
  • listeners: "protocol=http, lb-port=80, instance-port=8080"
      • protocol: "http". Amazon supports either tcp (the default) or http.
      • lb-port: 80. The load balancer will listen on this port.
      • instance-port: 8080. The instances running Jetty are listening on this port.

The response of the command displays the DNS name of the newly created load balancer.

Next we tell the load balancer how to check whether the instances registered with it (done in the next step) are ready to serve. If the load balancer doesn't get a valid response within the defined timeout, it stops routing requests to the unhealthy instance.

# elb-configure-healthcheck binoculars-elb --interval 30 --unhealthy-threshold 2 --healthy-threshold 2 --timeout 3 --target "http:8080/d.txt" --headers
HEALTH-CHECK  TARGET           INTERVAL  TIMEOUT  HEALTHY-THRESHOLD  UNHEALTHY-THRESHOLD
HEALTH-CHECK  http:8080/d.txt  30        3        2                  2

The meaning of the parameters is:

  • the name of the load balancer: "binoculars-elb"
  • interval: 30. The time spent (in seconds) between health checks of an individual instance. Should be greater than timeout.
  • unhealthy-threshold: 2. The number of consecutive health probe failures that move the instance to the unhealthy state.
  • healthy-threshold: 2. The number of consecutive health probe successes required before moving the instance to the Healthy state.
  • timeout: 3. Amount of time (in seconds) during which no response means a failed health probe.
  • target: "http:8080/d.txt", i.e. HTTP:port/PathToPing. Any answer other than "200 OK" within the timeout period is considered unhealthy.
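One consequence of these numbers is worth spelling out: with probes every 30 seconds and two consecutive failures required, a dead instance can keep receiving traffic for up to about a minute before it is marked unhealthy. A quick sketch of the worst case:

```shell
# worst-case failover detection time with the health check configured above
INTERVAL=30            # seconds between probes
UNHEALTHY_THRESHOLD=2  # consecutive failures needed
echo "up to ~$(( INTERVAL * UNHEALTHY_THRESHOLD )) seconds until OutOfService"
# prints: up to ~60 seconds until OutOfService
```

Tightening the interval detects failures faster, at the price of more probe traffic against each instance.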

The default sample web app deployed to Jetty's root context contains a small text file at the path "/d.txt".

Starting up EC2 instances

Now we start up two EC2 instances based on a prebuilt AMI containing Java and Jetty; the user-data file passed to them is a simple bash script that starts Jetty on port 8080:
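The script itself is not reproduced here; a hypothetical sketch of what it might contain (the Jetty path, AMI layout, and file names are assumptions, only the instance-metadata URL is standard EC2):

```shell
#!/bin/bash
# Hypothetical reconstruction of start_jetty.sh; paths depend on the AMI layout.
JETTY_HOME=/opt/jetty

# publish this instance's id at /node.jsp (fetched from the EC2 metadata service)
INSTANCE_ID=$(wget -qO - http://169.254.169.254/latest/meta-data/instance-id)
echo "$INSTANCE_ID" > "$JETTY_HOME/webapps/root/node.jsp"

# the health check target served from the root context
echo "ok" > "$JETTY_HOME/webapps/root/d.txt"

# start Jetty listening on port 8080
cd "$JETTY_HOME" && java -Djetty.port=8080 -jar start.jar &
```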

# ec2run ami-a98c6dc0 -m -z us-east-1a -k [YOUR-KEY] -f start_jetty.sh -n 2
RESERVATION  r-fb347392  186376224412  default
INSTANCE  i-f1635698  ami-a98c6dc0  pending  m1.small  2009-07-06T21:16:07+0000  us-east-1a  monitoring-pending
INSTANCE  i-f363569a  ami-a98c6dc0  pending  m1.small  2009-07-06T21:16:07+0000  us-east-1a  monitoring-pending

First you wait a couple of minutes until the instances are up and running:

# ec2-describe-instances
RESERVATION  r-fb347392  186376224412  default
INSTANCE  i-f1635698  ami-a98c6dc0  ec2-75-101-175-210.compute-1.amazonaws.com  running  epam  0  m1.small  2009-07-06T21:16:07+0000  us-east-1a
INSTANCE  i-f363569a  ami-a98c6dc0  ec2-75-101-175-248.compute-1.amazonaws.com  running  epam  1  m1.small  2009-07-06T21:16:07+0000  us-east-1a

You can check the Jettys by typing the public DNS names with port 8080 into your browser:

  • http://ec2-75-101-175-210.compute-1.amazonaws.com:8080/
  • http://ec2-75-101-175-248.compute-1.amazonaws.com:8080/

Once you get the default Jetty welcome page, you can register these two instances with your load balancer, and after a couple of minutes check the health of the balancer:

# elb-register-instances-with-lb binoculars-elb --instances i-f1635698,i-f363569a
INSTANCE-ID  i-f1635698
INSTANCE-ID  i-f363569a

# elb-describe-instance-health binoculars-elb
INSTANCE-ID  i-f1635698  InService
INSTANCE-ID  i-f363569a  InService

The start_jetty.sh script, used as the "user-data-file" at instance start, also generated a dead simple node.jsp containing nothing but the instance id of the actual EC2 instance. You can check how the load balancer distributes consecutive requests:

# for i in {1..5}; do wget -qO - "http://binoculars-elb-825878936.us-east-1.elb.amazonaws.com/node.jsp"; done
i-f1635698
i-f363569a
i-f1635698
i-f363569a
i-f1635698

To simulate a failing web server, shut down the first instance:

# ec2-terminate-instances i-f1635698
INSTANCE  i-f1635698  terminated  terminated

Now the load balancer sends all the requests to the second instance:

# for i in {1..5}; do wget -qO - "http://binoculars-elb-825878936.us-east-1.elb.amazonaws.com/node.jsp"; done
i-f363569a
i-f363569a
i-f363569a
i-f363569a
i-f363569a

You can also use the elb-describe-instance-health command to see whether the load balancer has noticed the service outage and changed the state:

# elb-describe-instance-health binoculars-elb
INSTANCE-ID  i-f1635698  OutOfService
INSTANCE-ID  i-f363569a  InService

HTTP Session replication

There is one common issue you face when you cluster your web application: HTTP session failover. With hardware load balancers (and Apache is also capable of it) you can use sticky sessions, which means every request coming from the same browser (identified by the session id, sent either as a JSESSIONID cookie or encoded into the URL) is served by the same web server.

It's a fair solution in lots of cases, but what if a customer is in the middle of a huge binocular order and the web server dies? You don't want to tell the customer "we are sorry ... please start your shopping again", as you will probably lose that customer.

There are a couple of solutions in the Java field, but since you are trying to cut down on costs, you strip the commercial ones from the list: Tangosol, Terracotta, WebLogic, GigaSpaces, ... you name it. In the open source field you get:

  • Apache Tribes: started as part of Tomcat clustering, but was refactored into its own namespace, so it's available independently of Tomcat.
  • Wadi: developed by codehaus.org, the makers of Groovy and other goodies. It can use either Tribes or JGroups as a transport layer.

Most session replication solutions rely on IP multicasting for the dynamic discovery of the web servers sharing the session data. Unfortunately, IP multicasting is not supported in the Amazon EC2 environment, but both Tribes and JGroups are able to work without it.

JGroups can use TCP gossiping (sending unicast messages for the initial member discovery).

Unfortunately the Wadi-JGroups combination is not maintained, so if you want to use Wadi, you have to use Tribes as the transport layer. Tribes is also able to work without IP multicasting, by defining StaticMembers.

Jetty clustering with Wadi

So at the end of the day it doesn't matter whether you choose Tomcat or Jetty: you will configure Tribes. I'll show you the Jetty way of configuration. When you follow the description, you will get a wadi.xml file in JETTY_HOME/contexts. You just edit it and include an "addStaticMember" call in the middle of the cluster definition.

Please note that every instance should only refer to the other nodes; no reference to localhost is allowed.

<New id="wadiCluster" class="org.mortbay.jetty.servlet.wadi.WadiCluster">
  <Arg>CLUSTER</Arg>
  <Arg><SystemProperty name="node.name" default="red"/></Arg>
  <Arg>http://localhost:<SystemProperty name="jetty.port" default="8080"/>/test</Arg>
  <Set name="Port">4000</Set>
  <!-- STATIC MEMBERS BEGIN -->
  <Call name="addStaticMember">
    <Arg>
      <New class="org.apache.catalina.tribes.membership.StaticMember">
        <Set name="Host">HOST_NAME_OR_IP</Set>
        <Set name="Port">4000</Set>
        <Set name="UniqueId">{0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0}</Set>
      </New>
    </Arg>
  </Call>
  <!-- STATIC MEMBERS END -->
  <Call name="start"/>
</New>

So on the first instance (i-f1635698) you would put ec2-75-101-175-248.compute-1.amazonaws.com in place of HOST_NAME_OR_IP, and on the second box (i-f363569a) you would put a reference to the first instance: ec2-75-101-175-210.compute-1.amazonaws.com.
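Since each node's wadi.xml must name its peer, you could substitute the placeholder at startup rather than hand-editing each box. A minimal sketch; the stand-in file below mimics the relevant line of wadi.xml, while in practice you would target JETTY_HOME/contexts/wadi.xml and derive PEER from the other instance's DNS name:

```shell
# fill in the peer's address for this node (demonstrated on a stand-in file)
PEER=ec2-75-101-175-248.compute-1.amazonaws.com
printf '<Set name="Host">HOST_NAME_OR_IP</Set>\n' > /tmp/wadi-host.xml
sed -i "s/HOST_NAME_OR_IP/$PEER/" /tmp/wadi-host.xml
cat /tmp/wadi-host.xml
# prints: <Set name="Host">ec2-75-101-175-248.compute-1.amazonaws.com</Set>
```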

If you restart the two Jetty instances with these changes, they will find each other; check jetty.log:

=============================
New Partition Balancing
Partition Balancing
    Size [24]
    Partition[0] owned by [TribesPeer [i-f1635698; tcp://]]; version [1]
    Partition[1] owned by [TribesPeer [i-f1635698; tcp://]]; version [1]
    Partition[2] owned by [TribesPeer [i-f1635698; tcp://]]; version [1]
    Partition[3] owned by [TribesPeer [i-f1635698; tcp://]]; version [1]
    Partition[4] owned by [TribesPeer [i-f1635698; tcp://]]; version [1]
    Partition[5] owned by [TribesPeer [i-f1635698; tcp://]]; version [1]
    Partition[6] owned by [TribesPeer [i-f1635698; tcp://]]; version [1]
    Partition[7] owned by [TribesPeer [i-f1635698; tcp://]]; version [1]
    Partition[8] owned by [TribesPeer [i-f1635698; tcp://]]; version [1]
    Partition[9] owned by [TribesPeer [i-f1635698; tcp://]]; version [1]
    Partition[10] owned by [TribesPeer [i-f1635698; tcp://]]; version [1]
    Partition[11] owned by [TribesPeer [i-f1635698; tcp://]]; version [1]
    Partition[12] owned by [TribesPeer [i-f363569a; tcp://]]; version [1]
    Partition[13] owned by [TribesPeer [i-f363569a; tcp://]]; version [1]
    Partition[14] owned by [TribesPeer [i-f363569a; tcp://]]; version [1]
    Partition[15] owned by [TribesPeer [i-f363569a; tcp://]]; version [1]
    Partition[16] owned by [TribesPeer [i-f363569a; tcp://]]; version [1]
    Partition[17] owned by [TribesPeer [i-f363569a; tcp://]]; version [1]
    Partition[18] owned by [TribesPeer [i-f363569a; tcp://]]; version [1]
    Partition[19] owned by [TribesPeer [i-f363569a; tcp://]]; version [1]
    Partition[20] owned by [TribesPeer [i-f363569a; tcp://]]; version [1]
    Partition[21] owned by [TribesPeer [i-f363569a; tcp://]]; version [1]
    Partition[22] owned by [TribesPeer [i-f363569a; tcp://]]; version [1]
    Partition[23] owned by [TribesPeer [i-f363569a; tcp://]]; version [1]
=============================

So when you store the shopping cart in the session, even if the elastic load balancer sends consecutive requests to a different Jetty, you will see the same items in the cart, and you won't even notice if one of the Jettys dies.


I know that the sample scenario is rather simplified:

  • it only uses a servlet container
  • no database was used (you could define another elastic load balancer with TCP port 3306 in the case of MySQL)
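That TCP variant would look much like the HTTP one; a hypothetical example (the load balancer name is made up, and the command needs a configured AWS account to actually run):

```shell
# hypothetical: a TCP listener in front of MySQL instances
elb-create-lb binoculars-db-elb --availability-zones us-east-1a \
    --listener "protocol=tcp, lb-port=3306, instance-port=3306"
```

With the tcp protocol the balancer simply forwards connections without inspecting them, which is exactly what a MySQL client/server conversation needs.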

But it points out how many of your headaches can be solved by Amazon's Elastic Load Balancing for a fair fee.

I will cover Auto Scaling and CloudWatch in the next part of this tutorial.
