
Testing the NGINX Load Balancing Efficiency with ApacheBench

By Tetiana Markova · May 01, 2015

Providing numerous prominent features and possibilities, Jelastic allows you to host applications of any complexity and give your customers exactly what they need. However, when your project becomes popular and heavily visited, you face another problem: the need to increase your hardware capacity so it can handle and rapidly serve all incoming user requests.

Adding more resources will temporarily improve the situation and save your server from failure, but it won't solve the root issue. That's where a clustering solution with built-in automatic load balancing comes in.

Adjusting an application cluster is quite easy with Jelastic: just add a few more application server instances to your environment via the topology wizard. In addition, you'll automatically get an NGINX balancer server enabled in front of your project. It is responsible for distributing the load evenly among the stated number of app server nodes by means of HTTP load balancing. This way, your application's performance grows significantly, increasing the number of requests that can be served at one time. As a nice bonus, you decrease the risk of app inaccessibility, since if one server fails, all the rest continue working.
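For illustration only, here is a minimal sketch of what an NGINX HTTP load-balancing configuration generally looks like: the upstream block lists the app server nodes (the addresses below are hypothetical), and requests are distributed among them round-robin by default. Jelastic generates the real configuration for you, so this is just to show the mechanism.

upstream app_servers {
    server 10.10.10.1;    # first application server node (hypothetical address)
    server 10.10.10.2;    # second application server node (hypothetical address)
}

server {
    listen 80;
    location / {
        proxy_pass http://app_servers;    # each request is forwarded to one of the upstream nodes
    }
}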

To prove how efficient this scheme is, we'll show you how to perform load balancing testing with the help of the ApacheBench (ab) tool. It provides a number of options for testing a server's ability to cope with an increasing and changing load. Though ab was designed for testing Apache installations, it can be used to benchmark any HTTP server.

so, let’s get started and test it in real time.

Create an Environment and Deploy the Application

1. Log into the Jelastic platform and click the Create Environment button in the upper left corner of the dashboard.

[Image: Create Environment button]

2. The Environment Topology dialog window will appear instantly. Here you can choose the desired programming language, application/web server, and database.

As we are going to test the loading of the Apache PHP server, select it and specify the resource usage limits by means of the cloudlet sliders. Then attach a public IP address to this server and type the name of the new environment (e.g. balancer). Click Create.

[Image: Environment Topology wizard]

3. In just a minute your environment will appear in the dashboard.

[Image: environment created in the dashboard]

4. Once the environment is successfully created, you can deploy your application to it. Here we'll use the default HelloWorld.zip package, so you just need to deploy it to the desired environment with the corresponding button and confirm the deployment in the opened frame.

[Image: deployment to the environment]

Control Point Testing

To analyze the results, you'll need something to compare them with, so let's make a control point test using the created environment with just a single application server node.

As mentioned above, we'll use the ApacheBench (ab) tool for this purpose. It generates a single-threaded load by sending a stated number of concurrent requests to a server.

Follow the steps below.

1. ApacheBench is part of the standard Apache source distribution, so if you don't have it yet, run the following command in your terminal (or skip this step if you do).

apt-get install apache2-utils
Detailed information about all the ab commands used below can be found by following this link.
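If you are not sure whether the tool is already present, you can simply print its version; and on RHEL-based distributions the utility usually ships in the httpd-tools package instead (a side note, not part of the original guide):

ab -V
yum install httpd-tools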

2. Enter the next line in the terminal:

ab -n 500 -c 10 -g res1.tsv {url_to_your_env}

Substitute the {url_to_your_env} part with a link to your environment (e.g. http://balancer.jelastic.com/ in our case). To get it, click the Open in Browser button next to your environment and copy the corresponding URL from the browser's address bar.
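With the example environment name above, the complete command would look like this (replace the URL with your own):

ab -n 500 -c 10 -g res1.tsv http://balancer.jelastic.com/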

[Image: Open in Browser button]

The specified command will send a total of 500 requests to the stated environment, divided into packs of 10 concurrent requests at a time. All the results will be stored in the res1.tsv file inside your home folder (or enter the full path to the desired directory if you would like to change the file location).

You can also specify custom parameters for the abovementioned command if you want.


This test may take some time depending on the parameters you've set, so be patient.

3. The created file with results should look like the image below:

[Image: res1.tsv results file]
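For reference, the -g (gnuplot) output of ab is a tab-separated file whose header row contains the following columns; ttime, the total response time used later for plotting, is the fifth one (the data rows themselves will of course differ from run to run):

starttime	seconds	ctime	dtime	ttime	wait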

Change the Environment Configuration

once you’ve got the initial information regarding application performance, it’s time to extend your environment’s topology and adjust it for the further testing.

1. Return to the Jelastic dashboard and click Change Environment Topology for your balancer environment.

[Image: Change Environment Topology button]

2. Within the opened Environment Topology frame, add more application servers (e.g. one more Apache instance) using the + button in the Horizontal Scaling wizard section.

The NGINX balancer node will be automatically added to your environment as the entry point of your application. Enable a public IP for your load balancer and state the resource limits. Click Apply to proceed.

[Image: Environment Topology frame]

3. When all of the required changes are successfully applied, you should disable sticky sessions for the balancer server. Otherwise, all requests from one IP address will be redirected to the same application server instance.

To do that, click the Config button next to the NGINX node.

[Image: Config button]

4. Navigate to the conf > nginx-jelastic.conf file. It is not editable, so copy all of its content and paste it into the nginx.conf file (located in the same folder) in place of the include /etc/nginx/nginx-jelastic.conf; line (circled in the following image).

[Image: nginx-jelastic.conf file contents]

5. Then find the two mentions of the sticky path parameter in the code (in the default upstream and upstreams list sections) and comment them out as shown below.

[Image: sticky path parameters commented out in nginx.conf]

note: don’t miss the closing curly braces after those sticky path strings, they should be uncommented.

6. Save the changes and restart the NGINX server.

[Image: restarting the NGINX node]

Testing the Balancer and Comparing Results

now let’s proceed directly to load balancing testing.

1. Switch back to your terminal and run the ab test again with the same parameters (except the results file – specify another name for it, e.g. res2.tsv).

ab -n 500 -c 10 -g res2.tsv {url_to_your_env}


2. To clarify the obtained results, we'll use the freely distributed gnuplot graphing utility. Install it (if you haven't done so before) and enter its shell with the gnuplot command.
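On a Debian or Ubuntu system the installation typically looks like this (package names may differ on other distributions):

apt-get install gnuplot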

gnuplot

3. After that, you need to set up the parameters for the future graph:

set size 1, 1
set title "benchmark testing"
set key left top
set grid y
set xlabel 'requests'
set ylabel "response time (ms)"
set datafile separator "\t"

[Image: gnuplot settings in the shell]

4. now you’re ready to compose the graph:

plot "/home/res1.tsv" every ::2 using 5 title 'single server' with lines, "/home/res2.tsv" every ::2 using 5 title 'two servers with lb' with lines

This plot command will build two graphs (separated by a comma in the command body). Let's consider the parameters used in more detail (a combined script sketch follows the list):

  • "/home/resN.tsv" represents the paths to the files with your testing results
  • the every ::2 operator tells gnuplot to start building from the second row (i.e. the first row with headings will be skipped)
  • using 5 means that the fifth column, ttime (the total response time), will be used for building the graph
  • the title 'N' option sets a particular graph name for easier separation of the test results
  • with lines makes the graph a solid line
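If you prefer to run everything non-interactively and save the comparison as an image, the whole session can be collected into a single script (a hedged sketch: the output terminal, size, and file name are arbitrary choices, and the paths assume the results files live in /home):

set terminal png size 800,600
set output '/home/benchmark.png'
set size 1, 1
set title "benchmark testing"
set key left top
set grid y
set xlabel 'requests'
set ylabel "response time (ms)"
set datafile separator "\t"
plot "/home/res1.tsv" every ::2 using 5 title 'single server' with lines, "/home/res2.tsv" every ::2 using 5 title 'two servers with lb' with lines

Save it as, say, benchmark.gp and run gnuplot benchmark.gp to produce the image without entering the interactive shell.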

[Image: plot command in the gnuplot shell]

you’ll get an automatically created and opened image similar to the following:

[Image: benchmark graph comparing a single server and two servers with a load balancer]

Due to the specified options, the red graph shows the performance of a single Apache server without the balancer (the control point testing results), and the green one shows two servers with the NGINX load balancer (the second testing phase results).

Note that the received testing results (the response time for each sent request) are shown in ascending order, i.e. not chronologically.

As you can see, under a low load both configurations perform almost the same, but as the number of requests increases, the response time for the environment with a single app server instance grows significantly, so fewer requests can be served simultaneously. So, if you are expecting a high load on your application server, increasing the number of its instances together with a balancing server is the best way to keep your customers happy.

Register now and try it out for yourself. Enjoy all of the advantages of the Jelastic cloud!


Published at DZone with permission of Tetiana Markova, DZone MVB. See the original article here.

Opinions expressed by DZone contributors are their own.
