
Performance Testing Using JMeter Under Different Concurrency Levels


When we conduct performance tests, we study systems under varying conditions. What happens when you need to analyze performance under different concurrency levels?


When we conduct performance tests, we often have to study the behavior of systems under varying workload conditions (i.e., varying arrival rates, varying concurrency levels, varying inter-arrival times). In this article, we consider the case where we have to analyze the performance under different concurrency levels (note that concurrency represents the number of concurrent users accessing the system).

We used Apache JMeter as the load testing client. JMeter is a popular performance testing tool that allows us to test and analyze the performance of systems under a wide range of conditions. In this article, we will discuss some latency behaviors we noticed when using different methods to control the concurrency.

Setup

We configured JMeter (version 3.0) to load test a web service deployed on a Tomcat (version 8.5.6) server. Here, the web service is a simple echo service that sends the information it receives back to the client. JMeter and the Tomcat server were deployed on two different machines with the following specifications:


                     JMeter Machine                           Tomcat Machine
Processor            Intel Core i3-2350M CPU @ 2.30 GHz x4    Intel Core i7-3520M CPU @ 2.90 GHz x4
Cores                4                                        4
Clock Speed (MHz)    2300                                     3600
RAM                  8 GB                                     8 GB
OS Type              64-bit                                   64-bit

The concurrency is controlled in the following manner:

Thread Name    Concurrency    Time
T1             50             60
T2             25             60
T3             50             60
T4             25             60

Different Methods to Control the Concurrency

Let us now consider different methods that we can use to vary and control the concurrency in JMeter.

1. Having Multiple Thread Groups in a Single Test Plan

One possibility is to have a dedicated thread group for each concurrency level and run the thread groups one after the other (for example, by enabling the "Run Thread Groups consecutively" option in the Test Plan element).

[Figure: JMeter test plan containing a dedicated thread group for each concurrency level]

One of the drawbacks of this method is having to duplicate the HTTP request sampler in each thread group. On the other hand, if listeners are added to the thread groups, this method lets you obtain a separate JTL file per thread group (a JTL file is a CSV file containing performance-related data), so we can compute the performance numbers for each thread group separately.
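For illustration, the outline of such a test plan might look like the following (the Simple Data Writer listener is just one way to write per-group JTL files, and the file names are placeholders):

    Test Plan (Run Thread Groups consecutively)
        Thread Group T1 (50 threads)
            HTTP Request (echo service)
            Simple Data Writer -> t1.jtl
        Thread Group T2 (25 threads)
            HTTP Request (echo service)
            Simple Data Writer -> t2.jtl
        Thread Group T3 (50 threads)
            HTTP Request (echo service)
            Simple Data Writer -> t3.jtl
        Thread Group T4 (25 threads)
            HTTP Request (echo service)
            Simple Data Writer -> t4.jtl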

However, there are a few issues with this approach. When we analyzed the latencies of each thread group, we noticed that the initial latency values are relatively high compared to the latencies in the later part of the thread group. We also noted that these initial latencies exhibit high variability. This is illustrated in the following figure.

[Figure: latency over time, showing high and highly variable latencies at the start of each thread group]

Let's now discuss the reason for this behavior using T1 (thread group one) and T2 (thread group two). When T2 starts, it has to create a new set of threads from scratch. This takes some time (even if we set the ramp-up period to a very small value). In addition, T2 has to wait until all threads in T1 complete. Moreover, when analyzing JMeter's GC logs, we saw garbage collection occurring at the beginning of each thread group. Because of these behaviors, the latency profile of T2 becomes somewhat distorted at the beginning. We observed similar behavior in T3 and T4 as well.

2. Parameterizing the Thread Group Configuration


[Figure: JMeter test plan with a single, parameterized thread group]

In this method (as shown above), you have only one thread group in the test plan. The main advantage of this method is that you avoid duplicating the configuration of the HTTP request sampler. The same test plan is executed multiple times from the terminal, with the concurrency specified as a parameter. This method also allows you to specify the JTL file for each concurrency on the command line, so the details of each run can be collected separately. The JMeter configuration for this is shown below.

[Figure: thread group configuration reading the number of threads from a JMeter property]
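As a rough sketch (the property names are our own), the thread group fields can read JMeter properties via the __P function, which falls back to a default value when the property is not supplied:

    Number of Threads (users):  ${__P(concurrency,50)}
    Duration (seconds):         ${__P(duration,60)}   (if the scheduler is enabled)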

To run the test plan from the terminal:

[Figure: terminal command used to run the test plan]
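In general, a non-GUI run with the concurrency passed as a property looks like this (the file names are placeholders):

    jmeter -n -t echo_test.jmx -Jconcurrency=50 -l results_50.jtl

Here, -n runs JMeter in non-GUI mode, -t points to the test plan, -J sets a JMeter property (read by __P above), and -l names the JTL file into which the results are written.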

If you do not want to type the JMeter command for each concurrency level by hand, you can put the commands in a small script and run the script, as sketched below.
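A minimal sketch of such a script, assuming the test plan and file-naming convention used above:

    #!/bin/bash
    # Run the same test plan once per concurrency level in the schedule,
    # writing a separate JTL file for each run.
    run=0
    for c in 50 25 50 25; do
        run=$((run + 1))
        jmeter -n -t echo_test.jmx -Jconcurrency=$c -l results_run${run}_c${c}.jtl
    done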

The results obtained with this method are very similar to those of the first method. When we analyzed the results of the performance test, we noted that the initial latencies were relatively high and exhibited higher variability than the later ones. We discussed the reason for this behavior earlier in this article. The behavior of the latency is illustrated in the following figure.

[Figure: latency over time for the parameterized runs, again showing high initial latencies]

3. Using Ultimate Thread Group

The Ultimate Thread Group is a JMeter plugin that allows you to control the concurrency over time. It also allows you to control the initial delay, startup time, hold time, and shutdown time of each group of threads.

[Figure: Ultimate Thread Group configuration with its threads schedule]
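As an illustrative sketch (the values below are our own, chosen to approximate the T1-T4 schedule with 10-second transitions), the threads schedule could contain one row per concurrency window:

    Start Threads Count   Initial Delay, sec   Startup Time, sec   Hold Load For, sec   Shutdown Time, sec
    50                    0                    10                  60                   10
    25                    70                   10                  60                   10
    50                    140                  10                  60                   10
    25                    210                  10                  60                   10

Because the shutdown of each row overlaps the startup of the next, the total concurrency moves smoothly between 50 and 25 instead of dropping to zero between windows.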

The Ultimate Thread Group also provides a graphical representation (see below) showing how the concurrency varies over time.

[Figure: expected concurrency over time, as rendered by the Ultimate Thread Group]

If we use the Ultimate Thread Group, we can get a smooth transition of the latencies from one concurrency window to the next by controlling the ramp-up and ramp-down periods. This is illustrated in the following figure.

[Figure: latency over time with the Ultimate Thread Group, showing smooth transitions between concurrency levels]

Conclusion

In this article, we discussed different methods one may use to change the concurrency when running performance tests. We noted that the use of multiple thread groups (methods 1 and 2 described above) can lead to inconsistencies in the latency values, particularly at the transition points. If we use the Ultimate Thread Group, we can address this problem by controlling the ramp-up and ramp-down periods, which leads to a smooth transition of latencies between two concurrency levels.


Topics: performance, jmeter, latency, concurrency

