Gatling Tool Review for Performance Tests (Written in Scala)
Federico Toledo reviews Gatling, an open-source performance testing tool created in 2012 that has been gaining popularity and attention within the community.
Have you heard of Gatling for performance tests? It’s a relatively new open-source tool (created in 2012) that has recently been gaining popularity: 250,000 downloads in four years, 60,000 of those in the last three months, which shows growing attention from the community. So that you don’t have to spend too much of your day learning about this tool, I wrote this review to sum up some of its features and benefits. Hopefully, within just a few minutes, this Gatling tool review will give you a good idea of what you can do with it. As there are hardly any articles on the topic in Spanish, this is a translation of my original post (written in español!).
Key features of Gatling:
Tool for performance testing
Free and open-source (developed in Java / Scala)
The scripting language is Scala, with its own DSL
It works on any operating system and with any browser
It supports HTTP/S, JMS, and JDBC protocols
Colorful reports in HTML
It doesn’t distribute the load across multiple machines on its own, but it can run its tests from different test clouds. It can scale using flood.io or Taurus with BlazeMeter (Taurus also provides many facilities for continuous integration)
It’s a great tool for when:
You need to simulate fewer than 600 concurrent users. This is just a reference number that depends on how much processing your simulation script does; if you need to generate more load, you will likely have to pay for a tool in the cloud. That said, a colleague told me he managed to run a simple script with 4,000 concurrent users from a single machine.
You want to learn about performance tests (it’s very simple and the code is very legible)
You are interested in maintaining the test code (the Scala language and Gatling’s DSL are strongly focused on test maintainability, which is ideal if you are focusing on continuous integration).
This tool allows you to run a load simulation of concurrent users against a system over the HTTP/S, JMS, or JDBC protocols. The most typical scenario for this tool is simulating the users of a web system in order to find bottlenecks and optimize it. For comparison, some very popular alternatives on the market are JMeter and HP LoadRunner (to name one open-source and one commercial tool, both widely used).
Gatling is a free and open-source tool. It runs on Java, so it works on all operating systems. It requires JDK 8 (the runtime alone is not enough; you need the development kit).
The tool has two executables: one to record the tests and another to execute them. The tests are recorded in Scala, which is a very clean language, easy to read even when seeing it for the first time. After each execution, you get a colorful, detailed report.
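To give you an idea of how legible the code is, here is a minimal simulation sketch. The URL, scenario name, and timings are made up for illustration, and the exact DSL syntax varies slightly between Gatling versions (this sketch assumes Gatling 3):

```scala
import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

// A minimal Gatling simulation: ramp up 10 users over 30 seconds,
// each requesting the home page with a pause to simulate think time.
class BasicSimulation extends Simulation {

  val httpProtocol = http
    .baseUrl("http://example.com") // hypothetical system under test
    .acceptHeader("text/html")

  val scn = scenario("Browse home page")
    .exec(http("home").get("/"))
    .pause(2.seconds) // think time between requests

  setUp(
    scn.inject(rampUsers(10).during(30.seconds))
  ).protocols(httpProtocol)
}
```

Even without knowing Scala, it is fairly easy to follow what the scenario does, which is a big part of why the scripts stay maintainable.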
Fundamental aspects for the correct simulation of users:
In our view, the scripts must cover the following fundamental aspects for correctly simulating users:
Protocol handling (from invocations and responses to the management of headers, cookies, etc.)
String handling, with facilities for parsing, regular expressions, and locating elements via XPath, JSONPath, CSS selectors, and more
Validations, since we need to check that the responses are correct
Parametrization from different data sources (a very strong point of this tool, since it offers various easy-to-use alternatives)
Handling of dynamic variables, also known as variable correlation
Handling of different variable scopes (thread level, test level, etc.)
Modularization (facilitating the maintainability and legibility of the scripts)
Handling of waits (to simulate think times)
Metrics management (individual and grouped response times, transactions per second, number of concurrent users, errors, amount of transferred data, etc.)
Management of errors and exceptions
Flow control (loops, if-then-else)
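Several of the aspects above can be seen together in one short script. This is only a sketch: the endpoints, file name, and parameter names are hypothetical, and the check syntax assumes Gatling 3's Scala DSL:

```scala
import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

class FundamentalsSimulation extends Simulation {

  // Parametrization: feed usernames from a CSV file (hypothetical
  // users.csv with a "username" column), cycling through the records.
  val users = csv("users.csv").circular

  val scn = scenario("Login and browse")
    .feed(users) // puts ${username} into the virtual user's session
    .exec(
      http("login")
        .post("/login")                        // hypothetical endpoint
        .formParam("user", "${username}")
        .check(status.is(200))                 // validation
        .check(css("input[name=token]", "value")
          .saveAs("token"))                    // dynamic variable (correlation)
    )
    .pause(1.second, 3.seconds)                // randomized think time
    .repeat(5) {                               // flow control: a loop
      exec(
        http("items")
          .get("/items")
          .queryParam("token", "${token}")
          .check(jsonPath("$.items").exists)   // JSONPath locator
      )
    }
}
```

Note how the feeder, the checks, and the saved session variable each map directly to one of the aspects listed above, which keeps the scripts short and readable.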
What else do you consider when evaluating the scripting language of a load or stress simulation tool?
Regarding the reports, they are very colorful and complete. Here I’d like to highlight that the reports:
Are in HTML, with easy navigation, an index, and good organization
Present the information graphically, well grouped and clearly related
Include a graph of the number of virtual users during the test
Let you zoom in on the graphs to analyze certain areas in more detail
Graph the requests per second and responses per second, including a comparison against the number of active users
Show each request in detail, so you can refine your analysis
Separate the response times of the requests that were “ok” from the ones that failed
Report response times as percentiles
Include a log of the errors found
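Beyond reading the HTML report, several of these metrics can also be checked automatically at the end of a run through Gatling's assertions API, which is handy in continuous integration. A sketch, assuming Gatling 3 syntax (assertion names have varied slightly across versions):

```scala
setUp(scn.inject(atOnceUsers(100)))
  .protocols(httpProtocol)
  .assertions(
    // fail the run if the high percentile of response times
    // (percentile3 is configurable, 95th by default) exceeds 1 second
    global.responseTime.percentile3.lt(1000),
    // or if fewer than 95% of the requests succeed
    global.successfulRequests.percent.gt(95)
  )
```

When an assertion fails, Gatling exits with a non-zero status, so a CI job can fail the build directly on the same metrics the report shows.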
What other things do you deem important when evaluating the reports of a stress or load simulation tool?
In short, we at Abstracta are big fans of Gatling. We have used it several times in the past and we are now receiving several requests from clients to use it. In the future, I am sure that it will continue to be an important item in our continuous integration toolshed.
Have you used Gatling? How does it measure up for you?
Published at DZone with permission of Federico Toledo, DZone MVB. See the original article here.