Continuous integration (CI), test automation, and “shifting left” are becoming the standard for DevOps, developers, and QA engineers. Yet despite the importance of performance, and despite the fact that systems are complex and bottlenecks can be hard to identify and fix quickly, load testing is still not an integral part of most CI workflows.
Jenkins makes it easy to integrate load testing into the CI workflow. By using Jenkins to trigger jobs on every commit, users take advantage of automation and speed up the process.
Advantages of Integrating JMeter into Jenkins
JMeter is one of the most popular open-source load testing systems. By integrating JMeter into Jenkins, users can enjoy:
- Unattended test execution right after the software is built and deployed.
- Automatic build failures in case of performance degradation.
- Easy access to test reports showing application metric trends: all tests are in one place and available to anybody with the right permissions.
- Automation of routine work such as test configuration, execution, and baseline results analysis, leaving engineers free for more important, complex, and interesting tasks.
How to Integrate JMeter into Jenkins
- Configure JMeter to store results as XML (recommended, because XML reports are easier to work with) or CSV.
- Specify the command that runs your test in the Execute shell (or Execute Windows batch command) build step.
- Check the build’s Console Output to verify that the execution succeeded.
- Find your files in the project’s workspace.
- Specify Build Parameters.
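As a sketch, the shell build step from the list above might look like the following, assuming JMeter is on the agent’s PATH and the test plan is named `test.jmx` (both names are placeholders for your environment):

```shell
# Run JMeter in non-GUI mode: -n = non-GUI, -t = test plan, -l = results file.
# The -J property forces XML result output instead of the default CSV.
jmeter -n \
  -t test.jmx \
  -l results.jtl \
  -Jjmeter.save.saveservice.output_format=xml
```

After the build runs, `results.jtl` appears in the project’s workspace, where the Performance plugin (described below) can pick it up.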
How to Use the Performance Plugin
To view JMeter reports on Jenkins, you can use the Performance plugin.
- Install the Performance plugin.
- Configure the plugin.
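The plugin can be installed through the Jenkins UI (Manage Jenkins → Plugins), or, if you prefer to script the installation, through the Jenkins CLI. The sketch below assumes a Jenkins instance at `http://localhost:8080/` (a placeholder URL):

```shell
# Install the Performance plugin via the Jenkins CLI, then restart Jenkins
# so the plugin is loaded (the URL is a placeholder for your instance).
java -jar jenkins-cli.jar -s http://localhost:8080/ install-plugin performance -restart
```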
The Performance plugin can be added as a “Post-build Action”. When the JMeter test is finished, the plugin will:
- Collect the data.
- Fail the build, or mark it unstable, if an error threshold is exceeded.
- Build or update the performance trend chart for the project.
Configuration options explained:
- Performance report: For JMeter, specify the report file(s) in XML format.
- Select mode: The choices are Relative Threshold and Error Threshold. Relative Threshold compares the results with a previous test, and if the difference exceeds the defined value, the build is marked as failed or unstable. Error Threshold marks the build as unstable or failed if the number of errors exceeds the specified value.
- Build result: If the JMeter test doesn’t generate the output JTL file(s), the build is marked as failed.
- Use error thresholds on single build: Define error thresholds for the current build.
- Average response time threshold: Set the maximum acceptable value of the Average Response Time metric.
- Use relative thresholds for build comparison: Set the allowed percentage difference in errors. The “source” build used as a baseline can be either the previous build or a “known good” build.
- Performance per Test Case Mode: Enable this option if you need a separate graph for each test case on the Performance Trend chart.
- Show Throughput Chart: Choose whether to display the “Throughput” trend chart on the project dashboard.
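In a Pipeline job, the same post-build behavior is available through the plugin’s `perfReport` step. The sketch below is a minimal example, assuming a test plan named `test.jmx` and JMeter on the agent’s PATH; the threshold values are illustrative, and parameter names may vary between plugin versions:

```groovy
pipeline {
    agent any
    stages {
        stage('Load test') {
            steps {
                // Run JMeter in non-GUI mode and save results as XML
                sh 'jmeter -n -t test.jmx -l results.jtl -Jjmeter.save.saveservice.output_format=xml'
            }
        }
    }
    post {
        always {
            // Collect the results, apply error thresholds, and update
            // the project's performance trend charts.
            perfReport sourceDataFiles: 'results.jtl',
                       errorUnstableThreshold: 3,     // >3% errors -> unstable
                       errorFailedThreshold: 5,       // >5% errors -> failed
                       modePerformancePerTestCase: true,
                       modeThroughput: true
        }
    }
}
```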
Congratulations on adding performance tests to your continuous integration process!