Run BlazeMeter Performance Tests in an XL Release CD Pipeline
BlazeMeter can help you automate your performance testing and shift performance tests left in your CI/CD pipeline.
As a DFL DevOps Director at Nedbank, one of my main focuses is integrating and automating the toolchain, from Continuous Integration to Continuous Deployment. The goal is to eliminate manual tasks and promote a hands-off cycle, in order to achieve economies of scale, accelerate time to market, and improve quality. This blog post explains how we added BlazeMeter performance testing to our automated XL Release CD cycle to achieve continuous testing, and how you can do it yourself.
At Nedbank, the CD tools that we use are by XebiaLabs, a DevOps toolchain vendor. We use XL Release for orchestrating the release pipeline, from development to production. We also use XL Deploy, an automation tool for deploying artifacts to systems like host containers, virtual machines, and the cloud. CI is handled through a different toolchain, which includes Jenkins, Bitbucket, and Jira. The CI pipeline is tightly integrated with our CD pipeline.
We started ramping up our automation process by identifying the manual steps in our pipeline, with the goal of automating them. At that point, we looked at BlazeMeter to shift our performance testing left, so that tests run earlier and more often.
However, even though XL Release did have plugins for orchestrating performance testing as part of the pipeline, there wasn't a community-developed plugin connecting BlazeMeter and XL Release. But thanks to BlazeMeter's mature, well-defined API layer, we took a stab at developing an integration plugin ourselves. By doing so, we were able to add performance testing automation to our CD pipeline. We contributed the XL Release BlazeMeter plugin to the community, and you can find it here. You can skip ahead to the next section to see how to use it.
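To give a sense of what the plugin does under the hood, here is a minimal sketch of constructing a "start test" call against BlazeMeter's REST API. The endpoint path and the key-id/secret basic-auth scheme shown here are illustrative assumptions, not taken from the plugin's source; consult the BlazeMeter API documentation for the exact contract for your account.

```python
import base64

# Assumed base URL for BlazeMeter's v4 REST API (illustrative).
BASE_URL = "https://a.blazemeter.com/api/v4"

def build_start_request(test_id, api_key_id, api_key_secret):
    """Build the (url, headers) pair for a hypothetical 'start test' POST.

    Separating request construction from the HTTP call keeps the logic
    easy to unit-test without touching the network.
    """
    token = base64.b64encode(
        f"{api_key_id}:{api_key_secret}".encode()
    ).decode()
    url = f"{BASE_URL}/tests/{test_id}/start"
    headers = {"Authorization": f"Basic {token}"}
    return url, headers

# A real integration would now issue the POST (e.g. with requests) and
# read the run identifier from the JSON response before polling status.
```

The same construction pattern applies to the other API calls a plugin needs (uploading test data, polling run status, fetching results).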
Here is what our CD pipeline looks like in XL Release today:
As you can see, our CD Pipeline includes four steps:
Pre-Release: a housekeeping phase we run before deployment. The purpose of this step is to ensure all of our tasks are in the lifecycle. For example, we use this step to ensure pull-requests are associated with Jira tasks.
Development: The first environment for deploying applications. We use Ansible to set up the environment from scratch every time, we integrate with Slack to get notifications on our channels, and we use XL Deploy for code and artifact deployment.
ETE: The end-to-end testing environment. This phase includes two BlazeMeter tasks. First, we take generated test data (from the pre-release phase) and upload it to BlazeMeter. This data includes authorization tokens and other test data. The second phase is the test itself, which in this case is a functional API test. In addition, this step has acceptance tests, Jira status updates and more.
QA: A replica of production, where we run a full-scale BlazeMeter performance test with thresholds that fail the deployment pipeline if certain performance criteria aren't met. Because we can run it more often (daily), we know exactly which commits are impacting performance, giving us faster feedback and the ability to remediate issues ASAP.
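The QA-stage threshold gate can be sketched as a simple check of summary metrics against limits, failing the stage when any limit is exceeded. The metric names and limits below are made-up examples for illustration, not BlazeMeter's actual result schema.

```python
# Hypothetical pipeline gate: compare a test run's summary metrics
# against configured limits and report every violation.
def check_thresholds(summary, thresholds):
    """Return a list of human-readable violations; an empty list means pass."""
    violations = []
    for metric, limit in thresholds.items():
        value = summary.get(metric)
        if value is not None and value > limit:
            violations.append(f"{metric}={value} exceeds limit {limit}")
    return violations

# Example: a run that meets its criteria...
passing = check_thresholds(
    {"avg_response_ms": 420, "error_pct": 0.8},
    {"avg_response_ms": 500, "error_pct": 1.0},
)
# ...and one that should fail the pipeline stage.
failing = check_thresholds(
    {"avg_response_ms": 780, "error_pct": 0.8},
    {"avg_response_ms": 500, "error_pct": 1.0},
)
# In a pipeline task, a non-empty result would raise or exit non-zero,
# which is what stops the deployment from progressing.
```

Collecting all violations (rather than stopping at the first) gives the team a complete picture of what regressed in a single run.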
Our pipeline is fully automated and hands-off. Now let's see how you can use it yourself.
Go to the XL Release BlazeMeter plugin page on GitHub. It's part of the XebiaLabs Community on GitHub.
Copy the plugin JAR file into the SERVER_HOME/plugins directory of XL Release.
Configure your BlazeMeter URL and API Key in the Shared Configuration.
Go to XL Release.
Add a Test Phase and choose the 'BlazeMeter' plugin from the dropdown menu.
Fill in your test's details: API key, Test ID, Workspace ID, polling interval, etc. Set your polling interval according to the length of the test.
Important: Don't forget to configure your test in BlazeMeter.
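The polling interval you configure in step 6 drives a wait loop of roughly this shape: the task repeatedly asks BlazeMeter for the run's status until it reaches a terminal state or a timeout expires. This is a sketch of that pattern, not the plugin's actual code; the status strings and the injected fetch_status callable are assumptions made so the loop stays self-contained and testable without a network.

```python
import time

def wait_for_test(fetch_status, poll_interval_s=30, timeout_s=3600,
                  sleep=time.sleep):
    """Poll fetch_status() every poll_interval_s seconds until the run
    reaches a terminal state ('ENDED' or 'FAILED' here, for illustration)
    or timeout_s elapses."""
    waited = 0
    while waited <= timeout_s:
        status = fetch_status()
        if status in ("ENDED", "FAILED"):
            return status
        sleep(poll_interval_s)
        waited += poll_interval_s
    raise TimeoutError("test did not finish within the timeout")

# Simulated status sequence standing in for successive API responses.
statuses = iter(["CREATED", "RUNNING", "RUNNING", "ENDED"])
result = wait_for_test(lambda: next(statuses),
                       poll_interval_s=1, sleep=lambda s: None)
```

A long performance test with a short interval wastes API calls, while a short test with a long interval delays the pipeline, which is why the interval should be sized to the test's expected duration.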
When running your test, you can open it up and follow a link that takes you to the test results in BlazeMeter:
API Functional Test:
Performance Test:
We use these reports for our own analysis, and also for showing the product team where we are.
As I mentioned, these tests run automatically a few times a day, every time we add new code. Continuous testing is therefore built into our Continuous Integration and Continuous Deployment process.
Published at DZone with permission of Jaco Greyling, DZone MVB. See the original article here.