How to Apply the 80:20 Rule to Performance Testing
Performance testing is essential to maintaining performance, and the practice continues to evolve in this age of continuous delivery and integration. Here's an overview of the 80:20 rule as applied to performance testing, from overcoming the fear of testing to testing complex scenarios.
Performance testing continues to change rapidly in the age of continuous integration and continuous delivery. Yet many companies remain apprehensive about large performance tests, which are seen as daunting and time-consuming. This post argues that teams should integrate performance testing earlier in the development cycle, even if it doesn't fully cover all the flows in the website or application, because the resulting data will still be very valuable.
The Fear of Testing
Despite the transformation of software development cycles, performance testing, when it is done at all, often remains relegated to the later stages of the product's lifecycle. Part of the reason is fear: testing is scary. Getting unit, integration, and functional testing right is hard enough, and for many teams, adding performance testing on top of that sounds incredibly difficult, because it requires a different set of expertise just to get started. On top of that, there is the impression that it takes a lot of time.
Back in the days before Agile and continuous integration, when the waterfall model dictated that software was built first and stabilized at the end, we could bring in a performance expert to work with the dev teams and create a realistic simulation of user traffic (the flows, plus a calculated number of users for each). However, this not only consumed a lot of time and resources; you also had to wait to get value from the test results before you could start fixing performance issues, which is harder to do in the late stages of the development process.
Using the 80:20 Approach to Performance Testing
In a world where applications are constantly changing, such an approach is ineffective and inefficient. The frequent build-and-release cadence of continuous integration means you need to find performance bottlenecks early and have an easy way to overcome them. I suggest taking inspiration from the 80:20 rule, the Pareto principle, and starting to test small and early.
Teams should spend 20% of the effort to get 80% of the knowledge needed to get started. You'll have more tests that are easy to run many times at various stages of the development process, which, in the big picture, gives you plenty of data to work with. Connecting simple scenarios to the CI pipeline ensures you keep on learning as the app continues to evolve.
Other ways companies have increasingly been saving time and testing faster include a shift toward open source performance testing tools and leveraging the cloud to run load tests globally at massive scale.
It's important to accept that while teams want their performance test data to be 100% accurate (the number of concurrent users, the functions being triggered, and requests per second, as well as coverage of all the flows), you don't have to adhere to this all-or-nothing strategy. A "good enough" approach leaves room for early testing, and it's also easier and cheaper to find and fix performance issues early on. When working with CI pipelines, we have the chance to run a lot of tests, all of the time, so even if the tests and results are incomplete, you will still get value.
Work Toward Testing Simple Scenarios
So you might be asking: how can we apply this practically in real scenarios? When running performance tests on a static site, for example, instead of creating flows you can just hit random pages (choosing, say, from the most visited pages) and iterate over that list. Similarly, when testing an API, a mobile app, or microservices, hit endpoints, not flows. Work toward testing simple scenarios first: for instance, if several API endpoints together make up a business transaction, simulate just that transaction.
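The "hit random pages, not flows" idea can be sketched in a few lines of plain Python. This is a minimal illustration, not a replacement for a real load testing tool: the base URL and page list are placeholders you'd swap for your own site's most-visited pages.

```python
# Minimal sketch: N virtual users each fetch randomly chosen pages
# from a list, and we collect per-request latencies.
import random
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

def visit(base_url, pages):
    """One virtual-user request: fetch a random page, return latency in seconds."""
    page = random.choice(pages)
    start = time.perf_counter()
    with urlopen(base_url + page) as resp:
        resp.read()
    return time.perf_counter() - start

def run_load_test(base_url, pages, users=10, iterations=5):
    """Simulate `users` concurrent users, each making `iterations` requests."""
    with ThreadPoolExecutor(max_workers=users) as pool:
        futures = [pool.submit(visit, base_url, pages)
                   for _ in range(users * iterations)]
        return [f.result() for f in futures]
```

Even a toy like this, pointed at a staging server and run on every build, starts producing the latency data the 80:20 approach is after.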
I want to demonstrate how to do this with BlazeMeter. The basic scenario of testing a static content site can be handled by BlazeMeter's URL test: list all the resources you'd like to test, set the number of users, and click 'play' to generate the traffic.
You can also create a very basic JMeter test for your REST API. Then go to your CI tool of choice and configure BlazeMeter's plugin for it. This lets you add a BlazeMeter test step to your build process, in which you configure a performance test to run against your application when it's built. You can learn more about this process in BlazeMeter's documentation.
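If you prefer to wire the JMeter test into CI directly rather than through a plugin, the same idea boils down to one CI build-step command. This is a config fragment sketch: it assumes Apache JMeter is installed on the build agent and that `api-test.jmx` is the plan you exported from the JMeter GUI.

```shell
# Run the JMeter plan headless (-n), log raw results (-l),
# and generate an HTML report dashboard (-e -o).
jmeter -n -t api-test.jmx -l results.jtl -e -o report/
```

The `results.jtl` file is what a later build step can inspect to decide whether the build passes.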
Not only will you be able to view the results of each specific build in BlazeMeter, and even set fail criteria on the test (which your CI tool can later use to help determine whether the build succeeded), but you will also see your application's performance trends and how it progressed between builds. So even if your application's performance is currently up to par, you'll be able to predict how it will behave in later builds, and solve issues before they occur.
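The "fail criteria" idea is easy to sketch yourself against JMeter's raw output. The snippet below reads a JTL results file in JMeter's default CSV format (the `elapsed` and `success` columns are standard) and applies two illustrative thresholds; the threshold values and function name are my own, not a BlazeMeter API.

```python
# Sketch of a CI fail-criteria check over a JMeter CSV results file.
import csv
import statistics

def check_fail_criteria(jtl_path, max_p95_ms=800, max_error_rate=0.01):
    """Return (passed, p95_latency_ms, error_rate) for a results.jtl file."""
    elapsed, errors, total = [], 0, 0
    with open(jtl_path, newline="") as f:
        for row in csv.DictReader(f):
            total += 1
            elapsed.append(int(row["elapsed"]))
            if row["success"].lower() != "true":
                errors += 1
    p95 = statistics.quantiles(elapsed, n=20)[-1]  # 95th-percentile latency
    error_rate = errors / total
    passed = p95 <= max_p95_ms and error_rate <= max_error_rate
    return passed, p95, error_rate
```

A CI step that calls this and exits non-zero when `passed` is false turns the performance test into a build gate; storing the p95 per build gives you the trend line.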
Next Steps: Testing Complex Scenarios
With all that said, "large" performance tests are still very much needed; it remains important to run a real-world simulation, a complex scenario with real user flows. Taking the "good enough" approach, you can add another easy step in between: take all the basic scenarios you created and continue to run, configure them to run simultaneously, and start getting data from the scenarios you already have.
Along these lines, BlazeMeter lets you run several basic tests in parallel, each testing something else (i.e., loading a specific part of the app), making it easy to get a more comprehensive view of how the app behaves. BlazeMeter's multi-test feature allows you to link several of your basic tests, run them simultaneously, and see all the results on the same graph.
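Conceptually, a multi-test is just several independent basic scenarios kicked off at the same time, with their results gathered into one report. The sketch below shows that idea in plain Python; the scenario names and result dictionaries are illustrative stand-ins for your own tests, not BlazeMeter's actual mechanism.

```python
# Sketch of the multi-test idea: run independent scenarios concurrently
# and collect all their results under one roof.
from concurrent.futures import ThreadPoolExecutor

def run_multi_test(scenarios):
    """scenarios: dict of name -> zero-arg callable returning a result dict."""
    with ThreadPoolExecutor(max_workers=len(scenarios)) as pool:
        futures = {name: pool.submit(fn) for name, fn in scenarios.items()}
        return {name: f.result() for name, f in futures.items()}
```

Because the scenarios run simultaneously, the app sees the combined load, which is what makes the merged results a first approximation of a real-world simulation.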
You can take advantage of the CI plugin here as well: simply configure the BlazeMeter step to run the multi-test you've just created with your build. This way, you'll get the same BlazeMeter reporting goodness, but for a much more advanced scenario.
So to sum up: while getting 100% coverage for your tests is a great goal to have, and it is very much important, it's not always the best approach in the short term. You can get a lot of information, and fix most issues, by running simple tests. When approaching performance testing in the CI process, just do it. Test early, test small, but just test!
Published at DZone with permission of Jason Silberman . See the original article here.