Getting Started With Performance Testing, for Developers
Studies show that the need to improve performance is driving DevOps adoption. Learn how performance testing is integral to DevOps.
Why do performance testing? Customer surveys, such as Interop’s 2018 State of DevOps Report, show that there’s a strong need for companies to deliver higher quality and higher performance software. In the Interop report, 73% of respondents said that the need for higher quality is a key driver for adopting a DevOps methodology.
Figure 1: Software Quality and Performance Drive the Need for DevOps, InterOp 2018 State of DevOps Report.
DevOps (and lean and agile) are all about breaking the old adage that you can have it “fast, good, or cheap: pick two.” Today, we want all three. The DevOps methodology has been shown to deliver higher-quality software faster and at lower cost.
Along with DevOps adoption comes the requirement for closer collaboration across teams. It also means that developers must be more closely involved in the software QA process. This “shift left” means that developers are testing early in the development cycle. Generally, people are thinking about functional testing here, i.e. unit tests. But performance testing must also be part of the developer testing process, early in the development cycle.
Performance testing is an essential part of DevOps and Continuous Integration (CI) workflows and helps you deliver the performance needed to meet customer expectations. And, today, it’s all about the user experience (UX). Running continuous load tests as part of your CI pipeline allows you to see performance trends over time. This way, you can make sure that code, infrastructure, or other changes aren’t introducing a performance regression.
The bottom line is that performance testing helps both the top line (revenue) and the bottom line (profitability) of your business. Grow your top line through higher user satisfaction with your website or application. Grow your bottom line by reducing development costs.
Performance Testing 101
What is performance testing? In the simplest terms, performance testing is about applying a load to your system to see how it responds. There are several different types of performance tests, including:
- Load Tests: Test for an expected / normal amount of traffic
- Stress Tests: Understand the failure modes of the system at high/stress loads, typically 2x to 3x normal loads or more
- Endurance/Soak Test: Find issues (like memory leaks) over an extended period of operation
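With a tool like k6, these test types differ mainly in how the Virtual User load is shaped over time. As a sketch (the VU counts and durations below are illustrative assumptions, not recommendations), each type can be expressed as a `stages` configuration:

```javascript
// Load test: ramp up to the expected normal traffic level and hold it.
export const loadTestOptions = {
  stages: [
    { duration: '2m', target: 100 },  // ramp up to normal load
    { duration: '10m', target: 100 }, // hold steady
    { duration: '2m', target: 0 },    // ramp down
  ],
};

// Stress test: push to 2x-3x normal load to find the failure modes.
export const stressTestOptions = {
  stages: [
    { duration: '5m', target: 300 },
    { duration: '10m', target: 300 },
    { duration: '2m', target: 0 },
  ],
};

// Endurance/soak test: moderate load held for hours to surface
// issues like memory leaks.
export const soakTestOptions = {
  stages: [
    { duration: '5m', target: 100 },
    { duration: '4h', target: 100 },
    { duration: '5m', target: 0 },
  ],
};
```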
Generating Load With Virtual Users
When it comes to performance testing websites and web apps, the way we apply the load is by generating Virtual Users (VU). Virtual Users operate concurrently. They simulate the behavior of real users that visit your site or use your app. This behavior can be simple or very complex, depending on your application.
Users may need to login and then perform various functions. If you have an ecommerce website, the users may put items into a shopping cart and then go through a checkout process. For a SaaS application, there are any number of things that a user might be doing in your application. For a CRM application, they may be adding a contact or sales opportunity or running a report.
Figure 2: Virtual User (VU) Ramp Up for a Stress Test.
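A user journey like the ecommerce example above can be scripted directly. Here is a minimal k6 sketch (it runs under the k6 runtime, not Node.js; the URLs, credentials, and item IDs are placeholders, not a real site):

```javascript
import http from 'k6/http';
import { check, sleep } from 'k6';

// One Virtual User iteration: log in, add an item to the cart, check out.
export default function () {
  // Log in (placeholder endpoint and credentials)
  let res = http.post('https://example.com/login', {
    username: 'testuser',
    password: 'testpass',
  });
  check(res, { 'logged in': (r) => r.status === 200 });
  sleep(2); // pause like a real user reading the page

  // Put an item into the shopping cart
  res = http.post('https://example.com/cart', { itemId: '1234' });
  check(res, { 'item added': (r) => r.status === 200 });
  sleep(3);

  // Go through checkout
  res = http.post('https://example.com/checkout');
  check(res, { 'checkout ok': (r) => r.status === 200 });
}
```

Each concurrent VU loops through this function, so the scripted journey is replayed many times in parallel to generate the load.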
What Problems Can Performance Testing Uncover?
Performance testing is focused on the performance of your backend software, servers, and infrastructure. Here are a few of the issues you can find with performance testing:
- Code bottlenecks and concurrency bugs
- Infrastructure limitations — bandwidth, disk I/O, etc.
- Web server configuration constraints
- Licensing constraints, e.g. not enough licenses for the level of traffic for software such as firewalls
- Slow APIs
- Performance regressions — declines in performance over time; hard to catch without performance trend analysis across multiple iterations of the same tests
In addition to the above, load testing, and the corresponding performance optimizations you implement as a result, can help you to reduce your SaaS infrastructure costs.
What do you want to test? Website or web app? API? Microservice? As noted above, for websites and web apps, we generate Virtual Users to create the load. For an API, you are generally more interested in generating Requests per Second (RPS) than in the number of VUs. Each Virtual User can typically generate ten or more requests per second, depending on the test scenario. So, you don’t need as many VUs to generate a certain number of RPS.
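For rate-oriented API tests, k6 can drive a fixed arrival rate and allocate however many VUs are needed to sustain it. A sketch, assuming a hypothetical endpoint and a 500 RPS target (this runs under the k6 runtime, not Node.js):

```javascript
import http from 'k6/http';

// Target a request rate rather than a VU count. The
// constant-arrival-rate executor starts iterations at a fixed rate
// and draws on a pool of pre-allocated VUs to keep that rate up.
export const options = {
  scenarios: {
    api_rps: {
      executor: 'constant-arrival-rate',
      rate: 500,            // 500 iterations (one request each) per second...
      timeUnit: '1s',
      duration: '5m',
      preAllocatedVUs: 50,  // ...often needs far fewer than 500 VUs
      maxVUs: 100,
    },
  },
};

export default function () {
  http.get('https://api.example.com/v1/items'); // placeholder endpoint
}
```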
Load Testing Methodology
Requirements: Start with defining your requirements. What parts of your system are most critical to performance? Ultimately, what parts affect the user experience from a performance perspective? This question may be hard to answer early on. So, make your best guess at the critical components. For example, what API endpoints are utilized the most? Once you’ve identified the key components, what are the response time targets for each? You also need to consider the most common user journeys, or sets of user actions, that connect the critical components. This will form the basis of your load tests.
Capture a User Journey: For website or web app testing, you can easily capture a realistic user journey by recording a browser session (more on this below). Start small: our “manifesto” is to start small and work up from there when building your test scenarios. Small tests are better than no tests, and they make it easier to get started and figure out how things are working.
Baseline Test: Next, run tests to establish a baseline for performance at “normal” or ideal loading. The baseline test may only require a small number of VUs (e.g. 10). This will be used as a basis for comparison for later tests. Typically, you’ll want to run your baseline test for at least 5 to 10 minutes. Baseline tests let you establish the threshold levels that can be used to trigger pass/fail results. (This will be important later when you integrate load testing into your Continuous Integration pipeline).
Run a Larger Test: Bump up the number of VUs and run the same test scenario as in the baseline test. Compare results. Did your response time go up significantly?
Rinse and Repeat: Load testing requires a lot of iterations. Repeat the above process for other user journeys, for example. Check your performance trend over many iterations of the same set of tests.
Run a Stress Test: See how your application performs under a high load. Iterate your stress test.
Automate: Integrate your load testing tool into your Continuous Integration (CI) pipeline. Run nightly load tests as part of your regular build process. Run larger QA tests less frequently as part of your pre-production testing process. Larger load tests take time and may not be practical as part of your nightly builds.
There are at least three ways to create performance tests:
First, run a simple URL test where you specify the URL of the website/web app/etc. to be tested, as shown in Figure 3 below.
Figure 3: URL Test Creation.
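In k6, the simple URL test amounts to a few lines of script. A minimal sketch, assuming a placeholder URL (runs under the k6 runtime, not Node.js):

```javascript
import http from 'k6/http';
import { sleep } from 'k6';

// Simplest possible test: each VU repeatedly requests one URL.
export const options = { vus: 10, duration: '5m' };

export default function () {
  http.get('https://example.com/'); // placeholder URL
  sleep(1); // brief pause between iterations
}
```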
Second, capture a user journey by recording a session in your browser, for example, by using a Chrome extension, or use other tools that create a HAR file. This method allows you to create very realistic user scenarios where the Virtual Users will perform exactly the same actions as in the recording. The load testing tool will automatically convert the recorded session into a test script.

Third, write your test script from scratch. k6 tests are scripted in JavaScript, so you can build a scenario directly in code, which gives you the most control over the test logic.
How Many VUs Do You Need?
We often find that people will overestimate the number of Virtual Users needed to test their site or application. Calculate the number of Virtual Users (VU) you need for your load test using this formula:
Virtual Users = (Hourly Sessions x Average Session Duration in seconds) / 3,600
Google Analytics, Adobe Analytics and other tools are great for seeing what your average and peak hourly traffic looks like. You may want to create your load test at 2x to 3x above your typical peak. If you’re running a stress test, as discussed above, your loading could be much higher.
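The formula above can be sketched as a small helper (the traffic numbers in the example are illustrative assumptions):

```javascript
// Virtual Users = (Hourly Sessions x Avg Session Duration in seconds) / 3,600
function virtualUsersNeeded(hourlySessions, avgSessionSeconds) {
  return Math.ceil((hourlySessions * avgSessionSeconds) / 3600);
}

// Example: 10,000 sessions/hour with a 90-second average session
// needs (10,000 * 90) / 3,600 = 250 concurrent Virtual Users.
console.log(virtualUsersNeeded(10000, 90)); // prints 250
```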
Test Execution Modes
With tools such as k6, you have two test execution modes: Local and Cloud Execution.
Local Execution: This mode is great for developers. Load test early in the development cycle and run small tests behind the firewall. Local execution is command-line driven. Stream your k6 test results to Load Impact Insights or other tools, such as Grafana, for analysis (more on results analysis below). Local execution with k6 requires that you download and install k6 in your local environment.
Cloud Execution: This mode provides access to a distributed global network of load generators. Generate loads from multiple locations around the world (what we call “load zones”) to make sure that your customers and users in all regions get good performance. Cloud execution allows you to run much larger load tests (i.e. more Virtual Users) than you can typically run on your local machine.
A common question is: “How long should my test run?” The first requirement is that your tests must run long enough for the system to stabilize under the applied load. Here are some guidelines for test duration, depending on the target:
- Website and web app tests: run long enough to complete 2-5 iterations of the user scenario for all VUs.
- API tests: run for 5+ minutes for simple API endpoint tests.
More Advanced Test Scripting Considerations
Sleep (aka Think) Time
When you are performance testing a website or web application, you typically need to insert some pauses that represent the time that a real person would spend looking at something on the page or thinking about it. This is called “sleep time” and causes the virtual users to pause between actions. Sleep time is also used to control the number of requests per second (RPS) for API testing.
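In k6, sleep time is the `sleep()` call between actions. A sketch with placeholder pages (runs under the k6 runtime, not Node.js):

```javascript
import http from 'k6/http';
import { sleep } from 'k6';

export default function () {
  http.get('https://example.com/products'); // placeholder page
  sleep(Math.random() * 4 + 1); // pause 1-5 s, like a user scanning the list

  http.get('https://example.com/products/42'); // placeholder product page
  sleep(3); // fixed 3 s think time before the next iteration
}
```

Because each VU pauses between requests, adding or removing sleep time also raises or lowers the effective request rate, which is how it doubles as an RPS control for API tests.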
Parallel Thread Processing/Multiple Parallel Requests
Modern load testing tools like k6 can generate multiple parallel requests per virtual user; older tools such as JMeter can only generate a single request at a time. The advantage of generating multiple parallel requests is that it more closely models how a browser works: browsers open multiple concurrent connections to each domain, usually in the range of 2 to 8 per domain. It’s important to test systems with a realistic number of TCP connections, since connections are a finite resource.
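In k6, parallel requests are issued with `http.batch()`. A sketch that fetches a page and its static assets concurrently, the way a browser would (the paths are placeholders; runs under the k6 runtime, not Node.js):

```javascript
import http from 'k6/http';

export default function () {
  // All four requests are issued in parallel over multiple connections,
  // mimicking a browser loading a page plus its assets.
  http.batch([
    ['GET', 'https://example.com/'],
    ['GET', 'https://example.com/static/app.js'],
    ['GET', 'https://example.com/static/style.css'],
    ['GET', 'https://example.com/static/logo.png'],
  ]);
}
```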
It’s often the case that you need the ability to import parameterized data from a file. This is useful for cases such as testing a site that requires a login. The data file can contain Usernames and Passwords, for example.
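With k6, one way to do this is a `SharedArray` loaded from a data file at init time. A sketch, assuming a hypothetical `users.json` file shaped like `[{"username": "u1", "password": "p1"}, ...]` (runs under the k6 runtime, not Node.js):

```javascript
import http from 'k6/http';
import { SharedArray } from 'k6/data';

// Loaded once and shared across all VUs to save memory.
const users = new SharedArray('users', function () {
  return JSON.parse(open('./users.json')); // hypothetical credentials file
});

export default function () {
  // Each VU picks a different user based on its VU number (__VU is 1-based).
  const user = users[(__VU - 1) % users.length];
  http.post('https://example.com/login', { // placeholder login endpoint
    username: user.username,
    password: user.password,
  });
}
```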
You generally don’t want to include third-party services, such as Content Delivery Networks (CDNs) and payment processors, in your load tests. First, because you have no control over the performance of these services. And second, because the third party likely will not take kindly to you loading down their service.
Interpreting Your Performance Test Results
What’s a “good” test result? What’s a “bad” one? Generally, the following should occur for a positive test result:
- The request rate (throughput) should follow the number of active VUs
- The response time should stay stable under increasing VU loading
- The request failure rate is within your accepted range
- Tests haven’t exceeded Thresholds that you have set
The example shown in Figure 5 illustrates a good test result where the response time (blue line) stays relatively flat and stable throughout the test.
Figure 5: Graphical View of Load Test Results.
Figure 6 shows an example where the response time (blue line) starts to increase dramatically after the test gets to the point where there are more than about 4200 active VUs. This system is overloaded.
Figure 6: Test Results Show that this System Under Test is Overloaded.
Scalability of Your Performance Testing Methodology
Everything Is Code
Modern load testing tools treat tests as code, which can be version controlled and shared just like application code. This facilitates collaboration across teams, another major tenet of the DevOps methodology. With a Software as a Service (SaaS) load testing tool, you can also store your test results in the cloud and easily share those results for analysis and decision making.
Automation, CI Integration, and Continuous Testing
DevOps puts a lot of focus on automation. Automating your load testing is a key step in maturing your process and fits nicely into a DevOps methodology. Load testing is easily integrated with many continuous integration (CI) tools to automate the process.
We recommend that you run baseline performance tests as part of a daily build sequence. How you do that may vary from CI tool to CI tool. No matter how you trigger it, in essence, your CI integration will tell your CI tool to trigger the external performance test. In turn, your performance test script will perform the test or tests you want and return its results. In most CI integrations, that result will be a simple pass/fail.
k6 has a feature called Thresholds that allows you to create a pass/fail outcome that can be used in this CI pipeline integration.
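A sketch of how Thresholds look in a test script (the limits are illustrative assumptions; runs under the k6 runtime, not Node.js): if any threshold is breached, k6 exits with a non-zero status, which the CI step can treat as a failed build.

```javascript
import http from 'k6/http';

export const options = {
  vus: 10,
  duration: '5m',
  thresholds: {
    // Fail if the 95th-percentile response time exceeds 500 ms...
    http_req_duration: ['p(95)<500'],
    // ...or if more than 1% of requests fail.
    http_req_failed: ['rate<0.01'],
  },
};

export default function () {
  http.get('https://example.com/'); // placeholder URL
}
```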
It’s up to you in your CI integration to decide if a fail passback is enough to fail the whole build. Regardless, you’ll also want to be notified of a test failure via your collaboration tool of choice, such as Slack or email.
In that scenario, your CI integration needs to trigger the performance test, receive the test pass/fail notification, behave appropriately based on that passback notification, log the result, and, in some cases, notify you via a communications channel.
As noted earlier, performance testing requires many iterations. To be most effective, it needs to be part of a continuous testing methodology that runs throughout your software development lifecycle.