
Why Load Testing is So Hard


Load testing isn't easy; we get it. Here's a look at why load testing can be such a pain, and what we can do about it.


Performance matters.

Today, a poorly-performing app isn't just cursed at—it's abandoned. Performance issues don't just taint the user experience—they directly impact the success of the app you've worked so hard on.

So why aren't teams religiously checking that their latest updates don't degrade performance? Because load testing is hard. It's tedious to create and maintain load testing scripts for even a relatively simple app. And the faster the app is evolving, the greater the pain.

Open source load testing tools like JMeter and Gatling allow developers and testers to load test without cost-prohibitive tools and infrastructure. However, these tools won't start (or continue) to deliver valuable feedback unless you're willing to dedicate a fair amount of time to them. Few of us are excited about resolving hundreds of protocol-level discrepancies for each step of even a very simple test.

In this 2-part blog series, I'm going to look at why load testing is so hard, then introduce a simpler, faster approach designed specifically for developers and testers working in Continuous Delivery/DevOps environments.

The traditional way of approaching load test scripting is at the protocol level (e.g., HTTP). This includes load testing with open source tools such as JMeter and Gatling, as well as legacy tools including LoadRunner. Although simulating load at the protocol level has the advantage of being able to generate large concurrent load from a single resource, that power comes at a cost. The learning curve is steep and the complexity is easily underestimated.
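To make the trade-off concrete, here is a minimal sketch (not a real load testing tool) of what "large concurrent load from a single resource" means at the protocol level: each virtual user is just a thread firing raw HTTP requests, with no rendering, no JavaScript, and no browser cache. The throwaway local server stands in for the system under test.

```python
import http.server
import threading
import urllib.request

# Throwaway local server standing in for the system under test.
class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # silence per-request logging
        pass

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/"

results = []

def virtual_user(n_requests=10):
    # A "virtual user" is just a loop of raw HTTP requests: cheap to run
    # by the hundreds on one machine, but blind to browser behavior.
    for _ in range(n_requests):
        with urllib.request.urlopen(url) as resp:
            results.append(resp.status)

threads = [threading.Thread(target=virtual_user) for _ in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()
server.shutdown()

print(len(results))  # 20 users x 10 requests = 200 protocol-level hits
```

Twenty threads comfortably generate hundreds of requests from one process, which is exactly why protocol-level tools scale so well. The hard part, as the rest of this article shows, is making those raw requests faithful to what a browser actually sends.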

The main culprit for this complexity is JavaScript. In 2011, a page typically carried less than 100 KB of JavaScript, which translated into roughly 50 HTTP requests. That has since doubled: today we see on average 200 KB of JavaScript per page, driving more than 100 requests per page.
Even running a search on a simple search page involves XMLHttpRequests processed asynchronously after page load. You also find things such as dynamic parsing and execution of JavaScript, the browser cache being seeded with static assets, and calls to content delivery networks.
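A rough sense of the fan-out is easy to demonstrate. The toy page below is hypothetical, but the mechanism is real: every `script`, `img`, and stylesheet reference is a request the browser fires automatically, and a protocol-level script must replay each one explicitly. And this count doesn't even include the XMLHttpRequests fired after page load.

```python
from html.parser import HTMLParser

# Toy page: even a "simple" page references several assets that a
# browser fetches automatically but a protocol-level script must replay.
PAGE = """
<html><head>
  <link rel="stylesheet" href="/css/app.css">
  <script src="/js/vendor.js"></script>
  <script src="/js/app.js"></script>
</head><body>
  <img src="/img/logo.png">
  <img src="https://cdn.example.com/banner.png">
</body></html>
"""

class AssetCounter(HTMLParser):
    def __init__(self):
        super().__init__()
        self.assets = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag in ("script", "img") and "src" in attrs:
            self.assets.append(attrs["src"])
        elif tag == "link" and "href" in attrs:
            self.assets.append(attrs["href"])

parser = AssetCounter()
parser.feed(PAGE)
print(len(parser.assets))  # 5 extra requests for one page load
```

One user-visible action, six requests; scale the page up to a modern single-page app and you quickly reach the 100-plus requests described above.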

For a more business-focused example, consider the SAP Fiori demo app. Assume we want to load test two simple actions: navigating to a page and then clicking on the "My Inbox" icon. This actually generates more than 120 HTTP requests at the protocol level.

When you start building your load test simulation model, this will quickly translate into thousands of protocol-level requests that you need to faithfully record and then manipulate into a working script. You must review the request and response data, perform some cleanup and extract relevant information to realistically simulate user interactions at a business level. You can't just think like a user; you also must think like the browser.

You need to consider all the other functions that the browser is automatically handling for you and figure out how you're going to compensate for that in your load test script. Session handling, cookie header management, authentication, caching, dynamic script parsing and execution, taking information from a response and using it in future requests...all of this needs to be handled by your workload model and script if you want to successfully generate realistic load. Basically, you become responsible for doing whatever is needed to fill the gap between the technical and business level. This requires both time and technical specialization.
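The "taking information from a response and using it in future requests" step is usually called correlation, and it is where much of the scripting time goes. Here is a minimal sketch with hypothetical data: the server embeds a per-session CSRF token that the browser would carry forward automatically, but a protocol-level script has to scrape it out and re-inject it by hand.

```python
import re

# Simulated first response: the server embeds a per-session CSRF token
# that the browser would carry into the next request automatically.
login_page = """
<form action="/search" method="post">
  <input type="hidden" name="csrf_token" value="a1b2c3d4e5">
  <input type="text" name="q">
</form>
"""

# Correlation step: scrape the dynamic value out of the response body...
match = re.search(r'name="csrf_token" value="([^"]+)"', login_page)
token = match.group(1)

# ...and inject it into the payload of the follow-up request. In JMeter
# this is a Regular Expression Extractor; in Gatling, a check with saveAs.
next_payload = {"q": "inbox", "csrf_token": token}
print(next_payload["csrf_token"])  # a1b2c3d4e5
```

Multiply this by every dynamic value in every step of the scenario, and the "thousands of protocol-level requests" mentioned earlier stop being an abstraction.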

You might be thinking, "Okay, we'll use 'record and playback' tools, then." Theoretically, you could just place a proxy between your browser and the server, record all the traffic going through and be set. Unfortunately, it's not quite that simple. Even though you're interacting at the UI level, the testing is still based on the protocol level. Assume we were looking at the traffic associated with one user performing the simple "click the inbox" action described above. When we record the same action for the same user two different times, there are tens, if not hundreds, of differences in the request payload that we'd need to account for.
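Why naive playback fails is easy to see by diffing two recordings. The payloads below are hypothetical, but the pattern is typical: the same user performing the same action produces requests that differ in every session-scoped or time-scoped field, and each differing field is something the script author must correlate or regenerate.

```python
# Two recordings of the same "click the inbox" action by the same user.
# Hypothetical payloads: functionally identical, but full of values that
# change on every recording session.
recording_1 = {
    "action": "open_inbox",
    "session_id": "SID-8f31ac",
    "csrf_token": "tok-991",
    "client_ts": "1518000000123",
}
recording_2 = {
    "action": "open_inbox",
    "session_id": "SID-c07d41",
    "csrf_token": "tok-374",
    "client_ts": "1518000042987",
}

# Every differing field is a value that must be correlated or
# regenerated -- naive replay of recording_1 would be rejected.
dynamic_fields = sorted(
    k for k in recording_1 if recording_1[k] != recording_2[k]
)
print(dynamic_fields)  # ['client_ts', 'csrf_token', 'session_id']
```

Three dynamic fields out of four in a toy payload; in a real recording with over a hundred requests per action, the differences run into the tens or hundreds, which is exactly the cleanup burden described above.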

Of course, you can resolve those differences with some effort. Unfortunately, when the application changes again, you're back to square one. The more frequently your application changes, the more painful and frustrating this becomes.



Published at DZone with permission of

