Getting Started With Load Testing
Read on to get an expert's opinion on the shift left movement in testing, the best frameworks for a given situation, cloud-based load services, and more!
I recently did a webinar that introduced some beginner through intermediate techniques on getting started with cloud load testing. Attendees raised some interesting questions, so I wanted to aggregate and answer the top questions here.
When should load testing shift left and when should it shift right?
There is certainly a lot of buzz in the industry about shifting left, which can mostly be interpreted as performing testing (including load testing) earlier and continuously within the development lifecycle — particularly for DevOps.
Likewise, the case can also be made that shift right (testing in or close to production environments), is also a good home for load testing.
Personally, I care less about the direction to shift and more about getting sh*t done.
At Flood, we've seen customers go both directions here.
Some customers run large scale load tests in production — primarily because that's the only environment available at the appropriate scale. Load testing in this environment has some drawbacks. Repeatability can be difficult, and there are important factors to consider such as logging, monitoring, and test data management. It does offer valuable insight into how production environments scale and often uncovers a raft of performance issues that might be more difficult to identify in a non-production environment — for example, rate limits, throttles, intrusion/DDOS detection, CDN, and cache performance.
We also have customers who invest heavily in shift left styles of load testing. Most common are customers using our API to integrate load testing with their continuous integration and deployment pipelines. This offers earlier detection of performance defects, sometimes in a more controlled test (smaller scale/dedicated tuning). It can quickly detect configuration issues in the application/infrastructure design without the production noise. With this approach the feedback loop is much tighter, which is consistent with test early, test often schools of thought.
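That tighter feedback loop can be as simple as comparing each build's load test result against the previous build's. Here is a minimal sketch of the idea; the metric name and the 20% tolerance are assumptions for illustration, not any particular tool's API:

```python
# Build-over-build regression gate for a CI step (illustrative only).

def regressed(current_p95_ms: float, baseline_p95_ms: float,
              tolerance: float = 0.20) -> bool:
    """Flag a regression when p95 latency grows by more than `tolerance`."""
    return current_p95_ms > baseline_p95_ms * (1.0 + tolerance)

# 950 ms against an 800 ms baseline stays inside the 20% budget;
# 1,000 ms exceeds it, so the pipeline would fail the build.
```

In practice the baseline would come from the previous build's stored results, and a regression would fail the pipeline stage.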
Why are cloud-based load testing services better than the "legacy" thick client load testing tools everyone’s been using for years?
Let's avoid the debate about which tool is "better" — I think a more useful approach is to consider the differences in terms of load test creation, execution, and analysis.
Traditional load testing tools were generally full-featured, shrink-wrapped software. Commercial licensing made them difficult to share or make available to other colleagues. Test scripts were created with closed-source, proprietary tools. Execution carried significant overhead in provisioning and infrastructure costs, and reporting was done retrospectively, with analysis not easily shared outside your team.
Cloud-based load testing generally supports non-commercial, open source tools with no vendor lock-in. Infrastructure is provisioned on demand or reserved with significant discounts. Reporting and analysis happen in real time and are easily shared via the web.
What’s the value of integrating load testing into CI/CD pipelines?
Continuous integration provides tight feedback loops, which are great for all forms of automated testing. It provides the mechanism for executing load tests and feeding the results back into your decision-making process.
Many Flood IO customers use our API to integrate with popular CI platforms like Jenkins and Buildkite. This lets them automate the provisioning of load test infrastructure, execution, and analysis of results. Some customers take the results integration one step further, flagging tests which have failed to meet SLAs or exceed thresholds.
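That last step can be sketched as follows: flag the metrics that breach their thresholds so the CI job can fail the build. The summary shape, metric names, and limits below are illustrative assumptions, not Flood's actual API response:

```python
import sys

def breaches(summary: dict, thresholds: dict) -> list:
    """Return the names of metrics that exceed their configured limit."""
    return [name for name, limit in thresholds.items() if summary[name] > limit]

# Illustrative numbers only, not a real Flood API payload.
SUMMARY = {"mean_ms": 410.0, "p95_ms": 930.0, "error_rate": 0.002}
THRESHOLDS = {"mean_ms": 500.0, "p95_ms": 1000.0, "error_rate": 0.01}

failed = breaches(SUMMARY, THRESHOLDS)
if failed:
    print("SLA breached:", ", ".join(failed))
    sys.exit(1)  # a nonzero exit fails the pipeline step
```

Here every metric is within its limit, so the step passes; a breach would print the offending metrics and fail the stage.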
Our roadmap has some great features planned around CI/CD pipelines.
We have dedicated performance testers. How would developers and testers do load testing without stepping on their toes?
It's important to acknowledge that performance is everyone's responsibility.
From the marketing team adding more trackers to the site, to the front-end developer swapping out the CSS or JS framework, through to the backend developers building APIs and application services, and the operators of caches, databases, servers, networks, and storage: everyone plays a part in performance.
Don't get your knickers in a knot over who is responsible for performance. Load testing should not be exclusive. Everyone needs to be involved.
A healthy dialogue between a developer and tester might be, "Hey, I've been exploring how this endpoint behaves under load and I noticed it tends to slow down over time when we search on a wildcard."
We really believe in building a distributed load testing platform for everyone and encourage novices to experts to be involved, regardless of job title.
If I don’t have any load tests yet, where should I get started? What tool is easiest to learn?
To simulate Protocol Level Users (for example, HTTP), you can't go past JMeter or Gatling. Protocol-level scripting is not trivial; beware of record and playback myths. You will need a solid understanding of HTTP and the way your application behaves to be proficient.
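One reason record and playback falls short: a recorded script replays stale dynamic values (session IDs, CSRF tokens) that the server issued once and will reject on the next run. A working protocol-level script extracts, or "correlates," those values from each response at runtime. A minimal Python sketch of the idea — the page markup and field name here are made up for illustration:

```python
import re

# Sample of the kind of login page a load test script would fetch first.
SAMPLE_LOGIN_PAGE = """
<form action="/login" method="post">
  <input type="hidden" name="csrf_token" value="a1b2c3d4">
</form>
"""

def extract_csrf_token(html: str) -> str:
    """Pull the per-session CSRF token out of the login page HTML."""
    match = re.search(r'name="csrf_token"\s+value="([^"]+)"', html)
    if match is None:
        raise ValueError("csrf_token not found; the page layout may have changed")
    return match.group(1)

token = extract_csrf_token(SAMPLE_LOGIN_PAGE)
# The extracted token is then sent back with the login request, e.g. as
# form data: {"username": ..., "password": ..., "csrf_token": token}
```

JMeter's Regular Expression Extractor and Gatling's check/saveAs serve the same purpose; either way, you need to understand which values in the traffic are dynamic.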
JMeter is the most popular load testing tool on our platform. It has been around since 1998 and has really gathered steam since version 2 in 2007. There are 10+ years of solid information out there to dip into. It also has a UI, which is handy if you're not really into writing code. You might also want to check out Ruby-JMeter, which we developed and open sourced. It's been very popular with customers wanting to express JMeter test plans in code.
We also love Gatling for its simplicity and powerful design. Tests are written in Scala, which has its own learning curve. Customers who use Gatling tend to have more of a, "I like writing code" background in general. However, if you have no strong preference either way (coding vs. UI) then we'd definitely recommend checking it out.
For Browser Level Users, we've been experimenting with Selenium for over a year now, and we have a strong customer base invested in load testing with it. It's popular because you're simulating user behavior in a real browser, which can be easier than protocol-level scripting. That reduction in scripting complexity comes at the cost of concurrency: a real browser consumes far more CPU and memory per simulated user than a protocol-level thread does. Stay tuned for some interesting progress we have made in this space.
Published at DZone with permission of Tim Koopmans . See the original article here.