Your Performance Testing Is Biased
A performance expert talks about how performance testing is biased and takes on the biases of those who design the tests.
The following article was written by Tim Koopmans.
Your performance testing is biased. It’s not a criticism; it’s a reality. In fact, all performance testing (and all testing, for that matter) is inevitably biased. Let’s confront the elephant in the room and take a hard look at what those biases involve—as well as how they ultimately impact the accuracy and efficacy of our performance testing.
As human beings, we’re all guilty of cognitive biases. A cognitive bias refers to “a systematic pattern of deviation from norm or rationality in judgment, whereby inferences about other people and situations may be drawn in an illogical fashion.” Cognitive biases are part of the human condition; we can only hope to be aware of our biases, not eliminate them. Even in the relatively narrow context of performance testing, we’re susceptible to selecting the story we find interesting, compelling, familiar, and/or confirmatory – and then using data to support the narratives we like.
Many people who work in IT think they know how to performance test: “Just apply the same load as production in a test environment, and then you know if the system will scale and we can go live.” But consider:
- How many assumptions and shortcuts do we take in building our load model?
- How many factors do we ignore in declaring equivalency between production and test environments?
- How thorough is our analysis going to be?
- What will we really decide to do – or not do – based on the results?
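One hidden assumption is worth making concrete: a load model that replays production's *average* request rate can understate the peaks that bursty real traffic produces. The sketch below (hypothetical numbers, assuming Poisson arrivals as a simple stand-in for production traffic) compares peak throughput under a perfectly uniform load model against the same average rate with natural bursts:

```python
import random

def peak_rps(arrival_times, window=1.0):
    """Max number of arrivals in any sliding window of `window` seconds."""
    arrivals = sorted(arrival_times)
    peak, start = 0, 0
    for end in range(len(arrivals)):
        while arrivals[end] - arrivals[start] > window:
            start += 1
        peak = max(peak, end - start + 1)
    return peak

random.seed(42)
mean_rps = 100   # average request rate, requests per second
duration = 60    # test duration in seconds

# Naive load model: perfectly uniform arrivals at the average rate.
uniform = [i / mean_rps for i in range(mean_rps * duration)]

# Poisson arrivals: same average rate, but with natural bursts.
t, poisson = 0.0, []
while t < duration:
    t += random.expovariate(mean_rps)
    poisson.append(t)

print("uniform peak:", peak_rps(uniform))   # roughly the average rate
print("bursty peak: ", peak_rps(poisson))   # typically noticeably higher
```

The point is not the specific numbers but the gap: a system sized for the uniform model's peak has never been tested against the bursts it will actually see, and the load modeller may never have questioned that assumption.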
The fields of software, hardware, network, and systems engineering are rife with common biases. Both these and the biases more closely associated with performance testing and engineering are worth reflecting on.
- What about social biases? Ever spent more time and energy discussing metrics that all members are already familiar with? That could be considered Shared Information bias.
- Ever bolstered or defended the status quo, or been on the other side of that? System Justification bias.
- How about memory biases? Forgetting information that can otherwise be found online or is recorded somewhere? Google Effect/Digital Amnesia.
- How about postmortem analysis of an event becoming less accurate because of interference from the post-event news of how your site crashed and pressure from management? Misinformation Effect.
- Some performance testers still push back against virtualized and cloud load injectors, well after modern computing has moved on. If you ask why, there is typically an anecdote of resource overcommitment at the virtualization host level, so now it is impossible to trust data that isn’t generated by physical machines (with their own operating system background processes, resource contention, etc). Anchoring/Focalism.
- Unprofitably spending time debating individual script steps and test data characteristics, instead of researching the load model? Debating a stray CPU spike instead of investigating a flood of errors mid-test? Bikeshedding/Law of Triviality.
- Continuing to run tests that don’t yield useful information, or trying to salvage obsolete test artifacts, because someone thinks a lot of valuable time was spent building them? Sunk Cost.
Published at DZone with permission of Cynthia Dunlop, DZone MVB.
Opinions expressed by DZone contributors are their own.