
Your Performance Testing is Biased


A performance expert talks about how performance testing is biased and takes on the biases of those who design the tests.


The following article was written by Tim Koopmans

Your performance testing is biased. It’s not a criticism; it’s a reality. In fact, all performance testing (and all testing, for that matter) is inevitably biased. Let’s confront the elephant in the room and take a hard look at what those biases involve—as well as how they ultimately impact the accuracy and efficacy of our performance testing.

As human beings, we’re all guilty of cognitive biases. A cognitive bias refers to “a systematic pattern of deviation from norm or rationality in judgment, whereby inferences about other people and situations may be drawn in an illogical fashion.” Cognitive biases are part of the human condition; we can only hope to be aware of our biases, not eliminate them. Even in the relatively narrow context of performance testing, we’re susceptible to selecting the story we find interesting, compelling, familiar, and/or confirmatory – and then using data to support the narratives we like.

Many people who work in IT think they know how to performance test: “Just apply the same load as production in a test environment, and then you know if the system will scale and we can go live.” But consider:

  • How many assumptions and shortcuts do we take in building our load model? (See the sketch after this list.)
  • How many factors do we ignore in declaring equivalency between production and test environments?
  • How thorough is our analysis going to be?
  • What will we really decide to do – or not do – based on the results?
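
To make the first question concrete, here is a minimal sketch of a load model written with Locust, a Python load testing tool. The endpoints, task weights, and think times below are hypothetical, and that is exactly the point: each value is an assumption about production behavior, and each is a place where bias can slip in unchallenged.

    # A minimal load model sketch using Locust (https://locust.io).
    # Every number and endpoint here is an assumption, not a fact.
    from locust import HttpUser, task, between

    class StorefrontUser(HttpUser):
        # Assumption: users pause 1-5 seconds between requests.
        wait_time = between(1, 5)

        @task(10)  # Assumption: browsing is ten times more common than checkout.
        def browse_catalog(self):
            self.client.get("/products")

        @task(1)
        def checkout(self):
            self.client.post("/checkout", json={"cart_id": "demo"})

Run it against a test host with something like locust -f loadmodel.py --host https://test.example.com (the file name and host are placeholders). Every line marked as an assumption deserves the same scrutiny as the results: where did the 10:1 ratio come from, and does the 1-5 second think time reflect real users or just a convenient default?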

The fields of software, hardware, network, and systems engineering are rife with common biases. These, along with biases more specific to performance testing and engineering, are worth reflecting on:

  1. What about social biases? Ever spent more time and energy discussing metrics that every team member is already familiar with, rather than surfacing information only some of you hold? That could be considered Shared Information bias.
  2. Ever bolstered or defended the status quo, or been on the other side of that? System Justification bias.
  3. How about memory biases? Forgetting information because you know it can be found online or is recorded somewhere? Google Effect/Digital Amnesia.
  4. How about a postmortem analysis of an incident becoming less accurate because of interference from post-event news of how your site crashed, and pressure from management? Misinformation Effect.
  5. Some performance testers still push back against virtualized and cloud load injectors, well after modern computing has moved on. If you ask why, there is typically an anecdote about resource overcommitment at the virtualization host level, so now it is impossible to trust data that isn’t generated by physical machines (with their own operating system background processes, resource contention, etc.). Anchoring/Focalism.
  6. Unprofitably spending time debating individual script steps and test data characteristics instead of researching the load model? Debating a stray CPU spike instead of investigating a flood of errors mid-test? Bikeshedding (the Law of Triviality).
  7. Continuing to run tests that don’t yield useful information, or trying to salvage obsolete test artifacts, because someone thinks a lot of valuable time was spent building them? Sunk Cost.
