Concerns About Performance Testing and Tuning
In a rather surprising discovery, only 6% of respondents are doing continuous performance testing, and no one is doing continuous UI testing.
To gather insights on the current and future state of performance testing and tuning, we talked to 14 executives involved in performance testing and tuning. We asked them, "What are your biggest concerns about performance testing and tuning today?" Here's what they told us:
Lack of Testing
- Lack of testing. No one is doing it. Only 6% are doing continuous performance testing. No one is doing continuous UI testing.
- 1) Given the current scale of systems and the drive towards continuous deployment, organizations are foregoing testing. It’s neither possible nor practical to test everything prior to a release. This can result in code being deployed that causes unexpected incidents. Continuous monitoring is a necessary component of performance testing, but it should not replace QA performance testing. 2) Another big concern is the failure to recognize that testing and performance metrics can be biased. Data can easily be manipulated or framed in a way that gets to the answers we want to see and hides what is really happening. Biases are a normal part of human nature and help us process large amounts of data, but understanding our biases when analyzing and conducting performance tests is critical to ensure the right decisions are made.
Lack of Integration
- LoadRunner and traditional performance centers of excellence are being disrupted. They’re being challenged by open source and new ecosystems. You can build everything you need in Amazon using a scripting language. Traditional load testing and engineering go away. Learn new things. Adapt. Give up your comfort zone.
- Performance software tools need to consider security and latency. One more incident like WannaCry and security will be top-of-mind for everyone.
- Right now, the area of testing tends to focus on code quality and on rooting out bugs, and performance testing ends up being more of an afterthought. Instead, organizations should strive to incorporate it as part of their overall testing strategy and software engineering process.
- My biggest concerns lie with the trade-offs that come with performance testing and tuning in general. Optimizing for one workload and scenario may impair another configuration, and this can lead to a series of changes that keep moving the balance back and forth. The problem never remains static for long. The most obvious example is tuning for low latency at the cost of throughput. A less obvious one is that power management testing and tuning focus primarily on low power consumption, particularly on embedded systems. We have seen multiple examples where the performance of a workload with low CPU utilization suffered as a result, which ironically may lead to higher power consumption overall when executing on servers. A similar problem is related to enabling new features. It's not unusual for a new feature to be introduced that affects critical paths of a piece of software, resulting in an overall regression even when that feature is not used in a particular configuration. Identifying and fixing this is a continual challenge, as it's very rare that removing the feature is a viable solution. Finally, I'm frequently asked, "How should I tune my system for HPC/database/fileserver/etc.?" without any specifics about the workload, and I'm expected to give correct advice. Consider HPC as an example. If the HPC workload works heavily with large sparse matrices, then an important consideration would be to look at memory usage and see whether transparent huge pages or the "fault around bytes" feature are preventing the important data from residing in memory or causing reclaim-related sources of interference. If so, one could consider disabling one or both of those features so it all fits in memory, particularly if the data being analyzed is not mapped directly from storage. However, other HPC workloads that work with large amounts of dense data may benefit heavily from exactly those same features, and disabling them may introduce a regression.
These examples are superficial, but I hope they highlight the dangers of giving generic performance-related advice without knowing more about the workload than an impossibly vague high-level description like "HPC." It does concern me that people ignore this problem, apply generic advice, and hope for the best.
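To make the sparse-matrix example above concrete, here is a minimal sketch of how one might inspect (and experimentally toggle) the two Linux memory features mentioned. The sysfs and debugfs paths shown are the standard locations on recent kernels, but they should be verified on your distribution, and any change should be benchmarked against your actual workload before being made persistent:

```shell
# Show the current transparent huge page policy; the bracketed value is
# the active one, e.g. "always [madvise] never".
cat /sys/kernel/mm/transparent_hugepage/enabled

# Show the fault-around window in bytes (requires debugfs to be mounted;
# larger values prefault more neighboring pages on a page fault).
cat /sys/kernel/debug/fault_around_bytes

# To experiment with disabling THP system-wide (as root; this setting is
# not persistent across reboots):
#   echo never > /sys/kernel/mm/transparent_hugepage/enabled
```

As the quote stresses, this is only a starting point for investigation, not generic advice: a dense-data workload may regress badly with these features disabled.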
Here’s who we spoke to:
- Dawn Parzych, Director of Product and Solution Marketing, Catchpoint Systems Inc.
- Andreas Grabner, DevOps Activist, Dynatrace
- Amol Dalvi, Senior Director of Product, Nerdio
- Peter Zaitsev, CEO, Percona
- Amir Rosenberg, Director of Product Management, Perfecto
- Edan Evantal, VP, Engineering, Quali
- Mel Forman, Performance Team Lead, SUSE
- Sarah Lahav, CEO, SysAid
- Antony Edwards, CTO, and Gareth Smith, VP Products and Solutions, TestPlant
- Alex Henthorn-Iwane, VP Product Marketing, ThousandEyes
- Tim Koopmans, Flood IO Co-founder & Flood Product Owner, Tricentis
- Tim Van Ash, SVP Products, Virtual Instruments
- Deepa Guna, Senior QA Architect, xMatters
Opinions expressed by DZone contributors are their own.