
Concerns About Performance Testing and Tuning

In a rather surprising discovery, only 6% of respondents are doing continuous performance testing, and no one is doing continuous UI testing.


To gather insights on the current and future state of performance testing and tuning, we talked to 14 executives involved in performance testing and tuning. We asked them, "What are your biggest concerns about performance testing and tuning today?" Here's what they told us:

Lack of Testing

  • Lack of testing. Hardly anyone is doing it: only 6% are doing continuous performance testing, and no one is doing continuous UI testing.
  • Given the current scale of systems and the drive towards continuous deployment, organizations are foregoing testing. It's neither possible nor practical to test everything prior to a release, which can result in deployed code causing unexpected incidents. Continuous monitoring is a necessary complement to performance testing, but it should not replace QA performance testing.
  • Another big concern is the failure to recognize that testing and performance metrics can be biased. Data can easily be manipulated, or framed in a way that shows the answers we want to see and hides what is really happening (see the sketch following this list). Biases are a normal part of human nature and help us process large amounts of data, but understanding our biases when analyzing and conducting performance testing is critical to ensuring the right decisions are made.
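
To make the framing concern concrete, here is a minimal sketch (Python, with invented numbers) of how the same latency sample can support two very different stories depending on which summary statistic leads the report:

    import statistics

    # Invented latency sample (milliseconds) from a hypothetical test run.
    # A few slow outliers hide behind a healthy-looking median.
    latencies_ms = [12, 14, 13, 15, 12, 13, 14, 900, 15, 13, 12, 850]

    print(f"mean:   {statistics.mean(latencies_ms):.1f} ms")    # ~156.9 ms
    print(f"median: {statistics.median(latencies_ms):.1f} ms")  # 13.5 ms -- looks great
    print(f"p95:    {sorted(latencies_ms)[int(len(latencies_ms) * 0.95) - 1]} ms")  # 850 ms
    print(f"max:    {max(latencies_ms)} ms")                    # 900 ms

A report that leads with the median is not fabricating anything, yet it hides a tail that the p95 and maximum expose immediately.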

Lack of Integration

  • Different aspects of an application are tested using different tools. The tools that are out there are great, but gluing everything into a single performance testing suite that tests everything at the same time, all while increasing the load on the system, is quite cumbersome (a minimal sketch of such a combined harness appears after this list). On top of that, when your company's product changes so much that the previous performance testing suite is inadequate, it can require coming up with a completely new way of testing.
  • The lack of integration between application teams and infrastructure teams.
  • While there are platforms that enable customers to get started with load testing early and continue it through a DevOps-type process, my biggest concern still lies with an over-dependence on the performance test "expert." Nobody can reasonably expect a single person to have expertise across the wide variety of platforms and technologies that are common to production systems these days. Performance testing and tuning is a shared responsibility.
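
As a rough illustration of the glue work described above, here is a minimal sketch in Python of one harness that ramps offered load while recording latency at each step; handle_request is a hypothetical stand-in for whatever call a real suite would make against the system under test:

    import statistics
    import time
    from concurrent.futures import ThreadPoolExecutor

    # Hypothetical stand-in for the system under test; a real suite would
    # issue an HTTP or RPC call against a staging environment here.
    def handle_request() -> float:
        start = time.perf_counter()
        sum(i * i for i in range(20_000))  # simulated work
        return (time.perf_counter() - start) * 1000  # latency in ms

    # Ramp the offered load and record latency at each step, so load
    # generation and measurement run through a single harness.
    for workers in (1, 4, 16, 64):
        with ThreadPoolExecutor(max_workers=workers) as pool:
            latencies = list(pool.map(lambda _: handle_request(), range(workers * 25)))
        print(f"{workers:>3} workers: "
              f"p50={statistics.median(latencies):7.3f} ms  "
              f"p95={statistics.quantiles(latencies, n=20)[18]:7.3f} ms")

A real suite would layer functional assertions on top of the same loop, which is exactly the integration work the respondents find cumbersome.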

Other

  • LoadRunner and traditional performance centers of excellence are being disrupted. They're being challenged by open source and new ecosystems. You can build everything you need in Amazon using scripting languages. Traditional load testing and engineering are going away. Learn new things. Adapt. Give up your comfort zone.

  • Performance software tools need to take security and latency into account. One more WannaCry, and security will be top of mind for everyone.

  • Right now, testing tends to focus on code quality and on rooting out bugs, and performance testing ends up being more of an afterthought. Instead, organizations should strive to incorporate it as part of their overall testing strategy and software engineering process.
  • My biggest concerns lie with the trade-offs that come with performance testing and tuning in general. Optimizing for one workload or scenario may impair another configuration, and this can lead to a series of changes that keep moving the balance back and forth; the problem never remains static for long. The most obvious example is tuning for low latency at the cost of throughput. A less obvious one is power management testing and tuning, which focuses primarily on low power consumption, particularly on embedded systems. We have seen multiple examples where the performance of a workload with low CPU utilization suffered as a result, which, ironically, may lead to higher power consumption overall when executing on servers.

    A similar problem is related to enabling new features. It's not unusual for a new feature to be introduced that affects critical paths of a piece of software, resulting in an overall regression even when that feature is not used in a particular configuration. Identifying and fixing this is a continual challenge, as it's very rare that removing the feature is a viable solution.

    Finally, I'm frequently asked, "How should I tune my system for HPC/database/fileserver/etc.?" without any specifics about the workload, and I'm expected to give correct advice. Consider HPC as an example. If the HPC workload works heavily with large sparse matrices, then an important consideration would be to look at memory usage and see whether transparent huge pages or the "fault around bytes" feature are preventing the important data from residing in memory or causing reclaim-related sources of interference. If so, one could consider disabling one or both of those features so it all fits in memory, particularly if the data being analyzed is not mapped directly from storage. However, other HPC workloads that work with large amounts of dense data may benefit heavily from exactly those same features, and disabling them may introduce a regression (a sketch of recording these knobs alongside benchmark runs follows below). These examples are superficial, but I hope they highlight the dangers of giving generic performance-related advice without knowing more about the workload than an impossibly vague, high-level description like "HPC." It concerns me that people ignore this problem, apply generic advice, and hope for the best.
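
The HPC example above shows why tuning advice has to be tied to the workload. One modest, practical step (a sketch, assuming Linux and its usual sysfs/debugfs paths, which vary by distribution and require appropriate permissions) is to capture the relevant kernel knobs alongside every benchmark run, so a regression can be tied to the settings it ran under:

    from pathlib import Path

    # Usual Linux locations; both can differ by distribution, and
    # fault_around_bytes needs debugfs mounted plus root privileges.
    THP = Path("/sys/kernel/mm/transparent_hugepage/enabled")
    FAULT_AROUND = Path("/sys/kernel/debug/fault_around_bytes")

    def current_setting(path: Path) -> str:
        try:
            return path.read_text().strip()
        except OSError as err:
            return f"unavailable ({err})"

    # Record these with each benchmark result, so "the sparse-matrix job
    # regressed" can be traced to a settings change instead of guessed at.
    print("transparent_hugepage:", current_setting(THP))  # e.g. "always [madvise] never"
    print("fault_around_bytes:  ", current_setting(FAULT_AROUND))

Recording the knobs does not decide which setting is right, but it keeps workload-specific tuning decisions auditable instead of anecdotal.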

