
The Primary Issue Affecting Performance Testing and Tuning


We asked industry leaders what they saw as the main obstacles to enacting effective performance testing practices.


To gather insights on the current and future state of performance testing and tuning, we talked to 14 executives involved in performance testing and tuning. We asked them, "What are the most common issues you see affecting performance testing and tuning?" Here's what they told us:

Best Practices

  • Incorrect use of network bandwidth across applications. Inconsistent matches between backend and front-end service APIs hurt responsiveness. Inefficient rendering. A lack of awareness, on the DevOps team's part, of what happens to an application in production. 
  • They do not have best practices in place for the fundamentals – for example, validating queries before they reach production rather than shipping untested queries (see the query-plan sketch after this list). Help the team establish and execute best practices to guarantee the high performance of the app. 
  • I’m a proponent of code maintenance, so while new feature development is great, it has to rest on a well-maintained platform. An application that has been developed for a few years will inevitably contain inefficient and unnecessary code. Performance testing might reveal bottlenecks caused by a no-longer-needed piece of code, or might drive the developers to think of other – more efficient – ways to implement certain parts of the code. 
  • Inability to get an integrated view of the infrastructure and the application. 
  • Issues with DNS providers, since all external communications rely on external DNS providers, and questions about how proxies behave given the geographic distribution of users and locations (a DNS-timing sketch follows this list). 
  • Customers configure the application in a way that is not optimized. We identify the optimal configuration and train users how to use the application. This varies based on the technologies the customers use (e.g., the latest mobile push is faster than SMS). 
  • 1) Deployment environment management: teams draw the wrong conclusions about UX because of environmental issues – the wrong version of Chrome, or the wrong version of the mobile device being tested. 2) Focus testing on the use cases that matter. When testing, start with the end user in mind. 
  • Use static analysis while developing code so problems are caught before reaching QA. Always look at memory and CPU, since they are leading indicators of bad performance or of resources not being used as efficiently as possible (see the sampling sketch after this list). Check load balance – customers, where they’re from, peaks. There are good tools available to check traffic. 
  • The most common scenario is tuning the system for low latency instead of throughput. Of those, the most common is low-latency CPU responses or low-latency storage, with a low-latency network occasionally being an issue. Best practices normally identify the main parameters that need tuning, but usually there is a bit of additional work to specify the parameters for a specific workload, and on occasion it's specific to the machine. A recurring topic is tuning the CPU frequency management of the system (see the governor check after this list). This tends to be straightforward once it is identified as required; it's not something that can be disabled by default, as power consumption may be too high.
  • Lack of preparation is a common issue. Thankfully, with a cloud-based approach the impact of this is reduced, as customers can start and stop their test efforts on demand. During test execution, many customers have not anticipated or planned for the size of the test being conducted. This might mean that mechanisms like intrusion detection or DDoS prevention are triggered, hampering the test, or it may simply mean they exhaust available capacity in a quick succession of tests (see the ramped-load sketch after this list). These aren't necessarily bad outcomes, as they help explain system behavior, but we do see customers caught off guard by it.
  • Understanding the workload being generated, through to the observable metrics in the system, can also be an issue for less experienced teams. An over-reliance on single metrics or narrow views of the system under test can compound these types of issues. The best success is enjoyed when one understands end-to-end system performance; more often than not, the black box left out of scope – for example, a load balancer – becomes the primary culprit in unexplained poor performance. A common issue we see is a single person or entity being nominated as the performance expert. Performance has such a wide impact these days that a multi-skilled team, or the ability to engage with a wider team, means you will generally achieve better outcomes from your testing and tuning.
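
To the point above about validating queries before production, here is a minimal sketch using Python's built-in sqlite3 module. The schema, query, and index are illustrative assumptions, not details from the interviews; the same idea applies to EXPLAIN in other databases.

```python
# A minimal sketch: check a query's plan before it ships, e.g. to confirm
# it uses an index instead of a full table scan. Schema and query are
# illustrative, not from the article.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")

def explain(query, params=()):
    """Print SQLite's plan for `query` without running it for real."""
    for row in conn.execute("EXPLAIN QUERY PLAN " + query, params):
        print(row)

explain("SELECT * FROM users WHERE email = ?", ("a@example.com",))
# Reports a full SCAN of users: no index covers email.

conn.execute("CREATE INDEX idx_users_email ON users(email)")
explain("SELECT * FROM users WHERE email = ?", ("a@example.com",))
# Now reports a SEARCH using idx_users_email.
```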
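
For the bullet on DNS providers, one way to see whether external resolution is the slow link is to time it in isolation. This sketch uses only the standard library; the hostname is a placeholder, and the OS resolver cache can hide cold-lookup cost after the first attempt.

```python
# A minimal sketch: time DNS resolution separately from the rest of the
# request path, so a slow external provider shows up on its own.
import socket
import time

def average_dns_lookup(hostname, attempts=5):
    """Average seconds spent resolving `hostname` (no connection is made)."""
    total = 0.0
    for _ in range(attempts):
        start = time.perf_counter()
        socket.getaddrinfo(hostname, 443)  # resolution only
        total += time.perf_counter() - start
    return total / attempts

if __name__ == "__main__":
    print(f"avg DNS lookup: {average_dns_lookup('example.com') * 1000:.1f} ms")
```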
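
For the bullet on watching memory and CPU as leading indicators, here is a minimal sampling sketch. It assumes the third-party psutil package (pip install psutil), and the alert thresholds are illustrative, not values from the interviews.

```python
# A minimal sketch: sample CPU and memory over time and flag readings
# above illustrative thresholds.
import psutil

CPU_THRESHOLD = 85.0  # percent, illustrative
MEM_THRESHOLD = 90.0  # percent, illustrative

def sample(interval=1.0, samples=10):
    for _ in range(samples):
        cpu = psutil.cpu_percent(interval=interval)  # blocks for `interval` seconds
        mem = psutil.virtual_memory().percent
        flag = "  <-- investigate" if cpu > CPU_THRESHOLD or mem > MEM_THRESHOLD else ""
        print(f"cpu={cpu:5.1f}%  mem={mem:5.1f}%{flag}")

if __name__ == "__main__":
    sample()
```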
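
For the bullet on CPU frequency management, a quick read-only check of the Linux scaling governor often identifies the issue. This sketch assumes the standard Linux sysfs layout; a "powersave" governor on a latency-sensitive host is the kind of finding it surfaces.

```python
# A minimal, read-only sketch for Linux: list each CPU's frequency scaling
# governor from sysfs. Prints nothing on systems (e.g. many VMs) that do
# not expose cpufreq.
import glob

def read_governors():
    pattern = "/sys/devices/system/cpu/cpu*/cpufreq/scaling_governor"
    for path in sorted(glob.glob(pattern)):
        cpu = path.split("/")[5]  # e.g. "cpu0"
        with open(path) as f:
            print(f"{cpu}: {f.read().strip()}")

if __name__ == "__main__":
    read_governors()
```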
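
For the bullet on planning test size, ramping load in stages instead of starting at full volume lets protections such as rate limiting or DDoS prevention surface early and cheaply. A minimal sketch assuming the third-party requests package; the URL and stage sizes are illustrative.

```python
# A minimal sketch: increase concurrency in stages and watch for the point
# where protections fire or capacity runs out.
from concurrent.futures import ThreadPoolExecutor
import requests

def hit(url):
    try:
        return requests.get(url, timeout=10).status_code
    except requests.RequestException as exc:
        return type(exc).__name__

def ramp(url, stages=(5, 10, 20)):
    for workers in stages:
        with ThreadPoolExecutor(max_workers=workers) as pool:
            results = list(pool.map(hit, [url] * workers))
        # A burst of 403s or 429s here usually means a protection layer fired.
        print(f"{workers:3d} concurrent: {results}")

if __name__ == "__main__":
    ramp("https://example.com/")
```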

Other

  • People are tired of alerts showing a problem when there really isn’t one. This leads to alert fatigue and increases the likelihood that an important alert will be missed. Being overwhelmed with alerts that don’t seem to provide value is problematic.

  • The divergence of technology across on-premises, cloud, SaaS, and PaaS environments; how to monitor while running loads; complex architectures with many microservices. We provide a single agent that monitors all of them, transforming data into answers.
  • As cloud adoption continues to progress at an ever-increasing rate, it’s critical that organizations remember to measure performance from the edge — that is, where the end user is located — and not simply from within the cloud itself.
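
To the last point, a synthetic check run from near the end user, rather than from inside the cloud, is the simplest way to measure from the edge. A minimal sketch assuming the third-party requests package; the URL is a placeholder.

```python
# A minimal sketch: run this from a machine near the end user to capture
# what the edge actually sees, not what the cloud sees internally.
import requests

def measure(url, attempts=5):
    times = []
    for _ in range(attempts):
        resp = requests.get(url, timeout=10)
        times.append(resp.elapsed.total_seconds())  # time until response headers arrive
    print(f"{url}: min={min(times) * 1000:.0f} ms  max={max(times) * 1000:.0f} ms")

if __name__ == "__main__":
    measure("https://example.com/")
```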

Here’s who we spoke to:


Topics:
performance

Opinions expressed by DZone contributors are their own.
