To gather insights on the current and future state of performance testing and tuning, we talked to 14 executives working in the field. We asked them, "What are the most significant changes to performance testing and tuning in the past year?" Here's what they told us:
- The two most significant changes are the cloud and the move to continuous delivery. IT departments assumed that when applications moved to the cloud, the need for monitoring and testing would move to the cloud vendor as well. They are now realizing that they are still held responsible for incidents and performance, and need a way to identify when problems occur. The increased move toward continuous delivery is also forcing companies to rethink their testing strategy: testing cycles are being shortened, which can result in more issues being found in production. Having systems in place to continuously test and monitor production can help offset the shorter test cycles.
- DevOps has driven continuous delivery, while the cloud has driven more interest in end-to-end testing with a user-journey focus. User acceptance teams now sit in the product organization to test from the user's perspective.
- Performance engineering is being disrupted by containers, cloud, PaaS, AI, and serverless. Capacity engineers and performance engineers need to learn the importance of using monitoring tools – ideally the same tools in pre-production and production, so that you become familiar with them and trust the results they provide. Work with operations to learn from production.
- Cloud-based infrastructure gives us a never-before-realized economy of scale, where we can load test production-sized systems with production-sized load and beyond. The throwaway nature of cloud-based resources means that we can quickly scale to simulate demand, in response to questions that we ask through the course of load testing. I would say that cloud-based load testing means the testing itself has become more exploratory. I am seeing more scenarios generated which are dictated by results observed through testing itself. This is a departure from the more statically defined performance test strategies of the past. In terms of tuning, the application performance management space has really blossomed. There is such a wide variety of tools and platforms available to customers. We are no longer locked into one toolset or approach. This also extends to the ways in which we generate load. Open source tools like JMeter and Gatling are increasingly popular. There are plenty of commercial tools and platforms available too. A competitive market gives customers plenty of options, and I would say performance testing is much more accessible than it was a decade ago.
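The exploratory load-testing idea above can be sketched in a few lines of plain Python (standard library only) rather than JMeter or Gatling: generate concurrent requests, read the latency percentiles, and let the numbers dictate the next scenario. The stub server, request count, and concurrency below are all hypothetical, chosen only to keep the sketch self-contained.

```python
# Minimal, illustrative load-generation sketch (not a JMeter/Gatling replacement).
import http.server
import threading
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

def start_stub_server():
    """Serve a tiny 200 response locally so the sketch needs no external target."""
    class Handler(http.server.BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok")
        def log_message(self, *args):  # silence per-request logging
            pass
    server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

def timed_get(url):
    """Issue one GET and return its wall-clock latency in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        resp.read()
    return time.perf_counter() - start

def run_load(url, requests=50, concurrency=10):
    """Fire `requests` GETs with `concurrency` workers; report percentiles."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = sorted(pool.map(timed_get, [url] * requests))
    pct = lambda p: latencies[min(len(latencies) - 1, int(p * len(latencies)))]
    return {"count": len(latencies), "p50": pct(0.50), "p95": pct(0.95)}

server = start_stub_server()
stats = run_load("http://127.0.0.1:%d/" % server.server_address[1])
server.shutdown()
```

In an exploratory session, the `p95` figure from one run would drive the parameters of the next (more load, a different scenario mix), mirroring the results-driven approach described above.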
- Let's look at some of the changes related to complexity, including:
  - Shift-left concerns around quality, performance, and security testing.
  - Responsive web sites downloading the wrong images for the device and making incorrect use of bandwidth.
  - Repetitive downloads of files.
  - Controlling third parties, of which there are 12 to 15 in every app.
  - A single codeline is more efficient at the end of the day; unite all screens with a single codeline.
- In the last 12 months we've seen hockey-stick growth in powerful client-side apps, thanks to HTML5 and the UX expectations set by apps and sensors (e.g., touch and facial identification). Streamline the login and key transactions: we did this for one bank in EMEA and saw 80% adoption of the app in three days. Voice interfaces are also arriving – Bank of America is introducing Erica, a voice-driven personal assistant. We ensure the sensors between the backend and the APIs on the front end are up to speed to provide the best UX and performance. HTML5 replaces manual testing with photos – it's faster and eliminates errors. Responsiveness and friendliness of the app drive adoption, and the younger generation is willing to give up privacy for a smooth and easy experience.
- There are multiple choices of open source databases. Use multiple databases to accomplish your goals – MySQL, Redis, ElasticSearch, Hadoop. Open source databases have also grown more complex: MySQL is far more feature-rich than it was 10 years ago, and those features drive a non-linear increase in complexity. Developers are just scratching the surface of what a database can do and how performance can be optimized. Experts have more magic up their sleeves to use database features most efficiently. Cloud and DBaaS cover the operational basics; more advanced adjustments need to be made to the architectural design and indexes to maximize opportunities and ROI.
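The point about indexes being one of the higher-ROI adjustments can be illustrated with a small sketch. This uses Python's bundled sqlite3 instead of MySQL for portability, and an invented `orders` table; it only demonstrates the general idea that the same query flips from a full scan to an index search once a suitable index exists.

```python
# Hedged sketch: how an index changes a query plan (sqlite3 stands in for MySQL).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
)
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 100, i * 1.5) for i in range(1000)],
)

query = "SELECT total FROM orders WHERE customer_id = 42"

# Without an index on customer_id, the planner falls back to a full table scan.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchone()[-1]

conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchone()[-1]

# plan_before mentions a SCAN of orders; plan_after mentions the new index
# (exact wording varies by SQLite version).
```

The same before/after comparison with `EXPLAIN` is the usual starting point for the architectural and index tuning the respondent describes.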
- The complexity and scale of applications and infrastructure.
- An increase in the legs of communication: branch offices are now communicating directly. There's a continuous upswing in communications, without a tidy wall between applications and operations.
- The biggest changes to performance testing and tuning are new capabilities that connect the dots for the IT administrator and make it easier and simpler to understand how to use all this data to transform IT strategy. In the case of Nerdio, for example, we focus on optimizing the IT environment. We keep our eyes and ears to the ground at the user experience level to not only improve performance but ultimately save organizations a lot of time and money.
- Instead of focusing on the same base scenario to produce version-to-version response times, we shifted to a heavily loaded environment specifically designed to produce the bottlenecks we might find as load increases.
- We used to rely on getting customer databases, then realized these were just snapshots. We now focus on best/worst/average use-case scenarios to define the load profiles, and we create a simulation environment similar to the traffic in production.
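The best/worst/average idea can be sketched as a weighted scenario mix rather than a replay of a single customer snapshot. The scenario names and weights below are hypothetical, not taken from the interview:

```python
# Hedged sketch: build a load profile as a weighted mix of scenarios.
import random

# Illustrative weights: mostly average traffic, with some best/worst case.
scenarios = {"average": 0.70, "best_case": 0.15, "worst_case": 0.15}

def sample_profile(n, seed=1):
    """Draw n virtual-user scenarios according to the weighted profile."""
    rng = random.Random(seed)
    names = list(scenarios)
    weights = [scenarios[s] for s in names]
    return rng.choices(names, weights=weights, k=n)

profile = sample_profile(1000)
share = {name: profile.count(name) / len(profile) for name in scenarios}
```

Each virtual user in the simulated environment would then execute the scenario it was assigned, so the aggregate traffic approximates the production mix rather than any one snapshot.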
- Security is taking a toll on performance. Keep standards high while still delivering a great customer experience, regardless of where you or the customer are located.
- The question is broad, as one person's important may be another person's irrelevant, but I can answer based on the experiences of my team. The most significant change within the last year was using more advanced statistical methods for detecting regressions. Performance testing and tuning are rarely straightforward: results may depend on a specific machine, or the workload in question might be highly variable, bi-modal, or multi-modal. For example, a workload may have two or more distinct levels of performance, where basic methods cannot reliably detect a change. We now use a variety of methods for detecting regressions, depending on the workload. It's time-consuming to do this for each workload, but it tends to pay off. In terms of tuning, an important realization was how modern hardware and kernel developments have altered the interaction between workloads and CPU frequency management. CPU scheduler changes to improve latency, the dispatching of work to threads (either application or kernel threads), and advances in how individual cores control their power usage have created situations where a workload can be busy while each individual core has relatively low utilization. This can lead to a lower CPU frequency being used, and the workload suffers overall. Unfortunately, it's not a case of just disabling or limiting CPU frequency management, as that can prevent a CPU from using the highest possible speed, which limits peak performance. Understanding this problem, modifying the software to cope, and updating tuning recommendations is a challenge.
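One way to make the multi-modal point concrete – a hedged sketch of the general technique, not the team's actual method – is a permutation test on the Kolmogorov-Smirnov statistic, which compares whole distributions rather than means and so can flag a regression that a simple mean comparison would miss:

```python
# Illustrative regression detector (pure stdlib, synthetic data).
import random

def ks_stat(a, b):
    """Maximum distance between the empirical CDFs of samples a and b."""
    a, b = sorted(a), sorted(b)
    values = sorted(set(a) | set(b))
    cdf = lambda s, x: sum(1 for v in s if v <= x) / len(s)
    return max(abs(cdf(a, x) - cdf(b, x)) for x in values)

def permutation_p_value(a, b, rounds=200, seed=7):
    """Share of label shufflings whose KS statistic is at least as extreme."""
    rng = random.Random(seed)
    observed = ks_stat(a, b)
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(rounds):
        rng.shuffle(pooled)
        if ks_stat(pooled[:len(a)], pooled[len(a):]) >= observed:
            hits += 1
    return hits / rounds

# Synthetic benchmark results: a baseline build, a clearly slower build,
# and an unchanged build, all with run-to-run noise.
rng = random.Random(0)
baseline = [rng.gauss(100, 5) for _ in range(40)]
regressed = [rng.gauss(100, 5) + 15 for _ in range(40)]
same = [rng.gauss(100, 5) for _ in range(40)]

p_regressed = permutation_p_value(baseline, regressed)  # expect small p
p_same = permutation_p_value(baseline, same)            # expect larger p
```

Because the statistic looks at the entire CDF, a shift in only one mode of a bi-modal workload still moves it, which is the failure mode of mean-based checks the respondent describes.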
Here’s who we spoke to:
- Dawn Parzych, Director of Product and Solution Marketing, Catchpoint Systems Inc.
- Andreas Grabner, DevOps Activist, Dynatrace
- Amol Dalvi, Senior Director of Product, Nerdio
- Peter Zaitsev, CEO, Percona
- Amir Rosenberg, Director of Product Management, Perfecto
- Edan Evantal, VP, Engineering, Quali
- Mel Forman, Performance Team Lead, SUSE
- Sarah Lahav, CEO, SysAid
- Antony Edwards, CTO, and Gareth Smith, VP Products and Solutions, TestPlant
- Alex Henthorn-Iwane, VP Product Marketing, ThousandEyes
- Tim Koopmans, Flood IO Co-founder and Flood Product Owner, Tricentis
- Tim Van Ash, SVP Products, Virtual Instruments
- Deepa Guna, Senior QA Architect, xMatters