Report: How The World’s Top Organizations Test
Learn how the world’s largest companies test the software that their business (and the world) relies on.
With every business becoming a digital enterprise, the ability to rapidly deliver reliable applications is now a core strategic advantage. Are Fortune 500 and equivalent organizations prepared for the digital race ahead? The answer may lie in their software testing process, which can be either a curse or a catalyst for speedy innovation.
Although there is no shortage of reports on overall software testing trends, the state of testing at the organizational level, particularly at "household name" brands, has historically been a black box. On the one hand, these large organizations often have access to resources far beyond the reach of smaller businesses (for example, commercial as well as open-source software, access to consultants and services, etc.). But on the other hand, they face daunting challenges such as:
- Complex application stacks that involve an average of 900 applications. Single transactions touch an average of 82 different technologies ranging from mainframes and legacy custom apps to microservices and cloud-native apps.
- Deeply entrenched manual testing processes that were designed for waterfall delivery cadences and outsourced testing, not for Agile, DevOps, and the drive toward "continuous everything."
- Demands for extreme reliability. Per IDC, an hour of downtime in enterprise environments can cost from $500K to $1M. “Move fast and break things” is not an option in many industries.
How well are top organizations’ testing processes prepared for the digital race ahead? And what’s still required for them to achieve the “holy grail” of sustainable continuous testing within an automated DevOps pipeline?
Introducing the First Annual Enterprise Application Testing Benchmark
To shed light on how industry leaders test the software that their business (and the world) relies on, Tricentis has released the first annual How the World’s Top Organizations Test report. This data was collected through one-on-one interviews with senior quality managers and IT leaders representing multiple teams at the world’s top organizations—Fortune 500 (or global equivalent) and major government entities across the Americas, Europe, and Asia-Pacific.
We’re protecting everyone’s privacy here, but just imagine the companies you interact with as you drive, work, shop, eat and drink, manage your finances…and take some well-deserved vacations after all of that. Given the average team size and number of teams represented, we estimate that this report represents the activities of tens of thousands of individual testers at these leading organizations.
Here are some specific takeaways from the report:
Automation Without Stabilization
The average test automation rate (39%) is relatively high, but so are false positives (22%). This is common for early-stage test automation efforts that lack stabilizing practices like test data management and service virtualization.
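One common stabilizing tactic behind those numbers is separating genuine failures from flaky ones before anyone triages them. As a minimal sketch (the test names and rerun data below are hypothetical, not from the report), a failure that passes on rerun is usually noise from unstable test data or environments, while a failure on every run is more likely a real defect:

```python
def classify_failures(results):
    """Given {test_name: [pass/fail booleans across reruns]}, split genuine
    failures from likely false positives (flaky: failed once, passed on rerun)."""
    genuine, flaky = [], []
    for name, runs in results.items():
        if all(runs):
            continue  # consistently passing: nothing to triage
        if any(runs):
            flaky.append(name)    # mixed results: likely environment or test-data noise
        else:
            genuine.append(name)  # failed every rerun: likely a real defect
    return genuine, flaky

# Hypothetical rerun history for three automated tests.
results = {
    "checkout_total": [False, False, False],  # real regression
    "login_via_api":  [False, True, True],    # flaky
    "search_filters": [True, True, True],
}
genuine, flaky = classify_failures(results)
print(genuine, flaky)  # ['checkout_total'] ['login_via_api']
```

Test data management and service virtualization attack the root cause of the flaky bucket by making the inputs and dependencies deterministic in the first place.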
Tests Aren’t Aligned to Risks
Requirements coverage (63%) is reasonably high, but risk coverage is low (25%). Likely, teams are dedicating the same level of testing resources to each requirement rather than focusing their efforts on the functionality that’s most critical to the business.
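The gap between the two metrics is easy to see with a worked example. Requirements coverage counts every requirement equally; risk coverage weights each by its business criticality, so skipping one high-risk requirement can crater the score. The requirement names and risk weights below are invented for illustration:

```python
def coverage(requirements, tested):
    """requirements: {name: risk_weight}; tested: set of tested requirement names.
    Returns (requirements coverage %, risk-weighted coverage %)."""
    covered = [r for r in requirements if r in tested]
    req_cov = 100.0 * len(covered) / len(requirements)
    total_risk = sum(requirements.values())
    risk_cov = 100.0 * sum(requirements[r] for r in covered) / total_risk
    return req_cov, risk_cov

# 3 of 4 requirements are tested, but the untested one carries
# most of the business risk.
reqs = {"payments": 70, "profile": 10, "search": 10, "settings": 10}
req_cov, risk_cov = coverage(reqs, {"profile", "search", "settings"})
print(f"requirements coverage {req_cov:.0f}%, risk coverage {risk_cov:.0f}%")
# requirements coverage 75%, risk coverage 30%
```

A team tracking only the first number would look healthy while leaving its most business-critical functionality untested.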
Dev and Test Cycles Are Out of Sync
The average test cycle time (23 days) is shockingly ill-suited for today’s fast-paced development cycles (87% of which were 2 weeks or less by 2018). With such lengthy test cycles, testing inevitably lags behind development.
Quality Is High (Among Some)
The reported defect leakage rate (3.75%) is quite impressive (typically, <10% is considered acceptable, <5% is good, and <1% is exceptional). However, only about 10% of respondents tracked defect leakage, so the overall rate is likely higher. The organizations tracking this metric tend to be those with more mature processes.
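For teams that don't yet track it, the metric is straightforward: the share of all defects that escaped testing and surfaced downstream (e.g., in UAT or production). A minimal sketch, with the defect counts invented to reproduce the report's 3.75% average and the thresholds taken from above:

```python
def defect_leakage_rate(leaked_defects: int, caught_defects: int) -> float:
    """Percentage of all defects that escaped testing and were found downstream."""
    total = leaked_defects + caught_defects
    if total == 0:
        return 0.0
    return 100.0 * leaked_defects / total

def rate_quality(rate_pct: float) -> str:
    # Thresholds cited above: <10% acceptable, <5% good, <1% exceptional.
    if rate_pct < 1:
        return "exceptional"
    if rate_pct < 5:
        return "good"
    if rate_pct < 10:
        return "acceptable"
    return "needs improvement"

# 3 defects leaked, 77 caught before release -> 3.75%.
rate = defect_leakage_rate(3, 77)
print(f"{rate:.2f}% -> {rate_quality(rate)}")  # 3.75% -> good
```

The catch noted in the report applies here too: the metric is only as trustworthy as the team's discipline in attributing downstream defects back to the release that leaked them.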
Organizations have made good strides mastering the foundational elements of testing success (adopting appropriate roles, establishing test environments, and fostering a collaborative culture).
“Continuous Everything” Isn’t Happening… Yet
Few are achieving >75% test automation rates or adopting stabilizing practices like service virtualization and test data management. Given that, limited CI/CD integration isn't surprising. All three are high on organizations' priority lists, though.
The greatest gaps between leaders and laggards are in the areas of the percentage of automated tests executed each day, risk coverage, defect leakage into UAT, and test cycle time.
Top Improvement Targets
The areas where organizations hope to make the greatest short-term improvements (within 6 months) are risk coverage, defect leakage into UAT, false-positive rate, and test cycle time.
About the Report
The complete 18-page benchmark report is available now. It covers:
- How leaders and laggards differ on key software testing and quality metrics.
- Where most organizations stand in terms of CI/CD integration, test environment strategy, and other key process elements.
- What test design, automation, management, and reporting approaches are trending now.
- Organizations’ top priorities for improving their testing in 2021.
The full report includes charts previewing these and other data points.
Throughout the year, we'll continue sharing different analyses of our data set. Next up: reports on trends by industry, technology, and region, as well as interesting correlations across metrics, practices, and delivery methods.
In 2022, you can expect the second edition of the report, which will include year-over-year trends as well as the latest results.
Published at DZone with permission of the author, a DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.