Quality @ Speed: Meet COVID-19
By the end of 2019, quality at speed had graduated from buzzword to business as usual at most organizations. But then came COVID-19.
This article was written by Wolfgang Platz.
Think back to 2015. The U.S. economy was in the early phases of a growth spurt that would continue for years. Ninety-five percent of organizations were practicing Agile, to some extent (State of Agile, VersionOne). And the proportion of the IT budget allocated to QA and testing had just increased from 18% to 26% (World Quality Report, Capgemini).
“Quality at speed” became the catchphrase of the year in the testing world. With all the buzz around DevOps and Agile, it was clear that speed mattered. In reality, most teams weren’t quite there yet. Agile adoption was still low (less than 50% of teams within each organization actually practiced it) and maturity was even lower (only 17% of those practicing it reported mature adoptions). Still, the writing on the wall was clear. Leaders wanted more application development, faster — so testers should at least prepare for the impending storm.
From that point forward, organizations across almost every industry progressed to a sweet spot where time-to-market pressure was relatively high and cost pressure was moderate. By the end of 2019, quality at speed had graduated from buzzword to business as usual at most organizations.
But then came COVID-19. Now, we’re in the midst of a global pandemic. Every business, every non-profit, and every governmental agency is impacted — but in different ways.
Speed expectations suddenly shifted into overdrive for an unlikely assortment of organizations, including government agencies and non-profits, healthcare and pharmaceutical companies, online retailers, online conferencing providers, and telecoms. Yes, everyone had already been working to accelerate their software delivery processes. And many enterprise organizations had made great strides toward scaling the adoption of Agile and DevOps. But very few of these organizations were previously delivering applications at the extreme speeds required right now.
For another group, cost reduction suddenly became the top priority. Airlines, hotels, and other travel and leisure organizations experienced such a dramatic reversal of fortune that speed became a non-issue. The focus shifted to increasing cost efficiency in an effort to offset some of these staggering losses.
Assuming that high quality cannot be compromised, this leaves both of these groups with distinctly different — yet equally daunting — challenges from a quality perspective.
The organizations in the first group suddenly needed to build, test, and release applications at the speed of a startup — but with the added burden of legacy systems, complex application landscapes, and intense regulatory scrutiny. Quality at speed remains the goal, but the “speed” part is much, much faster.
The organizations in the second group need to determine how to continue meeting strict quality expectations with a significantly slimmer testing budget. Quality at speed isn’t so important right now. Speed gets replaced by the other corner of the classic cost-quality-speed triangle, making the new mantra “quality at cost.”
It’s hard to find good news in the current situation. But, from the software testing perspective at least, there’s some consolation. Whether you’re facing unprecedented time-to-market pressure or extreme cost pressure, the same core “test efficiency” strategies will help you either way.
Here’s a quick overview of ways you can test more efficiently at different steps of the software testing process.
Selecting and Designing Test Cases
I’m addressing these two steps together because they both face a common challenge: intuition. Most modern business systems are insanely complex — so complex that it’s just not feasible for most testers to intuitively identify the best set of tests to cover a specific test need. Having a more systematic way to find the smallest — yet most powerful — set of tests will save you both time and money.
For designing test cases for progression testing, methodologies like linear expansion and pairwise testing can help, and augmented intelligence can dramatically simplify them for you.
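To make the pairwise idea concrete, here is a minimal sketch. The checkout-form parameters are invented, and the greedy algorithm is a toy version of the technique rather than any particular tool's implementation:

```python
from itertools import combinations, product

# Hypothetical parameter model for an illustrative checkout form.
# The parameter names and values are assumptions, not from the article.
parameters = {
    "browser":  ["Chrome", "Firefox", "Safari"],
    "payment":  ["card", "paypal", "invoice"],
    "currency": ["USD", "EUR", "GBP"],
}
names = list(parameters)

# Every value pair (across every pair of parameters) that must appear
# together in at least one test case.
required_pairs = {
    ((a, va), (b, vb))
    for a, b in combinations(names, 2)
    for va, vb in product(parameters[a], parameters[b])
}

def pairs_in(case):
    """All parameter/value pairs exercised by one candidate test case."""
    items = list(zip(names, case))
    return set(combinations(items, 2))

candidates = list(product(*parameters.values()))
suite, uncovered = [], set(required_pairs)

# Greedy selection: repeatedly pick the candidate that covers the most
# still-uncovered pairs until every required pair is covered.
while uncovered:
    best = max(candidates, key=lambda c: len(pairs_in(c) & uncovered))
    suite.append(dict(zip(names, best)))
    uncovered -= pairs_in(best)

print(f"Exhaustive suite: {len(candidates)} cases")  # 27
print(f"Pairwise suite:   {len(suite)} cases")       # roughly 9-10
```

For three parameters with three values each, the exhaustive suite needs 27 cases, while a pairwise suite covers every two-way combination in roughly nine. Dedicated tooling (and the augmented intelligence mentioned above) handles constraints, larger models, and higher-strength combinations far better than this toy version.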
For selecting what test cases should be run for regression testing, consider change impact analysis. Some approaches identify which objects are impacted by a given change, then help you narrow the test scope to the impacted objects. This often reduces the testing scope by 15%. Not bad, but you can do much better.
AI-powered change impact analysis pinpoints exactly what needs to be tested in a given update, based on whether the changes put an object at risk. With this approach, you can reduce testing efforts by 85% or more. Not only is this approach faster and cheaper, but it’s also much more efficient at uncovering defects. The number of defects that hit production is reduced by 75% to 95%, sometimes even 100% (no defects in production).
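As a rough illustration of the idea (not the algorithm of any specific product), change-impact-based selection boils down to mapping changed objects to the tests that touch them and keeping only the tests whose impacted objects carry meaningful risk. The change list, coverage map, and risk weights below are invented inputs:

```python
# Invented inputs: real tools derive the change list from version control,
# the coverage map from test execution data, and risk weights from usage
# and failure analytics.
changed_objects = {"PricingService", "InvoiceTemplate"}

# Which application objects each regression test touches.
test_coverage = {
    "test_checkout_total": {"PricingService", "CartController"},
    "test_invoice_pdf":    {"InvoiceTemplate"},
    "test_login_flow":     {"AuthService"},
    "test_discount_rules": {"PricingService", "DiscountEngine"},
}

# How likely a change to each object is to cause a production defect.
risk = {"PricingService": 0.9, "InvoiceTemplate": 0.4, "AuthService": 0.7}

def impacted_tests(changes, coverage, risk_weights, threshold=0.3):
    """Keep only tests touching a changed object whose risk exceeds the threshold."""
    selected = []
    for test, objects in coverage.items():
        score = max((risk_weights.get(o, 0.0) for o in objects & changes), default=0.0)
        if score >= threshold:
            selected.append((test, score))
    return sorted(selected, key=lambda item: -item[1])

for test, score in impacted_tests(changed_objects, test_coverage, risk):
    print(f"{test}: impact score {score:.1f}")
# test_login_flow drops out entirely; the two PricingService tests rank highest.
```

In this toy example, the login test drops out of scope and the two tests covering the high-risk changed object rise to the top, which is exactly the kind of narrowing that cuts regression effort without letting risky changes slip through.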
Automation and Execution
Here, technical scripting is the top time and cost drain. Some level of maintenance is required to keep scripts running, and this maintenance will cut into your test efficiency. One way to combat this challenge is to take a model-based approach to test automation. Such approaches have been field-tested for years and proven to solve the test maintenance nightmare that has doomed all too many test automation initiatives.
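As a simplified sketch of what the model-based idea buys you (shown here with Selenium in a page-object flavor; the element locators and URL are placeholders, and commercial model-based tools go considerably further), test steps reference business-level names while the technical locators live in a single model:

```python
# Sketch only: element locators and the URL are placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By

# The "model": one place that maps business-level names to technical locators.
LOGIN_MODEL = {
    "username": (By.ID, "user-input"),
    "password": (By.ID, "pass-input"),
    "sign_in":  (By.CSS_SELECTOR, "button[type='submit']"),
}

def fill(driver, model, field, value):
    driver.find_element(*model[field]).send_keys(value)

def click(driver, model, field):
    driver.find_element(*model[field]).click()

def test_login(driver):
    # The test reads like the business flow and never mentions locators,
    # so a renamed ID means touching LOGIN_MODEL once, not every test.
    driver.get("https://example.com/login")
    fill(driver, LOGIN_MODEL, "username", "demo-user")
    fill(driver, LOGIN_MODEL, "password", "demo-pass")
    click(driver, LOGIN_MODEL, "sign_in")
    assert "Dashboard" in driver.title

if __name__ == "__main__":
    driver = webdriver.Chrome()
    try:
        test_login(driver)
    finally:
        driver.quit()
```

When a developer renames an ID or restructures the form, only LOGIN_MODEL changes; the tests built on top of it keep running untouched.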
The new kid on the block is AI-driven UI test automation. This is a big topic that warrants a much deeper discussion — but here’s a quick overview. Specify tests in natural language. Build the tests even before the actual UI is ready. Use the same tests as the technical implementation changes. Understand dynamic behavior like a human tester would. Test absolutely any technology — faster than a human eye. Curious? Stay tuned.
Analysis of Results
When it comes to results analysis, false positives are the vampire sucking your resources. False positives are tests that fail even though the application is behaving correctly. They will really kill you in terms of cost and speed. First, someone needs to figure out why the test failed. This might involve manually checking the related application functionality, inspecting the test automation, reviewing the test data, checking whether every element of the test environment is up and working as expected — maybe even all of the above. Once the exact problem is found, it needs to be fixed. Then, the test needs to be rerun. If it still fails, rinse and repeat.
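One cheap way to shortcut part of that diagnosis is to check the test environment before blaming the test or the application. A minimal sketch, assuming hypothetical health endpoints (substitute whatever your environment actually exposes):

```python
import requests

# Hypothetical health endpoints; substitute whatever your environment exposes.
HEALTH_ENDPOINTS = {
    "web frontend":  "https://test-env.example.com/health",
    "api gateway":   "https://api.test-env.example.com/health",
    "test database": "https://db-admin.test-env.example.com/ping",
}

def environment_problems(timeout=5):
    """Return a list of unhealthy components, empty if everything responds."""
    problems = []
    for name, url in HEALTH_ENDPOINTS.items():
        try:
            response = requests.get(url, timeout=timeout)
            if response.status_code != 200:
                problems.append(f"{name}: HTTP {response.status_code}")
        except requests.RequestException as exc:
            problems.append(f"{name}: {exc}")
    return problems

if __name__ == "__main__":
    issues = environment_problems()
    if issues:
        print("Environment problems found; the failure may be a false positive:")
        for issue in issues:
            print(" -", issue)
    else:
        print("Environment looks healthy; investigate the test or the application.")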
How do you stop false positives? To start, stabilize the automation with model-based test automation or the AI-driven testing I mentioned in the previous section. As you move into more advanced automation scenarios, stateful test data management and service virtualization can help you stabilize tests to the point that they’re fit for CI/CD. Then, your final challenge is to deal with the flaky tests resulting from things like asynchronous application behavior, concurrency issues, and test order dependencies. Again, AI and ML can offer tremendous assistance here — in flagging potential false positives, and even in helping you identify the cause and optimal resolution.
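Even without ML in the loop, a basic rerun-and-classify step separates consistent failures from intermittent ones. The sketch below is one naive approach, assuming a pytest-style test runner; it is not a description of any particular AI feature:

```python
import subprocess

def run_test(test_id: str) -> bool:
    """Run one test through its runner; a pytest-style test id is assumed here."""
    result = subprocess.run(["pytest", test_id, "-q"], capture_output=True)
    return result.returncode == 0

def triage_failures(failed_tests, reruns=3):
    """Rerun each failed test a few times and classify it as consistent or flaky."""
    verdicts = {}
    for test in failed_tests:
        passes = sum(run_test(test) for _ in range(reruns))
        if passes == 0:
            verdicts[test] = "consistent failure: investigate the application or the test"
        else:
            verdicts[test] = f"flaky ({passes}/{reruns} reruns passed): quarantine and stabilize"
    return verdicts

if __name__ == "__main__":
    for test, verdict in triage_failures(["tests/test_checkout.py::test_total"]).items():
        print(f"{test} -> {verdict}")
```

AI/ML-based approaches go further by correlating failures with code changes, environment signals, and historical flakiness, but even this crude triage keeps humans from re-investigating the same intermittent failure over and over.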
Final Thoughts
COVID-19 split up the original quality at speed sweet spot into the extreme speed camp and the extreme cost-savings camp. In both cases, the same core strategies to increase your test efficiency will help you increase speed as well as reduce testing costs. If you’re under extreme time pressure, showing that you also reduced costs while meeting speed expectations certainly won’t hurt. If you’re currently reducing costs because your industry is essentially paused, taking these steps now will prepare you to speed it back up when the world becomes more normal again.