AI in Software Testing: The Hype, the Facts, the Potential
AI in software testing shows great promise for ensuring high-quality software without the manual drudgery of endless unit and integration testing.
Artificial Intelligence (AI) in software testing shows great promise in ensuring high-quality software without the manual drudgery of endless unit and integration testing. With AI, the promise goes, delivery times will shrink from minutes to seconds, and vendors and customers alike will experience a software renaissance of inexpensive and user-friendly applications. Unfortunately, the luxury of inexpensive storage, blazing-fast processing, readily available AI training sets, and the internet have converged to turn this promise into overblown hype.
Googling "AI in software testing" reveals an assortment of magical solutions promised to potential buyers. Many offer to reduce the manual labor involved in software testing, increase quality, and cut costs, and vendors promise that their AI solutions will solve software testing problems outright. The Holy Grail of software testing — the magical thinking goes — is to take human beings, with their mistakes and oversights, out of the software development loop and make the testing cycle shorter, more effective, and less cumbersome. The question is: should that be the main focus, and is it even possible?
The Reality
Taking humans out of the software development process is far more complex and daunting in the real world. Whether a team uses Waterfall, Rapid Application Development, DevOps, Agile, or another methodology, people remain central to software development because they define the boundaries and the potential of the software they create. In software testing, the "goalposts" are always shifting: business requirements are often unclear and constantly changing, user demands for usability evolve, and even developers' expectations of what the software can do shift over time.
The initial standards and methodologies for software testing (including the term quality assurance) come from the world of manufacturing product testing. In that context, products are well defined, and testing is far more mechanistic than it is for software, whose traits are malleable and constantly changing. Such uniform, robotic methods of assuring quality do not map onto software testing. In modern software development, many things simply cannot be known by developers in advance. For example, user experience (UX) expectations may have changed since the first iteration of the software: users may now expect faster screen load times or speedier scrolling, or lengthy scrolling down a page may simply have fallen out of vogue.
Whatever the reason, AI can never on its own anticipate or test for what its creators could not envision, so there can be no truly autonomous AI in software testing. Creating a software testing "Terminator" may pique the interest of the media and prospective buyers, but such a deployment is a mirage. Instead, software testing autonomy makes more sense within the context of AI working in tandem with humans.
AI Stages
AI for software testing essentially has three stages of maturity:
- Operational
- Process
- Systemic
The overwhelming majority of current AI-enabled software testing is at the Operational stage. At its most basic, Operational testing involves creating scripts that mimic the routines human testers would otherwise perform hundreds of times themselves. The "AI" in this instance is far from intelligent; it mainly helps with tasks like shortening script creation, repeating executions, and storing results.
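To ground the Operational stage, here is a minimal sketch of the kind of scripted routine such tools generate and replay, written with Playwright's Python API. The URL, selectors, credentials, and expected banner text are all hypothetical placeholders, not a real application.

```python
# A scripted routine of the kind Operational-stage tools generate and replay.
# The URL, selectors, credentials, and expected text are hypothetical.
import json
from playwright.sync_api import sync_playwright

def run_login_check() -> dict:
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("https://example.test/login")   # hypothetical app
        page.fill("#username", "test-user")       # mimic a tester's keystrokes
        page.fill("#password", "test-pass")
        page.click("button[type=submit]")
        banner = page.inner_text("h1")            # observe the outcome
        browser.close()
    return {"test": "login", "passed": banner == "Welcome"}

if __name__ == "__main__":
    # Repeated execution and stored results: the other two Operational wins.
    results = [run_login_check() for _ in range(3)]
    print(json.dumps(results, indent=2))
```

Nothing here is intelligent in any meaningful sense; the value is purely in removing repetition from a human routine.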
Process AI is a more mature version of Operational AI, with testers using it for test generation. Other uses include test coverage analysis and recommendations, defect root cause analysis, effort estimation, and test environment optimization. Process AI can also facilitate synthetic data creation based on patterns and usage.
Process AI can also provide an additional set of "eyes" and resources to offset some of the risks testers take on when setting up the test execution strategy. In practice, Process AI can make testing easier after code has been modified.
Manual testing often sees testers retesting the entire application to catch unintended consequences of a code change. Process AI, on the other hand, can recommend testing a single unit (or a limited impact area) instead of a wholesale retest of the entire application. At this level of AI, we find clear advantages in development time and cost. Unfortunately, at the third stage, Systemic AI, the future can become a slippery slope of unfulfilled promises.
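Before turning to that third stage, here is a minimal sketch of the change-based test selection just described. It assumes a precomputed map from each test to the modules it exercises; a real Process AI tool would mine that map from coverage instrumentation and version control, and the module and test names here are invented for illustration.

```python
# Test impact analysis: given the files touched by a commit, pick only the
# tests whose covered modules overlap the change, instead of rerunning all.
# The coverage map and changed files are illustrative stand-ins for data a
# Process AI tool would derive from instrumentation.

COVERAGE_MAP = {
    "test_checkout_total": {"cart.py", "pricing.py"},
    "test_login_flow":     {"auth.py", "session.py"},
    "test_search_ranking": {"search.py", "pricing.py"},
}

def select_impacted_tests(changed_files: set[str]) -> list[str]:
    """Return only the tests whose coverage intersects the change set."""
    return sorted(
        test for test, covered in COVERAGE_MAP.items()
        if covered & changed_files
    )

if __name__ == "__main__":
    # A commit that only touches pricing logic...
    changed = {"pricing.py"}
    # ...triggers two targeted tests rather than a wholesale retest.
    print(select_impacted_tests(changed))
    # -> ['test_checkout_total', 'test_search_ranking']
```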
Systemic AI
One of the reasons systemic — or fully autonomous — AI testing is not possible (at least for now) is the enormous amount of training the AI would require. Testers can be confident that a single unit test suggested by Process AI will adequately assure software quality. With Systemic AI, however, testers cannot know with high confidence that the software will meet all requirements.
If AI at this level were truly autonomous, it would have to test for all conceivable requirements, even those that humans have not yet imagined. Humans would then need to review the autonomous AI's assumptions and conclusions, and verifying them to a high level of confidence would take a great deal of time and effort. Autonomous software testing can never be fully realized because humans wouldn't trust it, which would defeat the purpose of working toward full autonomy in the first place.
Training AI
Though fully autonomous AI is a myth, AI that supports and extends human efforts at software quality is a worthwhile pursuit. In this context, humans bolster the AI: testers must consistently monitor, correct, and teach it with ever-evolving learning sets. The challenge is to train the AI while assigning risks to various bugs within the tested software. This training must be an ongoing effort, in the same way autonomous car makers continually train AI to distinguish between a person crossing a street and a bicycle rider.
Testers must train software testing AI with past data to build their confidence in the AI's capabilities. Yet truly autonomous AI in testing would need to project future conditions, both developer-induced and user-induced, which it cannot do from historical data alone. Instead, trainers build data sets according to their own biases, and those biases limit the possibilities the AI can explore, the way blinders keep a horse from wandering off an established path. The more biased the AI becomes, the less trustworthy it is, and confidence that it is performing as expected drops. The best the AI can be trained to do is deal with risk probabilities and arrive at risk mitigation strategies that are ultimately assessed by humans.
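To show what training on past data and producing risk probabilities can look like, here is a toy sketch using scikit-learn's logistic regression. The features, historical records, and labels are fabricated for illustration; the point is that the output is a probability for a human to assess, not a verdict.

```python
# A toy defect-prediction model: learn from historical code changes which
# ones introduced bugs, then score a new change with a risk probability.
# All numbers below are fabricated for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features per past change: [lines changed, files touched, author tenure (yrs)]
X_history = np.array([
    [500, 12, 0.5],
    [ 20,  1, 4.0],
    [300,  8, 1.0],
    [ 15,  2, 6.0],
    [450, 10, 0.8],
    [ 30,  1, 3.5],
])
# Label: 1 if the change later caused a defect, 0 otherwise.
y_history = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression(max_iter=1000).fit(X_history, y_history)

# Score an incoming change; the result is a probability, not a verdict.
# A human tester still decides whether the risk warrants deeper testing.
new_change = np.array([[250, 6, 1.5]])
risk = model.predict_proba(new_change)[0, 1]
print(f"Estimated defect risk: {risk:.0%}")
```

Note that the model can only reflect the history (and the biases) baked into its training set, which is exactly the limitation described above.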
Risk Mitigation
Ultimately, software testing is about managing testers' confidence. Testers weigh the probable outcomes of initial implementations and of code changes that could cause problems for developers and users alike. Confidence can never be 100% that testing has explored every possible way an application could break down. Whether performed manually by humans or autonomously by machines, all software testing carries an element of risk.
Testers must decide test coverage based on the probability of the code causing problems, and use risk analysis to decide which areas to focus on outside that coverage. Even if AI determines and displays the relative probabilities of software failure at any point in a chain of user activity, a human still needs to confirm the calculation. AI can suggest possibilities for software continuity, but those suggestions are shaped by historical biases, so humans would still not have high confidence in the AI's risk assessment or in its prescriptions to mitigate risk.
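One way to picture that division of labor is a prioritization step in which the AI supplies failure probabilities and a human supplies the business impact and signs off on the result. The scoring scheme below is a hypothetical sketch, not any vendor's algorithm, and all the numbers are invented.

```python
# Risk-based test prioritization: combine an AI-estimated failure
# probability with a human-assigned business impact, then order areas so
# the riskiest are covered first. The scores here are invented examples.

def risk_score(failure_prob: float, impact: int) -> float:
    """Simple expected-loss style score: probability times impact (1-5)."""
    return failure_prob * impact

areas = [
    # (area, AI-estimated failure probability, human-assigned impact 1-5)
    ("payment flow",   0.15, 5),
    ("profile page",   0.40, 2),
    ("search results", 0.25, 4),
]

prioritized = sorted(areas, key=lambda a: risk_score(a[1], a[2]), reverse=True)
for area, prob, impact in prioritized:
    print(f"{area:15s} prob={prob:.2f} impact={impact} "
          f"score={risk_score(prob, impact):.2f}")
# The ordering is a recommendation; a tester confirms it before committing
# coverage, precisely because confidence in the AI's assessment is limited.
```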
AI-enabled software testing tools should be practical and effective, producing realistic results for testers while alleviating their manual labor. The most exciting — and potentially disruptive — deployment of AI in software testing is at the second level of maturity: Process AI. As a Katalon researcher noted, "the biggest practical usage of AI applied for software testing is at that process level, the first stage of autonomous test creation. That would be when I can create automated tests that can be applied by and for me." Autonomous, self-directed AI that replaces all human involvement in software testing is hype. It is far more realistic, and more desirable, to expect AI to extend and supplement human efforts and shorten test times. And that future is not too distant.