AI in Test Automation


In this article, explore the opportunities of applying artificial intelligence (AI) to test automation.


This article is featured in the new DZone Guide to Automated Testing: Your End-to-end Ecosystem. Get your free copy for more insightful articles, industry statistics, and more!

Today, almost all IT projects face the challenge of operationalizing and deploying software and services with greater speed and accuracy, creating an unrelenting, high-pressure environment for the project team. Requirements shift daily, and there are never enough engineers to make it all happen perfectly. A major part of the burden on project teams is the need for continuous testing. In this article, I will explore the opportunities I've discovered by applying artificial intelligence (AI) to test automation.

AI is meant to make businesses far more capable and efficient; the best companies are using it to enhance customer and client interactions, not eliminate them. Big data collection and algorithmic advances are vastly extending the scope of test automation, making it possible for non-technical team members to define and scale tests with levels of capability and sophistication comparable to, or even greater than, developers'. In short, AI is transforming every facet of test automation by streamlining creation, execution, and maintenance, and by providing businesses with actionable, real-time insights that directly affect the bottom line.


More than 12 years ago, I launched a consulting business serving startup and enterprise clients alike. As I worked to shorten the time between committing a change to an application and that change reaching live production, I found that ensuring quality and reliability demanded a greater and greater share of resources. An ever-increasing variety of innovations, application components, and protocols interact within a single event or transaction. Over time, I realized that something more was needed. Then, in the fall of 2013, I saw how AI could shape the testing landscape, and I wrote the first line of code for Functionize.

The Origins and Limitations of Test Automation

Test automation is not new. The advent of Selenium in 2004 was a major advancement in empowering developers to take greater control of QA. However, the challenges of Selenium and popular record/replay frameworks became readily apparent to developers who tried to use the recorder in complex environments, as the selectors used to identify elements had to be continuously updated with every code change.
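
To make that brittleness concrete, here is a minimal sketch (plain Python, no real browser; the page markup and id names are invented for illustration): a recorded test stores a single locator, and any refactor that renames the attribute silently invalidates the test.

```python
from html.parser import HTMLParser

class IdCollector(HTMLParser):
    """Collects the id attributes present in a page, in document order."""
    def __init__(self):
        super().__init__()
        self.ids = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name == "id":
                self.ids.append(value)

def selector_matches(page_html, element_id):
    """Return True if a recorded locator (here, an element id) still exists."""
    parser = IdCollector()
    parser.feed(page_html)
    return element_id in parser.ids

# Version 1 of the page: the recorder captures id="submit-btn".
v1 = '<form><button id="submit-btn">Buy</button></form>'
# Version 2: a refactor renames the id, silently breaking the recorded test.
v2 = '<form><button id="checkout-submit">Buy</button></form>'

assert selector_matches(v1, "submit-btn")
assert not selector_matches(v2, "submit-btn")
```

The same failure mode applies to XPath and CSS selectors: the element is still on the page and the user journey is unchanged, yet the test breaks because its one identifying attribute changed.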

Test Creation Guided by Market Demand

Automated test creation has traditionally been limited to three methodologies: manual testing, scripted languages (some with greater degrees of modularity than others), and record/replay tools. Each offers value on its own, but all are constrained in conspicuous ways:

  1. Manual testing is slow, cannot scale with complex applications, and is not designed for today's CI/CD pipelines.
  2. Scripting is laborious, error-prone, and expensive, as engineering resources are usually required.
  3. Record/replay tools struggle to capture sophisticated user workflows, and editing those workflows often requires re-recording everything.

As I began listening to the market and our customers, it became clear that different options for test creation were desirable, but often not presented within the same tool. With Functionize, I sought to offer both conventional and new modes of test creation, all enhanced by AI:

  • Writing a user journey in plain English, or submitting a sequenced set of desired tests to our NLP engine, which uses AI to analyze and model the data.
  • Training our AI modeler to learn the application.
  • Using our Developer Mode, which enables robot-compatible scripting and automatically builds smart Page Object libraries that are modular and portable.
  • Relying on fully autonomous test creation that analyzes live user data and generates test cases from it.
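
To illustrate the plain-English mode, the sketch below is a toy rule-based parser, not Functionize's actual NLP engine; the step grammar, patterns, and action names are all invented. It maps English steps to structured actions that an automation runner could execute.

```python
import re

# Hypothetical grammar: each plain-English step maps to a structured action.
PATTERNS = [
    (re.compile(r'^click (?:the )?"(?P<target>[^"]+)"', re.I), "click"),
    (re.compile(r'^type "(?P<text>[^"]+)" into (?:the )?"(?P<target>[^"]+)"', re.I), "type"),
    (re.compile(r'^verify (?:the )?"(?P<target>[^"]+)" is visible', re.I), "assert_visible"),
]

def parse_step(step):
    """Turn one English step into an action dict, or fail loudly."""
    for pattern, action in PATTERNS:
        match = pattern.match(step.strip())
        if match:
            return {"action": action, **match.groupdict()}
    raise ValueError(f"Unrecognized step: {step!r}")

journey = [
    'Click the "Log in" button',
    'Type "alice@example.com" into the "Email" field',
    'Verify the "Dashboard" is visible',
]
plan = [parse_step(step) for step in journey]

assert plan[0] == {"action": "click", "target": "Log in"}
assert plan[1] == {"action": "type", "text": "alice@example.com", "target": "Email"}
assert plan[2] == {"action": "assert_visible", "target": "Dashboard"}
```

A real NLP engine goes far beyond fixed patterns, handling paraphrase and ambiguity, but the output shape, a sequenced plan of structured actions, is the same idea.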

How AI Impacts Test Creation

There is a lot of noise in the market around AI in test automation. Below are a handful of examples that serve as a litmus test for judging the degree to which AI is present in test creation.

  • Machine vision that automatically locates and identifies hundreds of selectors. This requires a much broader focus and ingest than just HTML and CSS.
  • AI and machine learning that continually scan and analyze the DOM and application states for meaningful information, rejecting noise and irrelevancies.
  • Page object recognition that happens continually and autonomously, increasing test modularity and scalability.
  • Fully autonomous test creation utilizing AI technologies via natural language processing and advanced modeling.

However, even test automation frameworks that go beyond traditional scripting methods and employ an image- or visual-based approach remain constrained. Test creation remains time-consuming, as the tester must manually select and drag the desired element for interaction. There also remains a high degree of selector maintenance due to a pixel/image approach to object recognition. Market leaders are struggling to integrate AI into their automation stack, and the result is confusing jargon that mis-defines AI as Awesome Integrations, not Artificial Intelligence.

How AI Impacts Test Execution

The dearth of true cloud-scale test execution options reveals that there is ample room for AI to drive new productivity. On-premises and even cloud technologies like Selenium Grid are still hampered by execution time, which depends on the number of nodes running, available memory, and the number of concurrent tests. The whole purpose of cloud computing is the ability to rapidly scale applications up and down depending on the workload, with information shared across all execution instances. As testers look for solutions to execute their tests at scale, the bar should be set very high wherever AI is claimed to be augmenting those processes. We set ourselves the following acceptance criteria:

  • Tests should be executable at scale, in the cloud, so they become more efficient and reliable with every subsequent run and release.
  • Tests should be executable from anywhere around the globe, from any device, with any bandwidth, and in all types of environments.
  • Even the most complex tests should take minutes to execute — not hours, let alone days.
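
The scaling criterion can be illustrated with a small sketch using only Python's standard library: a stand-in test function (the names and timings are illustrative, not a real suite) is fanned out across worker threads, so wall-clock time approaches the slowest single test rather than the sum of all of them.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_test(name):
    """Stand-in for a real browser test; sleeps to simulate work."""
    time.sleep(0.2)
    return (name, "passed")

suite = [f"test_{i}" for i in range(8)]

# Serially this suite would take ~8 x 0.2s; with 8 workers it takes ~0.2s.
start = time.monotonic()
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(run_test, suite))
elapsed = time.monotonic() - start

assert all(status == "passed" for _, status in results)
assert elapsed < 8 * 0.2  # faster than running the suite serially
```

Cloud execution applies the same principle at a different scale: instead of threads on one machine, each test gets its own ephemeral browser instance, so suite duration stays flat as the suite grows.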

How AI Impacts Maintenance

Rapid test creation is only as viable as the resiliency of executed tests. The most efficient way to ensure that test maintenance is not the bottleneck in a deployment pipeline is to identify what is actually happening to the data during test creation. The failure point of test maintenance ultimately resolves to inadequate data modeling during creation. AI can help here:

  • Self-maintenance: Resultant tests are modeled, and thus maintained, by an exhaustive and autonomous set of data points, such as an element's size, its location on the page, its previously known size and location, its visual configuration, XPath, CSS selector, and parent/child elements.
  • Self-healing tests: Root cause analysis highlights all potential causes for test failure and provides a path for one-click updates.
  • Data modeling: Selector maintenance should be eliminated by having elements identified by hundreds of data points that are rated and ranked instead of a single selector.
  • Computer vision diagnosis: AI makes visual diagnosis easy: identifying broken tests should take seconds in a visual environment and shouldn't require digging through scripts.

Applied AI methods such as these ensure that your testing frameworks learn the structure and organization unique to your application, minimizing human intervention.
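
A minimal sketch of the "rated and ranked data points" idea, with invented attribute names and weights: each element is stored as a fingerprint of many weak signals, and after a release the best-scoring live element is chosen even though some individual signals (here, the XPath and CSS class) have changed.

```python
def score(candidate, fingerprint, weights):
    """Weighted agreement between a stored fingerprint and a live element."""
    return sum(weight for attr, weight in weights.items()
               if candidate.get(attr) == fingerprint.get(attr))

# Fingerprint captured at creation time: many weak signals, not one selector.
fingerprint = {"tag": "button", "text": "Buy now", "css_class": "btn-primary",
               "xpath": "/html/body/form/button[1]", "width": 120}
weights = {"tag": 1.0, "text": 3.0, "css_class": 2.0, "xpath": 2.0, "width": 0.5}

# After a release, the XPath and class changed, but most signals survive.
live_elements = [
    {"tag": "button", "text": "Buy now", "css_class": "cta",
     "xpath": "/html/body/div/form/button[1]", "width": 120},
    {"tag": "a", "text": "Help", "css_class": "link",
     "xpath": "/html/body/a[1]", "width": 40},
]

best = max(live_elements, key=lambda el: score(el, fingerprint, weights))
assert best["text"] == "Buy now"  # re-identified despite the changed selectors
```

A production system tracks hundreds of such signals per element and learns the weights over time, but the principle is the same: no single selector is ever a point of failure.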

How AI-Powered Testing Automation Transforms Businesses

Businesses committed to implementing AI at the enterprise level are already experiencing greater operational efficiency and better product results. Developers are renegotiating their involvement within Agile and DevOps strategies, as smart algorithms are now capable of tackling the most repetitive problems in test automation. When test automation changes from the bottleneck to the catalyst within a CI/CD pipeline, product development is significantly streamlined, and executives gain previously unavailable business intelligence that directly impacts the bottom line.

Functionize is partnering with Google Cloud to build advanced anomaly detection through canary testing, in which a small set of users exercises the new code in production. AI compares the experience of these users with that of users running the existing code. Anomalies can then be identified automatically, and details passed back to the developers.
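
One simple way such a canary comparison could work, purely as an illustration and not Functionize's actual method, is a two-proportion z-test on error rates between the canary cohort and the baseline; the session counts below are invented.

```python
import math

def error_rate_z(canary_errors, canary_n, baseline_errors, baseline_n):
    """Two-proportion z statistic comparing canary vs. baseline error rates."""
    p1 = canary_errors / canary_n
    p2 = baseline_errors / baseline_n
    pooled = (canary_errors + baseline_errors) / (canary_n + baseline_n)
    se = math.sqrt(pooled * (1 - pooled) * (1 / canary_n + 1 / baseline_n))
    return (p1 - p2) / se

# 5% of canary sessions error vs. 1% on the existing code.
z = error_rate_z(50, 1000, 100, 10000)
print(f"z = {z:.2f}")  # well above the ~1.96 threshold for 95% confidence
assert z > 1.96        # flag the canary release as anomalous
```

A real system would compare many signals at once (latency, conversion, console errors) rather than a single rate, but the shape is the same: a statistical comparison between the canary cohort and the baseline, with alerts fed back to developers automatically.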



Published at DZone with permission of Tamas Cser. See the original article here.

Opinions expressed by DZone contributors are their own.
