Software Test Automation and AI
Let's take a look at software test automation and artificial intelligence as well as explore how to push the bounds of what we can automate in software testing.
Pushing the Bounds of What We Can Automate in Software Testing
We have this funny little tagline about how we're pushing the boundaries of test automation. It's a simple enough thing when you say it, but what do we really mean by it?
Recently, we were recognized by several industry analysts for the work we've been doing pushing those boundaries. At voke, they said, "Parasoft is a company borne of innovation with a relentless focus on software quality," and Forrester said, "Regarding AI, Parasoft has an impressive and concrete roadmap to increase test automation from design to execution, pushing autonomous testing."
But what does this mean in a practical way? Cool, analysts are talking about our innovative roadmap, but how does someone actually benefit from all of this innovation?
The "Boundaries" of "Test Automation"
Let's start with what we mean by "test automation." Usually, when people talk about test automation, they're referring to test execution, where you take something like a manual test, record it somehow, play it back, and put that into a test script that a tool can run on-demand (i.e. record and replay testing, which is cool and all, but it's definitely within the "bounds"). Other companies deliver automation on the test creation side — helping automate the creation of the test script in the first place.
And while those two techniques are interesting and useful, they're definitely well within the realms of normal test automation efforts today.
When we look at how to push the bounds, we're not looking at a single test step — we're looking at the whole testing practice. We don't want to move a single process left, we want to shift EVERYTHING left. And yes, my title gave it away — a lot of this comes back to Artificial Intelligence. When we bring AI technology and test automation together, the testing capabilities become infinitely larger.
We're not just pushing the boundaries — we can't even see where the boundaries ARE.
Of course, I've promised to talk about what this means in a practical way. Not a grandiose way, where we don't need boundaries, or cars, or humans, or...
So let's get right to it.
Innovating in Software Testing by Addressing Key Pain Points
So what exactly have we innovated, and how can you benefit from it? The recent Forrester report noted that all of our reference customers achieved test automation of over 50%, well over double the industry average. To achieve this — to really push the bounds of test automation in a practical way that increases your level of test automation — we identified three key challenges of software testing and created technologies to solve them. These are all available for you to use today.
1. Managing (and understanding) the impact of change
Testing organizations spend an inordinate amount of time maintaining their test cases as the application changes. Change comes in many different forms, including new functionality being introduced into the application, or a re-architecture of existing application functionality. Either way, the testing team needs to identify what has changed, map those changes to their current testing strategy to discover the delta, and devise a strategy for building new test cases to cover the gaps while updating existing test cases to keep working.
Sounds simple, perhaps. And it can be if your application has a small number of APIs. But this is a very difficult process in reality because most testing teams do not have intimate knowledge of an application's interfaces to begin with. When the change occurs, there's inevitably a rapid scramble to identify what the new capabilities are and build meaningful test scenarios against them. Understanding the rippling impact of changes in the system is essentially impossible for a human.
So we bring in the bots. Parasoft's Change Advisor identifies changes in application interfaces and provides a handy change template that maps different versions of the service to each other. It can then be used to bulk re-factor any test cases that might be affected by this change.
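To make the idea concrete, here is a minimal sketch (in Python, with entirely hypothetical endpoint and field names — this is not Parasoft's format or API) of the kind of version-to-version diff that such a change-detection workflow starts from: compare two interface descriptions, and report which fields disappeared or appeared per endpoint so that affected test cases can be refactored in bulk.

```python
# Hypothetical API descriptions: endpoint -> {field name: type}.
old_schema = {
    "GET /orders": {"id": "int", "total": "float", "status": "str"},
    "GET /users":  {"id": "int", "name": "str"},
}
new_schema = {
    "GET /orders": {"id": "int", "amount": "float", "status": "str"},  # "total" renamed
    "GET /users":  {"id": "int", "name": "str", "email": "str"},       # field added
}

def diff_schemas(old, new):
    """Return, per shared endpoint, which fields were removed and which were added."""
    changes = {}
    for endpoint in old.keys() & new.keys():
        removed = old[endpoint].keys() - new[endpoint].keys()
        added = new[endpoint].keys() - old[endpoint].keys()
        if removed or added:
            changes[endpoint] = {"removed": sorted(removed), "added": sorted(added)}
    return changes

changes = diff_schemas(old_schema, new_schema)
```

A real tool does far more (type changes, nested payloads, mapping templates between versions), but even this delta is enough to point every test touching `GET /orders` at the `total` → `amount` rename.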
With this technology, teams can reduce maintenance costs associated with test cases and virtual services, while giving testers the comfort of knowing that the artifacts they create today will still work tomorrow. And with this change mitigation workflow, we enable much wider test automation adoption and scaling.
2. Getting adequate knowledge about how the APIs work
"Am I building my API tests right?"
This is a question I hear all the time when working with customers to build an API testing strategy. It's basically the same as asking, "How long is a piece of string?"
The only way to know if you're really building the "right" API test is if you know that you really understand the API. Testers can be creative if they have knowledge. Once you know how a thing works, you can build all sorts of meaningful tests against it — but getting that initial knowledge isn't easy. How is one tester supposed to understand how all of the APIs work? I guess you could sit down with all of the developers in your spare time and ask them about the subtle intricacies of the application interfaces they've created. What are they, what are they for, how are they used, how would you test them? The questions are endless, and the problem, of course, is that you're not going to do that, and neither are they.
To tackle this challenge, we built the Parasoft SOAtest Smart API Test Generator. Packaged up into a simple plugin for Chrome, it uses AI technology to help close the knowledge gap associated with discovering relevant API calls and understanding the associations between API calls to create a meaningful sequence of steps required for testing.
With the Smart API Test Generator, you can monitor and extract API calls from an application to a REST backend API. The artificial intelligence looks for patterns and relationships in the data, forms an API test scenario that models the interaction between the front end and the APIs, and helps organizations understand not only what APIs are available, but ultimately how to test those APIs using realistic scenarios.
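One core idea behind turning recorded traffic into a scenario is data chaining: a value returned by one API response that reappears in a later request is a dependency between the two steps. The sketch below (Python, with made-up endpoints and field names — not the actual Smart API Test Generator logic) shows how such links could be inferred from a recorded call sequence.

```python
# Hypothetical recorded traffic: each entry is one observed API call.
recorded = [
    {"method": "POST", "path": "/carts", "request": {}, "response": {"cartId": "c42"}},
    {"method": "POST", "path": "/carts/c42/items", "request": {"sku": "A1"}, "response": {"itemId": "i7"}},
    {"method": "POST", "path": "/orders", "request": {"cartId": "c42"}, "response": {"orderId": "o9"}},
]

def infer_links(calls):
    """For each call, find earlier responses whose values it reuses.

    Returns tuples (producer_step, response_field, consumer_step).
    """
    links = []
    for i, call in enumerate(calls):
        # Values this call depends on: request body values plus path segments.
        used = set(call["request"].values()) | {seg for seg in call["path"].split("/") if seg}
        for j in range(i):
            for key, value in calls[j]["response"].items():
                if value in used:
                    links.append((j, key, i))
    return links

links = infer_links(recorded)
# Step 0's cartId feeds both the add-item call (via its path) and the order call.
```

Once those links are known, the recorded calls stop being three independent requests and become one replayable scenario: create a cart, add an item to *that* cart, order *that* cart.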
3. Effectively managing (and not wasting) time
If you are able to overcome the knowledge gap and still somehow build a giant library of meaningful test scenarios, you still need to know when to execute those tests. Testers are under immense pressure to achieve adequate test coverage given their available time, but you can't simply run all of your tests all the time. You need a reliable method to run the exact tests that you need, given the change that has occurred.
Of course, everything boils down to time. Time to deal with the change, time to learn the APIs. Have you set up the most effective use of your test automation given the time that you have for execution? This is a big one. I was recently working with an organization that had amassed 18,000 tests. While this sounds great on the surface, they didn't have enough time in the evening to execute them all. Additionally, if all of them did execute, chances were some of them would fail, and it wasn't immediately apparent how to cut through the noise and understand exactly what had taken place. Test automation can help you overcome a lot of challenges, but you can accidentally add a whole lot of testing time, instead of reducing it, if you're not effectively managing change.
And here is where all of these challenges sort of come together. To achieve quality at speed, you need to make testing more efficient, which is all about better understanding change — it's not about creating more tests, it's about creating fewer tests with higher coverage. So how do we increase our knowledge about what tests to create and run, to save time and accelerate delivery?
Here, it starts in the developer's IDE. We've been innovating quite a bit here as well, and now, developers can use Parasoft's Change-Based Testing technology to automatically identify and run only the tests that are affected by code changes. The technology goes beyond ordinary Change-Based Testing to enable development organizations to take a surgical approach to test execution, rather than having to execute every single test every time there's a check-in. The technology understands how different areas of the code affect each other and then uses this information to execute only the tests that are affected by code change from a given check-in.
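The selection step itself can be pictured very simply. Assuming you already have a coverage map of which source files each test exercises (the file and test names below are invented for illustration — this is a sketch of the general technique, not Parasoft's implementation), change-based selection is an intersection between that map and the change set:

```python
# Hypothetical coverage map: test name -> set of source files it exercises.
coverage_map = {
    "test_checkout": {"cart.py", "payment.py"},
    "test_login":    {"auth.py"},
    "test_catalog":  {"catalog.py", "cart.py"},
}

def select_tests(coverage, changed_files):
    """Return only the tests whose covered files overlap the change set."""
    return sorted(t for t, files in coverage.items() if files & changed_files)

selected = select_tests(coverage_map, {"cart.py"})
# Only the tests touching cart.py are selected; test_login is skipped entirely.
```

The hard part in practice is building and maintaining an accurate coverage map as code changes ripple through the dependency graph; but given that map, running "only the tests affected by this check-in" is exactly this kind of filter.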
But I promised to push the bounds! Of course. This is happening in a real way here. Our technology understands the specific impact of a change across the application and uses that information to run the right test cases against the right code, all from the IDE, before the code is even integrated into the application. That means developers can understand, at a local level and before they commit, what the impact of their changes will be on the broader application. This significantly reduces the time the organization spends reacting to application failures and increases the focus on the reliability and stability of the entire application.
Innovation as a Catalyst for Action
These three big, maybe-even-nebulous challenges certainly can be tackled from up and down and all around, but we are finding, more and more, that if you leverage the machines to work for you, focusing on achieving high levels of test automation in as many ways as you possibly can, you can achieve complete, end-to-end application testing. (For some real examples of our customers that are using these technologies to do just that, you can check out a recent webinar I delivered with our Director of Development that's available to watch here: The Future of Test Automation: Next-Generation Technologies to Use Today).
Now. If you scrolled all the way down to the bottom after reading the title, and just want to understand the takeaway (Test Automation + AI = ??), let me try and give you a good little nugget. To push the bounds of automation in software testing, you can leverage artificial intelligence in many places, including understanding how APIs work, building effective API test scenarios, identifying changes that have occurred, and executing the right tests at the right time. And then, you can apply Machine Learning to take it all to the next level. But that's the next topic, for the next rainy day.
To start, leverage all of this test automation innovation to tackle the specific software testing challenges you're facing at your organization. By taking advantage of these next-generation technologies, you can focus on change as a catalyst for action, as opposed to worrying about change threatening your innovation.
Published at DZone with permission of Chris Colosimo, DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.