Solving the Top 10 Test Automation Challenges
To accelerate software releases, you need to automate UI testing. Learn how to handle these top 10 challenges in test automation.
Web applications’ dynamic, rapidly changing, and business-critical nature pushes the limits of traditional test automation tools and open source frameworks.
Changes to your application require changes to your tests. Not just for the new features but also for your regression tests.
At Testim, we speak with developers and automation engineers every day who share the challenges they face in keeping up with the demand for UI testing. This post discusses ten key challenges we frequently hear from them and how a more modern approach helps solve them, either simply or automatically.
Admittedly, the list mixes broader, more conceptual challenges with relatively specific ones that stem from an application's behavior or from limitations in the framework a team currently employs.
1: Building Test Coverage
Test coverage is the extent of the application’s features or functionality verified through testing. When people tell us they have a challenge building test coverage, they often mean that they don’t have sufficient resources to write tests fast enough to keep up with the growth in the application.
Attempts to solve the challenge often involve hiring more people or forgoing testing, which only shifts the problem out a few months.
Low-code tools can help speed authoring by minimizing complexity and removing the bottleneck on specialized skills. Capturing test cases is now significantly easier, more accurate, and more stable than in early iterations of “record and playback.” AI-powered tools help model the application under test, understand the relationships between DOM elements, and use multiple attributes to improve stability. By speeding up authoring, they help many agile teams catch up on coverage.
2: Identifying Dynamic Elements
Traditional test automation frameworks identify visual elements in the application by a CSS property or by the element’s location on the page. When the location or attributes change through normal development activities, the changes typically break the related UI tests. Fixing or updating broken tests is often referred to as maintenance. Prospects tell us that maintenance on traditional open source frameworks can consume up to 40% of a team’s quality resources.
Several methods for identifying dynamic elements include using backup locators or computer vision (comparing images). Backup locators are only marginally better than single locators and require manual adjustment. Computer vision can help with regression testing of relatively static applications but lacks stability for new or dynamic features.
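The backup-locator approach can be sketched in a few lines. This is a hypothetical, framework-agnostic illustration: the `page` dict stands in for a real DOM query API such as a WebDriver `find_element` call, and the selectors are invented examples.

```python
# Sketch of locator fallback: try a primary locator first, then backups,
# instead of relying on a single brittle selector. The page dict is a
# stand-in for a real DOM query API (e.g., a WebDriver session).

def find_with_fallback(page, locators):
    """Return the first element matched by any locator, or None."""
    for locator in locators:
        element = page.get(locator)  # stand-in for driver.find_element(...)
        if element is not None:
            return element
    return None

# Simulated page where the original id changed but a data attribute survived.
page = {"[data-test=submit]": "<button>Submit</button>"}
locators = ["#submit-btn", "button.primary", "[data-test=submit]"]
print(find_with_fallback(page, locators))  # falls through to the data attribute
```

As the text notes, this is only marginally better than a single locator: every backup still has to be chosen and maintained by hand.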
The most stable tests come from AI-powered tools that deeply inspect and understand the application’s elements, attributes, and the relationships between elements. Even better if the application learns from test runs and adjusts to reflect application changes over time.
3: Synchronizing Tests With the AUT
When the test executes in the automation platform, the timing of the test steps has to match the timing of the application, or the test won’t find the correct elements. Techniques for keeping the test and application in sync include time-based waits, which can be applied per step or per test, and event-based waits, which prevent moving to the next step until an event occurs (e.g., an element or text is/is not visible). You can also add conditions to waits to make them more flexible.
However, adding waits slows down execution times. The key is to add as few waits as possible to achieve the desired stability while minimizing the impact on speed. There’s a second and related tradeoff: time spent tweaking the test versus acquiring the optimal mix of stability and speed.
Some companies are experimenting with using computer vision to identify when the page is ready for the next step to handle these tradeoffs. However, until that technology matures, you will want options for using different time-based, event-based and conditional waits.
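The core of an event-based wait is a poll-until-true loop with a timeout, which is how tools like Selenium's `WebDriverWait` behave. Here is a minimal, self-contained sketch of that pattern; the condition checked is a simulated application state, not a real browser event.

```python
# Minimal sketch of an event-based wait: poll a condition until it is truthy
# or a timeout expires, instead of sleeping a fixed amount of time.
import time

def wait_until(condition, timeout=5.0, poll_interval=0.1):
    """Return True once condition() is truthy, False if the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(poll_interval)
    return False

# Simulated application state that becomes "ready" after a short delay.
ready_at = time.monotonic() + 0.3
assert wait_until(lambda: time.monotonic() >= ready_at, timeout=2.0)
```

Note the tradeoff from the text in miniature: a short `poll_interval` keeps execution fast when the condition is met early, while the `timeout` caps how long a broken step can stall the run.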
4: Troubleshooting Failures
When test failures occur, and they will, you’ll want to diagnose them quickly. The tools should make it easy for anyone on the team to understand why a test failed. Even better, you’ll want tools to help you prioritize your work and point to recurring errors that might have impacted multiple tests.
Look for tools that provide before/after screenshots at every test step without requiring extra coding. Videos can be helpful but are slower to load and often aren’t as quick to pinpoint what went wrong. Network and console logs can be beneficial for additional diagnosis but should be automatically included in the test results rather than a separate task to perform.
Advanced tools don’t just tell you where it broke; they tell you why it broke. Modern tools can also help you triage your work by aggregating common errors and showing each test’s recent results history.
5: Customizing Codeless Tests with Code
This one doesn’t apply to coded test frameworks because, hey, they’re already in code.
There are many low-code or codeless test automation tools in the market that simplify UI test authoring by using model-based or record/playback approaches to authoring tests. Some of them severely limit what you can do with the test to customize it to fit your application.
Adding code to a codeless test should be a must-have capability to ensure you have the flexibility to meet unique use cases. Make sure the code can be written in a language your team knows and can support. Consider whether the code will run in the browser or on the server (e.g., Node.js); both can be useful. Finally, make sure you can add code to a shared step, embed it in a group or module, and use it across tests.
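Conceptually, a codeless test is a sequence of recorded steps, and a custom code step is just one more entry in that sequence. The sketch below is hypothetical and not any specific vendor's API: `run_test`, the step tuples, and the context dict are all invented for illustration.

```python
# Hypothetical sketch: a codeless test as a list of recorded steps, with one
# custom code step injected mid-flow. None of these names reflect a real
# vendor API; they only illustrate the idea.

def click(ctx, target):
    ctx["clicked"].append(target)

def custom_validation(ctx):
    # Custom logic the recorder can't express: a derived assertion on state.
    ctx["valid"] = len(ctx["clicked"]) == 2

test_steps = [
    ("click", "login"),
    ("click", "submit"),
    ("code", custom_validation),  # the custom code step embedded in the flow
]

def run_test(steps):
    ctx = {"clicked": [], "valid": False}
    for kind, payload in steps:
        if kind == "click":
            click(ctx, payload)
        elif kind == "code":
            payload(ctx)
    return ctx

result = run_test(test_steps)
print(result["valid"])
```

The design point: the custom step receives the same context as the recorded steps, so it can read and modify test state rather than living in an isolated sandbox.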
6: Running Cross-Browser Tests in Parallel
There are plenty of articles on the importance of cross-browser testing, yet many development teams only focus on Chrome. Why? Partly because it’s costly to stand up and maintain a cross-browser testing grid.
If your application is accessible by different browsers, you should perform at least some cross-browser tests to ensure it’s functioning correctly. There are two options for making this easier: go with a tool that has a built-in cross-browser testing grid or integrate your testing with a device farm or virtual testing grid service. The former tends to be simpler and cheaper, while the latter will give you a broader array of devices and browser-type configurations.
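Fanning the same suite out across browsers in parallel is essentially a map over browser names. A minimal sketch using Python's standard thread pool, where `run_suite` is a stand-in for creating a real remote session against a grid or device farm:

```python
# Sketch of running the same suite across browsers in parallel with a thread
# pool. run_suite is a stand-in for launching a real remote browser session
# (e.g., against a testing grid) and executing the tests in it.
from concurrent.futures import ThreadPoolExecutor

BROWSERS = ["chrome", "firefox", "edge", "safari"]

def run_suite(browser):
    # A real implementation would create a remote session for this browser
    # and run the suite; here it just reports a simulated result.
    return (browser, "passed")

with ThreadPoolExecutor(max_workers=len(BROWSERS)) as pool:
    results = dict(pool.map(run_suite, BROWSERS))

print(results)
```

Whether the sessions land on a built-in grid or a third-party service, the fan-out logic looks the same; what changes is where `run_suite` points.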
7: Handling Random Pop-up Windows
Pop-ups are one of those niche challenges that may or may not apply, depending on your application.
Pop-ups can benefit your application, but they aren’t always predictable and can block your test from continuing to the next step.
Many tools require you to know where the pop-up occurs, switch to the active window, close it, and then switch back to the application’s main window. While these steps can work for expected warning pop-ups, they don’t help with random pop-ups from integrated tools like chatbots or time-based promos, which can block elements until closed. For those, you’ll need solutions that search for pop-ups before each step and handle them by closing or canceling.
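The search-before-each-step approach amounts to wrapping every action in a guard that dismisses overlays first. A hypothetical sketch; the `page` dict and `safe_step` wrapper are invented stand-ins for a real driver and step runner:

```python
# Sketch of a pop-up guard: before every step, dismiss any overlays that
# could block the target element. The page dict simulates browser state.

def dismiss_popups(page):
    """Close any overlays that could block the next interaction."""
    dismissed = list(page["popups"])
    page["popups"].clear()
    return dismissed

def safe_step(page, action):
    dismiss_popups(page)  # the guard runs before every step
    return action(page)

# A chat widget appears at a random time; the guard clears it before the click.
page = {"popups": ["chat-widget"], "clicks": []}
safe_step(page, lambda p: p["clicks"].append("checkout"))
print(page["clicks"])
```

Because the guard runs on every step, the test doesn't need to predict when the pop-up appears, which is exactly what makes random pop-ups hard for window-switching approaches.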
8: Maximizing Reuse of Test Components
Don’t Repeat Yourself (DRY) is a coding concept that also applies to testing. If your tests contain steps that are repeated across many tests, a change to the underlying element means many tests need to be updated. If, instead, those steps or groups were shared and reused across tests, you could update them once to fix all of the dependent tests.
At the same time, to encourage reuse, the people writing tests need quick and easy access to those reusable components, or they won’t get used. In addition, the reusable components should be flexible enough (like code) to allow some modification in specific tests, whether through parameterization, special handling, etc.
Look for a tool that makes it easy to create and share reusable components. Make sure that it’s also easy to find and add those components to the test, whether during the authoring process or in subsequent editing steps. Finally, even if it’s a low-code test platform, it should enable some form of test refactoring to clean up duplications and replace them with reusable components.
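A parameterized shared step is the simplest form of this reuse: one group encodes the flow, and each test passes in its own values. This is an illustrative sketch, not a specific tool's API; `login_group` and the session list are invented names.

```python
# Sketch of a parameterized shared step: one login group reused by many
# tests, with per-test values passed in rather than duplicated.

def login_group(session, username, password):
    """Shared steps: fill credentials and submit, recording each action."""
    session.append(("type", "username", username))
    session.append(("type", "password", password))
    session.append(("click", "login"))
    return session

admin_test = login_group([], "admin", "s3cret")
guest_test = login_group([], "guest", "guest")

# A change to the login flow is made once, inside login_group, and every
# dependent test picks it up automatically.
print(admin_test[0], guest_test[0])
```

The same principle applies whether the shared unit is a code function, a recorded group, or a low-code module: parameters keep it flexible while the single definition keeps maintenance in one place.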
9: Reporting and Understanding Results
Pass/fail reporting may be sufficient to understand a handful of tests. Yet, its value diminishes as you add more tests, testing types (smoke, regression, etc.), and users evaluating the results. Larger projects demand more sophisticated reporting that helps illustrate the overall state and direction of quality so that you can take action before your managers start implementing “quality catch-up initiatives.”
Look for built-in reporting that’s easy to run and share across the team on a frequent and recurring basis, such as weekly. Look for flexibility through filtering and sorting to create different views that work for your stakeholders. The reporting should enable drill-down into the details to answer ad-hoc questions and help you gain insights that can lead to ongoing improvements.
10: Scaling Test Automation Efficiently
If you haven’t experienced it yet, scaling your test automation project and team raises new issues such as effective test organization, change control, and ownership. Scaling is especially challenging for agile teams where different roles participate in quality assurance, either part- or full-time.
Consider tools that help you organize tests along different dimensions, such as by feature or user type and by type of test (e.g., smoke, sanity, regression). Do the tools help you manage the lifecycle of a test from draft to active to quarantine? You don’t want a test you are still evaluating to fail the CI build and slow the release train, but you still want to learn from it and make adjustments until it is production-ready.
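The draft/active/quarantine lifecycle translates directly into a CI gating rule: every test runs and reports, but only failures in active tests can break the build. A minimal sketch, with invented status names and result records:

```python
# Sketch of lifecycle-aware CI gating: quarantined and draft tests still run
# and report results, but only "active" failures can fail the build.

tests = [
    {"name": "checkout", "status": "active", "passed": True},
    {"name": "new-promo", "status": "quarantine", "passed": False},
    {"name": "login-v2", "status": "draft", "passed": False},
]

def ci_gate(results):
    """Return True if the build should pass, ignoring non-active tests."""
    return all(t["passed"] for t in results if t["status"] == "active")

print(ci_gate(tests))  # quarantined/draft failures don't block the release
```

The quarantined test's failure still shows up in reporting, so the team can keep tuning it toward production readiness without holding the release train hostage.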
Also, consider how different roles contribute to your tests. Do you have manual testers, developers, automation engineers, and other roles as part of your QA process? If so, maybe they all shouldn’t have access to change tests willy-nilly. Enforcing change reviews by an SME or approvals through a pull request can be best practices to help drive higher standards.
The good news is that there are solutions to most of these challenges. Some may require the adoption of a new tool or a change in the process. Modern test automation tools are rapidly evolving and simplifying the authoring, maintenance, and management of test automation projects so your team can focus on what you want to do—build innovative applications.
Opinions expressed by DZone contributors are their own.