
Why Automated Test Projects Fail

The points discussed here do not generally apply to tech companies or companies with strong technical leadership. These problems tend to show up where leadership lacks familiarity with automated testing. Also, I am referring to end-to-end automated tests here rather than unit testing.


Automated end-to-end tests aim to replace the work manual testers would perform, with automation via the front end as well as programmatic testing of back-end APIs and performance testing. Not everything a manual tester does can be automated; it's hard to automate aspects of UX and usability testing, for example. But most repetitive test tasks can be. In my experience, most tests can be automated, including tests of complex functionality.

Automating testing saves time and allows more testing to get done than otherwise would be possible. The interval between release candidates can be shorter than the time it would feasibly take to test a build comprehensively by hand. The reasons automation is desirable are easy to understand, but it's not unusual to see attempts to implement and maintain such a project fail to live up to the initial promise.

Here are the reasons, based on my experience, why such projects sometimes fail, with suggestions on how to avoid failure.

1. Bringing in the Wrong People for the Job

Automated tests are programs. Automating tests involves writing software, and that requires software developers, the same way assembly-line automation requires robots and robotics engineers. This may seem obvious, but I have seen occasions where the people brought in to develop automated tests were not software developers.

Perhaps you’ve worked with a manager who suggested that a manual tester could automate the tests. I have never seen this work unless the manual tester also happens to be a strong programmer. Part of this seems to stem from a misconception about what automated tests are, which I cover later.

Another reason this happens is a belief that automated tests can be generated, either procedurally or using machine learning. So far, the effectiveness of this is limited to boilerplate and simple test cases. A comprehensive test suite requires handwritten code, so you’ll need developers. Again, automated tests are software; the day automated tests can be fully generated is the day any other software, including feature code, can be automatically written too.

2. Lack of Code Quality, Maintainability, and Extensibility

Automated tests measure correctness and quality. The test code itself should also be high quality, maintainable, and extensible to ensure lasting success.

This is often not the case. One reason is the wrong-people-for-the-job issue discussed above. Strong fundamentals in computer science, a good grasp of algorithms and data structures, and experience all help a software developer appreciate how important it is that the software they write meets these requirements.
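One well-known way to keep end-to-end test code maintainable and extensible (my illustration, not a prescription from the article) is the page object pattern: selectors and interactions for a screen live in one class, so a UI redesign means updating one place rather than every test. A minimal Python sketch, with a fake driver standing in for a real Selenium or Playwright driver:

```python
# Page object sketch. The selectors below are hypothetical; in a real
# suite the driver would be a browser automation client with the same
# small type/click interface.

class LoginPage:
    USERNAME = "#username"            # hypothetical CSS selectors,
    PASSWORD = "#password"            # defined once for all tests
    SUBMIT = "button[type=submit]"

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, user, password):
        self.driver.type(self.USERNAME, user)
        self.driver.type(self.PASSWORD, password)
        self.driver.click(self.SUBMIT)
        return self.driver

class FakeDriver:
    """Records interactions instead of driving a real browser."""
    def __init__(self):
        self.actions = []
    def type(self, selector, text):
        self.actions.append(("type", selector, text))
    def click(self, selector):
        self.actions.append(("click", selector))

driver = LoginPage(FakeDriver()).log_in("qa", "secret")
print(len(driver.actions))  # 3 recorded interactions
```

If the login form's markup changes, only the class attributes change; every test that logs in keeps working untouched.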

Another reason quality can suffer is corners being cut for lack of time, or to meet a deadline that stands despite awareness that the automated tests will take longer to develop than the time available. This is a management issue: adequate time should be factored into sprints.

Another reason is not seeing the automated testing software as a long-term project, discussed next.

3. Not Seeing It as a Long-Term Software Project

Automated end-to-end tests will require updating and maintenance for as long as the product or service they test exists. That may mean the lifetime of the company, if the application under test is the product the company sells or a service critical to supporting and enabling the business.

Yet short-term contractors are sometimes recruited to build the automation. What happens when they leave? Who will maintain the tests? Who will update them when endpoints change, or, in the case of UI tests, each time the UI is redesigned, the flow changes, or a new feature is added? Think of this as a long-term project that lives side by side with the product under test.

4. Misunderstanding What Automation Is

This one hopefully doesn’t apply if a team is led by someone with a technical background. Sometimes leadership is unfamiliar with what automated test development involves. They understand that building a software product takes a team of software developers, sprints, design, and so on, but they misunderstand what developing automation means.

Some education can help here. Make it clear that automated tests are software and should be treated as a software development project; they are not something that can be pulled in off the shelf. Development of the tests should begin as early as possible, in parallel with feature and product development. It’s incorrect to assume a product can be created and made ready to ship, and then suddenly automated tests will be available. That is a fundamental misunderstanding of what is involved.

5. Giving Up Too Soon on What Can Be Automated

Some things, supposedly, just cannot be automated: scanning QR codes, validating 2FA, verification emails, playing through multiple endings of a video game. In fact, all of these can be automated; I’ve either done it myself or seen it done. Don’t give up too fast and give in to perceptions about what cannot be automated or what is not ‘worth’ automating.

Sometimes automation turns out to be surprisingly simple to achieve with the right attitude. As long as it’s a programmatic task of some sort, it can be automated. By making sure these difficult-looking cases get automated, the value your automated testing program delivers increases massively.
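As one example of a "difficult-looking" case: 2FA codes based on TOTP can be generated inside a test harness directly, because the algorithm (RFC 6238) is just HMAC-SHA1 over a time-step counter. A self-contained Python sketch using only the standard library; in a real suite the shared secret would come from your test account's 2FA enrollment:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """Compute an RFC 6238 TOTP code: HMAC-SHA1 over the time-step counter."""
    key = base64.b32decode(secret_b32, casefold=True)
    t = time.time() if for_time is None else for_time
    counter = struct.pack(">Q", int(t // step))
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: ASCII secret "12345678901234567890", T = 59 s.
SECRET = base64.b32encode(b"12345678901234567890").decode()
print(totp(SECRET, for_time=59, digits=8))  # prints 94287082
```

The same idea extends to the other examples: verification emails can be fetched via a mailbox API, and QR codes can be decoded with an image library, as long as the task is programmatic.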

6. Scope Too Limited

Successful automated testing projects have several parts. There is the test executor and runner, sometimes called the test harness, and too often that is the entire scope of a test automation project. While the test harness is the heart of the project, the ROI is limited if that’s all there is.

Test jobs should integrate with the build and continuous integration systems. End-to-end tests are often long-running, so running them on every commit or build may not be practical for large teams, but they should be set up to run on a schedule, daily for example. The ability to run ad hoc is always useful, but a scheduled job on top of this ensures tests run regularly without intervention. For ad hoc runs, consider exposing an easy way to trigger them.

While a test harness is being developed, tests are usually run from a development machine. For running elsewhere, consider adding tooling to help. For front-end tests on web and mobile you may need a device farm, either a local one your team builds itself or a cloud-based one. For back-end API tests, especially performance tests, you could deploy the test harness on cloud instances in different regions.

Tooling around how the output of testing will be consumed is critical, and that is discussed next.

7. Lack of Visibility, Accountability, and Reporting

Once automated tests are running on a regular schedule, triggered by events, or run ad hoc, test results and related output data start getting produced. Thought should be put into how to collect, report, and analyze this data. If the data is difficult to access (for example, it requires checking logs on a remote server) or there is no visibility of test results and test data, it becomes impossible to understand what’s going on with the automated tests.

This mistake can jeopardize the entire automation project if not rectified quickly, because without reporting of the quality measurements captured, the automated tests may as well not exist. Done correctly, great reporting acts as a hub not just for results but for viewing test case details, seeing how often tests run and what’s failing, and then handling the outcome: assigning failing test cases and linking bugs so that action is taken on what has been discovered.
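At its simplest, "collecting and reporting" means turning raw per-test results into a summary the team can act on. An illustrative Python aggregator; the record shape and field names are my own assumptions, not taken from any particular tool:

```python
# Aggregate raw (test_name, outcome, duration_seconds) records into a
# summary report. The record format here is an illustrative assumption.
from collections import Counter
import json

def summarize(results):
    counts = Counter(outcome for _, outcome, _ in results)
    total = len(results)
    return {
        "total": total,
        "passed": counts.get("pass", 0),
        "failed": counts.get("fail", 0),
        "pass_rate": round(counts.get("pass", 0) / total, 3) if total else None,
        "failing_tests": sorted(name for name, outcome, _ in results
                                if outcome == "fail"),
    }

results = [
    ("test_login", "pass", 1.2),
    ("test_checkout", "fail", 3.4),
    ("test_search", "pass", 0.8),
]
report = summarize(results)
print(json.dumps(report, indent=2))
```

A real reporting hub builds on exactly this kind of summary: persisting it per run, trending it over time, and attaching owners and bug links to the failing tests.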

I started Tesults because I feel so strongly about getting this right. I had implemented something more basic across three teams at two companies and had seen the multiplying effect that great reporting has on the impact of automated testing. Whatever you use, make sure your results data is visible and accessible and holds the team to account.

To end on a positive note: when a fully functional end-to-end automated testing suite is operational, it’s a beautiful thing and delivers massive value to any team or company fortunate enough to have it. Issues are found and fixed earlier, and releases happen faster, with assurance that they are of high quality.

Reducing the risk of each release takes a weight off the dev team, and since more testing is taking place than was ever possible before, there is room for greater experimentation, especially with refactoring. Most importantly of all, your customers are happier because they get faster updates without things breaking. Don’t make the mistakes addressed above, and good luck.


Further Reading

Your Guide to Automated Testing [Articles and Tutorials]

Starting Automation Testing From Scratch? Here Is What You Need to Know!

14 of the Best Automation Testing Tools Available Today

Topics:
software engineering, software test automation, software testing, software development, project management, automated testing, engineering management, quality assurance, quality engineering, performance

Published at DZone with permission of Ajeet Dhaliwal . See the original article here.

Opinions expressed by DZone contributors are their own.
