What Are Your Automated Tests Really Worth?
A look at the economics of automated tests, and how to get the maximum possible return on investment.
Only the blissfully ignorant or the insanely confident would forgo automated testing in a modern development project. But automated tests have a cost, both in development and in maintenance. In this article, we take a look at the economics of automated tests and investigate how to get more bang for your buck out of your automated test suite.
Automated Tests Have Value
No one would question that an automated test has value. But not all tests are created equal, and it is sometimes worthwhile to step back and question where this value comes from.
They Save Testers Time
The obvious value of an automated test comes from the time saved in manual testing, and the faster feedback on regressions. Automated tests free up manual testing efforts for deeper, more intelligent testing such as exploratory testing.
This is the easiest and most obvious way to measure the immediate value of an automated test suite: how much time (and money) would it cost for manual testers to perform the automated test suite by hand.
However, there are other, less tangible ways that automated tests provide real value.
They Reduce the Fear of Change
Automated tests provide much faster feedback when things go wrong. Faster feedback from automated tests (whether run locally or on a build server) makes it easier for developers to ensure that their changes don't break existing work, and reduces the time wasted during integration.
But the real benefit of faster feedback for developers is that they end up less hesitant to make changes (after all, they know that the tests will tell them if they break anything), which in turn leaves more space for innovation and creativity.
Some teams working on very large projects distinguish between automated tests that exercise core components and features that are in development, and more stable features that are less likely to change, running the regression tests for the latter on a less frequent basis. This lets them provide much faster feedback on the high risk and more volatile areas of the application, while still providing some level of guarantee against regressions in more stable areas.
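One way to implement this kind of split is to tag tests by volatility and have the build server run each tier at a different frequency. Here is a minimal, self-contained sketch of the idea (a real suite would use its framework's own tagging mechanism, such as pytest markers; all names here are illustrative):

```python
# Tag tests by volatility so CI can run them at different frequencies:
# "core" tests cover volatile, high-risk areas and run on every commit;
# "stable" regression tests run on a less frequent (e.g. nightly) schedule.
SUITES = {"core": [], "stable": []}

def suite(name):
    """Decorator that registers a test function in the named tier."""
    def register(fn):
        SUITES[name].append(fn)
        return fn
    return register

def apply_discount(price, rate):   # trivial stand-in for application code
    return price * (1 - rate)

@suite("core")      # actively developed feature: run on every build
def test_checkout_applies_discount():
    assert apply_discount(100, 0.1) == 90

@suite("stable")    # stable, low-churn feature: nightly regression run only
def test_legacy_report_totals():
    assert sum([1, 2, 3]) == 6

def run(tier):
    """Run every test in a tier; return how many were executed."""
    for test in SUITES[tier]:
        test()
    return len(SUITES[tier])
```

The build server would then call the equivalent of `run("core")` on each commit and `run("stable")` overnight, keeping commit-time feedback fast without abandoning the regression safety net.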
They Provide Feedback On Progress and Living Documentation
In addition, when automated tests are used in the context of BDD practices, they can also give real-time feedback on progress, and can be used to document what the application does.
The Longer the Project, the More Valuable the Tests
Automated tests are designed to be run a large number of times. And since automated tests provide value each time they are run, the total value they provide keeps increasing for as long as the project goes on.
A logical consequence of this is that the earlier you start writing automated tests, the more value they will provide.
Automated Tests Are Not Free
Automated tests also have a cost, primarily in the time they take to write and to maintain, but also in other areas. These costs need to be factored in when you decide what tests to automate, and in what order. More importantly, these costs need to be minimised if you want to get the most value out of your automated test suite.
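A back-of-the-envelope calculation makes this trade-off concrete. The figures below are entirely made up for illustration; the point is the shape of the calculation, not the numbers:

```python
# An automated test pays off once the cumulative manual-testing time it
# saves exceeds the time spent writing and maintaining it.

def runs_to_break_even(write_cost, maintenance_per_run, manual_time_per_run):
    """Number of runs after which time saved exceeds time invested.

    All arguments are in the same time unit (e.g. minutes).
    Returns None if the test never pays for itself.
    """
    saved_per_run = manual_time_per_run - maintenance_per_run
    if saved_per_run <= 0:
        return None  # maintenance eats all the savings
    runs = 0
    balance = -write_cost          # start in the red by the writing cost
    while balance < 0:
        balance += saved_per_run   # each run claws back some time
        runs += 1
    return runs

# Hypothetical figures, in minutes: 120 to write, 2 of maintenance per
# run on average, replacing a 15-minute manual check.
# -> breaks even after 10 runs.
```

The same arithmetic also shows why flaky or hard-to-maintain tests are so damaging: as the per-run maintenance cost approaches the manual time saved, the break-even point recedes towards infinity.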
Poor Design Has a Cost
Maintenance costs usually take the form of changes you need to make to a test to cater for application updates. If the test suite is not very well designed, maintenance often also includes changes you need to make to existing tests or test components when you add a new one.
Additionally, if the test framework is not well designed initially, adding new tests can become progressively harder, as adding a new test inevitably involves modifying (and re-testing) components used for existing tests. In extreme cases, this can even outweigh the value provided by the test suite, making the testing efforts unsustainable.
Some applications are easier to test than others, and this too has an impact on the cost of the automated tests. Teams where the developers and testers collaborate closely to ensure that the application is easily testable observe that writing automated tests becomes a great deal easier.
Flaky Tests Have a Cost
But there are other costs. If the tests (or the application) are unreliable or "flaky" (or even if they are perceived as such), failures take time to troubleshoot to determine whether they are caused by a genuine regression or by an issue with the test code. This introduces a costly manual step that wastes developers' and testers' time and erodes the time savings the automated tests are supposed to bring.
Flaky tests also reduce confidence in the automated test suite, which leads to more manual testing and further undermines the savings the suite should deliver.
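A common source of flakiness is hidden nondeterminism, for example a test that implicitly depends on the real system clock. One remedy is to inject that dependency so tests can control it. A minimal sketch (the function names are illustrative, not from any real codebase):

```python
import datetime

# Flaky version: the outcome depends on when the suite happens to run.
def is_business_hours_flaky():
    now = datetime.datetime.now()
    return 9 <= now.hour < 17

# Deterministic version: the clock is passed in, so the test controls it.
def is_business_hours(now):
    return 9 <= now.hour < 17

def test_inside_business_hours():
    assert is_business_hours(datetime.datetime(2024, 1, 15, 10, 30))

def test_outside_business_hours():
    assert not is_business_hours(datetime.datetime(2024, 1, 15, 22, 0))
```

The same injection technique applies to other nondeterministic inputs such as random number generators, network calls, and filesystem state.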
Slow Feedback Has a Cost
If the tests are not designed to run quickly, then increasingly slow feedback can also be a cost. As the test suite gets bigger, and takes longer to run, the tests take longer to provide feedback. The slower the feedback, the less useful it is to developers, and the more time it will take to address issues raised by the automated tests.
Not All Tests Are Created Equal
The calculation of value based on savings in manual testing time is useful and intuitive. However, it is based on value in terms of cost savings, not in terms of added value. We should also consider the value of the reduction of risk and the increased confidence that the application is fit for purpose.
What's a Team To Do?
What can we do to ensure that our test suite doesn't end up costing more to maintain than it saves? How can we ensure that our test automation efforts are not wasted in areas that will provide little return on our investment?
Build SOLID Foundations
One of the most important aspects of any testing framework, and one that we too often see neglected, is the foundations. The choice of tools is important, as is the choice of appropriate patterns and conventions. Well-written test frameworks follow all of the normal rules for good code design, for example:
- They respect fundamental design principles such as DRY ("Don't Repeat Yourself"), SRP (Single Responsibility Principle) and OCP (Open/Closed Principle).
- They unit test non-trivial framework or infrastructure test code. It may sound odd to write tests to test your test code, but it saves a huge amount of time troubleshooting flaky tests in the long run.
- They are regularly maintained: Just like application code, automated test suites benefit from regular refactoring to reduce technical debt and ensure consistency and maintainability.
Automated tests should be designed and implemented with the same level of quality as production code, if not higher.
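As a small illustration of DRY and SRP applied to test code: instead of every test repeating the same login steps inline (so that a change to the login flow means editing dozens of tests), the steps are extracted into one component with a single responsibility. The class and method names below are hypothetical:

```python
class FakeDriver:
    """Stand-in for a real browser driver, just for this sketch."""
    def __init__(self):
        self.actions = []

    def fill(self, field, value):
        self.actions.append((field, value))

    def click(self, button):
        self.actions.append(("click", button))

class LoginPage:
    """Single responsibility: knows how to log a user in, nothing else.

    If the login flow changes, only this class changes (DRY), not every
    test that happens to need a logged-in user.
    """
    def __init__(self, driver):
        self.driver = driver

    def login_as(self, username, password):
        self.driver.fill("username", username)
        self.driver.fill("password", password)
        self.driver.click("login")

def test_dashboard_requires_login():
    driver = FakeDriver()
    LoginPage(driver).login_as("alice", "secret")
    assert ("click", "login") in driver.actions
```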
Know Your Team
Testing teams are typically made up of a mixture of individuals, with different specialities and varying levels of experience. Some may come from a development background and be well versed in software engineering design practices and patterns; others may come more from a pure QA background, and have an eye for the most important things to check in a particular feature.
Approaches such as the Journey Pattern, for example, are designed to allow testers with less development experience to build automated tests using highly reusable components that are written and maintained by more experienced test automation developers. Pair programming when writing automated tests is also a great way to teach less experienced testers and to encourage consistent development practices.
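The core idea of the Journey Pattern, an actor performing small, reusable, composable tasks, can be sketched as follows. This is not the API of any real framework; all the class names are illustrative:

```python
# Experienced automators write and maintain the Task classes once;
# less experienced testers compose them into readable scenarios.

class Actor:
    def __init__(self, name):
        self.name = name
        self.journal = []          # record of what the actor has done

    def attempts_to(self, *tasks):
        for task in tasks:
            task.perform_as(self)

class AddItemToCart:
    """Reusable task, written once and shared across many tests."""
    def __init__(self, item):
        self.item = item

    def perform_as(self, actor):
        actor.journal.append(f"{actor.name} added {self.item} to the cart")

class Checkout:
    def perform_as(self, actor):
        actor.journal.append(f"{actor.name} checked out")

# A tester composes a scenario from the ready-made tasks:
def test_buying_a_book():
    dana = Actor("Dana")
    dana.attempts_to(AddItemToCart("book"), Checkout())
    assert dana.journal[-1] == "Dana checked out"
```

Because the scenario reads as a sequence of business-level tasks, it stays legible to the whole team, and changes to how a task is performed are localised in one class.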
Focus On the Value
It is hard to automate everything. If you need to choose, prioritise tests that will reduce risk and save manual testing time.
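One simple way to prioritise is to score each candidate test by the risk it reduces and the manual testing time it saves, then automate the highest-scoring candidates first. The weights and figures below are entirely made up; the approach is the point:

```python
# Hypothetical prioritisation sketch: score = risk x run frequency x
# manual minutes saved per run, highest score first.

def automation_priority(candidates):
    """Sort candidate tests by expected return on automation effort."""
    return sorted(
        candidates,
        key=lambda c: c["risk"] * c["runs_per_month"] * c["manual_minutes"],
        reverse=True,
    )

candidates = [
    {"name": "payment flow", "risk": 5, "runs_per_month": 20, "manual_minutes": 30},
    {"name": "about page",   "risk": 1, "runs_per_month": 20, "manual_minutes": 2},
    {"name": "user signup",  "risk": 4, "runs_per_month": 20, "manual_minutes": 15},
]
# The payment flow (score 3000) outranks signup (1200) and the about page (40).
```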
Conclusion — Consider Automated Tests An Investment
Automated tests should be seen as an investment, with the aim of reducing risk and accelerating delivery. To get the most return out of your investment, make sure your test suite is well designed, well implemented and that it focuses on testing high-value features and high-risk areas of the application first.
Published at DZone with permission of John Ferguson Smart, DZone MVB. See the original article here.