The following was contributed to DZone by Lubos Parobek, VP of Products, Sauce Labs.
To address the pressure to release quality software quickly, organizations continue to adopt agile development techniques such as continuous integration and continuous delivery (CI/CD). Meanwhile, the traditional waterfall approach to building software is quickly becoming dated.
In their groundbreaking book, Continuous Delivery, Jez Humble and David Farley define a typical CI/CD deployment pipeline as “an automated implementation of your application’s build, deploy, test, and release process.” They go on to recommend that as much of this pipeline as possible be automated.
An integral part of a successful CI/CD pipeline is automated (versus manual) testing. The execution of pre-scripted tests on a web or mobile app saves considerable time, plus having test data accessible in detailed reports is valuable to development teams who can use this information to quickly identify issues. In short, automated testing is key to achieving a true CI/CD deployment.
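To make this concrete, here is a minimal sketch of the kind of pre-scripted check a CI server can run on every commit. The `checkout_total` function and its values are hypothetical, not from the article; the point is simply that a plain pytest-style test function automates a check that would otherwise be repeated by hand.

```python
def checkout_total(prices, tax_rate):
    """Return the order total including tax (hypothetical app code)."""
    subtotal = sum(prices)
    return round(subtotal * (1 + tax_rate), 2)

def test_checkout_total():
    # A CI server can execute this automatically on every build
    # (e.g. by invoking the test runner), failing the pipeline
    # whenever the assertion no longer holds.
    assert checkout_total([10.00, 5.50], 0.08) == 16.74
```

Because the test is code, its results can be collected into the detailed reports mentioned above rather than living in a tester's notes.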
But how do you know when to make the step towards automation? I recommend asking yourself the following ten questions:
Is the test executed more than once?
Is the test run on a regular basis (i.e. often reused, such as part of regression or build testing)?
Is the test impossible or prohibitively expensive to perform manually, as with concurrency, soak/endurance, performance, or memory leak detection testing?
Are there timing-critical components that are a must to automate?
Does the test cover the most complex area (often the most error-prone area)?
Does the test require many data combinations using the same test steps (i.e. multiple data inputs for the same feature)?
Are the expected results constant (i.e. do not change or vary with each test)? Even if the results vary, is there a percentage tolerance that could be measured as expected results?
Is the test very time-consuming, such as expected results analysis of hundreds of outputs?
Is the test run on a stable application (i.e. the features of the application are not in constant flux)?
Does the test need to be verified on multiple software and hardware configurations?
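Questions six and seven above lend themselves to a short illustration: one set of test steps driven by many data combinations, with expected results checked within a percentage tolerance rather than for exact equality. The `convert` function and the test data are invented for this sketch and do not come from the article.

```python
import math

def convert(amount, rate):
    """Hypothetical function under test: a simple currency conversion."""
    return round(amount * rate, 2)

# Many data combinations exercised through the same test steps
# (question six): each tuple is (amount, rate, expected_result).
CASES = [
    (100.0, 0.85, 85.00),
    (19.99, 1.10, 21.99),
    (0.0,   0.85, 0.00),
]

def test_convert_within_tolerance():
    for amount, rate, expected in CASES:
        # A one-percent relative tolerance stands in for "expected
        # results that vary within a measurable percentage"
        # (question seven).
        assert math.isclose(convert(amount, rate), expected, rel_tol=0.01)
```

Most test frameworks offer first-class support for this pattern (for example, parameterized test cases), so adding a new data combination is a one-line change rather than a new manual test run.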
If you answered “yes” to most of these questions, then automation is likely right for you. Naturally, any task that is repeated often and is labor-intensive is a strong candidate for automation. The end result is the ability to get the most from your CI/CD workflows and to ship higher-quality software faster.
But despite its benefits, it’s important to note the cases where automated testing doesn’t make sense. I’ll call them out here:
Usability tests, which are conducted to discover how easy it is for users to accomplish their goals with your software. There are several different approaches to usability testing, from contextual enquiry to sitting users down in front of your application and filming them performing common tasks. Since usability, consistency of look and feel, and so on are difficult things to verify in automated tests, they are ideal for manual testing.
One-off tests that are run only once or infrequently are not high-payoff tests to automate and are better done manually. When a one-off test keeps recurring, it then makes sense to consider automating it too. A good practice is to set a threshold for how much one-off testing your organization is willing to keep manual.
The right technical approach involves knowing which tests to automate and which to continue manually. Regardless, organizations that invest in automated testing efforts are poised to get the most out of their CI/CD workflows.
The result is shipping higher-quality software faster, a huge competitive advantage in an increasingly competitive business environment. In short, the move to automation is well worth considering.
About the Author
At Sauce Labs, Lubos Parobek leads strategy and development of Sauce Labs’ web and mobile application testing platform as VP of product. His previous experience includes leadership positions at organizations including Dell KACE, Sybase iAnywhere, AvantGo and 3Com. Parobek holds a Masters in Business Administration from the University of California, Berkeley.