The ROI of Automated Testing
While the ROI is not a definitive number, the time saved and the reduced number of defects justify any expense or inconvenience.
To gather insights into the current and future state of automated testing, we asked 31 executives from 27 companies, "What is the real-world ROI of automated testing?" Here's what they told us:
- Automated testing is obviously faster than manual testing and catches errors more accurately. We saved a million dollars per year because developers no longer have to hunt for errors: the extra information tells them what the error is and where, taking diagnosis from days to minutes for a 20x savings. This matters because it means faster releases, ahead of the competition.
- We project and track ROI with each engagement; ROI is the value driver. Automation gives you the ability to release faster or to reduce defects in production, so track both the cost and the savings.
- Automated testing easily yields 20 to 40x cycle-time improvements by spreading the work of builds and testing across different machines. Most companies have already worked a little on that, putting correctness first and performance second, but there is still a lot to be done. The JUnit project, for example, is working on running Java unit tests more efficiently.
- Comparing before and after adopting a test grid, teams move 10x faster on our platform; we quantify this with a business-value analysis framework. The biggest area of improvement is typically maintenance, where automation reduces the manpower required.
- Getting started with automated testing costs time and effort. Selecting the right solution and making sure it fits the needs of the user or integrates with already existing tools and processes is key. For this reason, it may take some time until you see the benefit. Generally, our experience has been that after about the third development iteration or sprint, customers begin to see a return on their investment when comparing the time required to execute manual vs. automated tests. The more tests that you are able to automate, the faster the feedback about the quality of the system under test will be. And of course, there is nothing more expensive in QA than a quality issue that occurs on the customer side, which could have been identified much earlier with the help of automation.
- A UK case study with Nationwide shows five million pounds per year saved from an outcome-based contract with IBM, and IBM continues to grow that model. Cerner cares about the ratio of developers to testers and moved from 6:1 to 15:1. The Financial Times was concerned with time to market and took the release cycle from two weeks to three days; by automating the test cycle, they reduced it from 1.5 weeks to less than a day.
- Anyone cloud-native is seeing pretty huge savings from unexpected places. Automate all the test execution you can. Once you make the upfront investment, the time saved by unit tests is tremendous, and developers writing their own unit tests shortens the feedback loop. Practicing 100% test-driven development, writing tests ahead of code, maximizes ROI by not losing effort on what's not central to the business.
- Do a value assessment for each customer, broken into the following: 1) testing time, where automation saves 50 to 90%; 2) the delivery cycle, since shortening the test cycle shortens the delivery cycle, including developer time (about 10% of overall R&D); and 3) for online retail, how much downtime (time when a customer is unable to complete a transaction) you are able to prevent, typically 50 to 70%. Less tangible is what happens to brand reputation and brand assets. Increasing coverage, such as of the UI, is ever more important.
- We have no hard numbers. Frankly, efficiency is nice, but what matters most is delivering faster without screwing up, and you achieve that with automation. Automate to exercise the build-and-deploy process and to support the QA team; everything speeds up the development process.
- There was no global study, but assessing the impact before and after shows 97% of the time to import, design, and validate taken out, going from weeks to hours. The time dimension is the most important aspect, but automation also de-risks the process by being 100% accurate. We reduce the time required from weeks to hours with greater accuracy.
- You can calculate the ROI of manual versus automated testing. More complicated is calculating how much time is lost on the development side by waiting until late in the SDLC: run tests more frequently and you learn about problems sooner, when they are easier to fix. NIST's 2002 study quantified how the cost of fixing a defect grows the later it is found.
- We're working on the data now. Conservatively speaking, if you find a defect, the cost to fix it is 4 to 10x higher in production versus early in the SDLC.
- The biggest ROI of automated security testing is the reduction of risk and remediation costs. There was a 164% increase [1] in breached records, affecting almost 2 billion people, in the first half of 2017, and according to World Economic Forum [2] estimates, this increase in cybercrime is costing the global economy roughly $445 billion a year. The 2017 State of Application Security white paper by the SANS Institute [3] states that over 43% of organizations are pushing out application changes weekly, daily, or even continuously. The white paper's data also says that over 60% of breaches were [partially or entirely] attributed to public-facing web applications. Organizations that implement continuous automated security testing as part of a comprehensive application security strategy, covering all the inflection points of their SDLC, boost the efficiency and effectiveness of their application security testing, maximize its ROI, and better protect their applications. In addition, the visibility into risk posture and the remediation guidance offered by continuous automated security testing enable organizations to remediate flaws early in the SDLC and reduce remediation costs. References: [1] https://www.gemalto.com/press/Pages/First-Half-2017-Breach-Level-Index-Report-Identity-Theft-and-Poor-Internal-Security-Practices-Take-a-Toll.aspx [2] https://www.weforum.org/agenda/2017/12/how-to-civilize-the-dark-web-economy-bef5311f-704d-467b-b005-6aa80a40f46b [3] https://www.sans.org/reading-room/whitepapers/application/2017-state-application-security-balancing-speed-risk-38100
- It's hard to measure, but we're working with limited resources and are able to do more than organizations 10x our size. We keep moving forward without breaking existing code. By implementing a test-driven approach with clear requirements and guidelines, developers continuously improve the product without getting bogged down in bugs and fixes.
- Everyone is different, so we define success metrics early on: 1) the number of tests automated, which translates into time and cost saved by automation; multiply by coverage across environments, plug in the values, and see the savings. 2) Defect leakage, which is not typically measured: how many defects is automated testing finding and fixing each night? Reuse automated tests as the application keeps expanding. Automate intelligently.
- Automated testing is far superior to manual. The more complex the part that needs to be checked, the more likely a problem is to be caught by automated testing. With 3,000+ enterprise customers and many different OS, file, and environment types, the combinations take complexity to the next level. The risk of shipping a bug and irritating a customer is simply not worth taking.
- ROIs vary. In modern organizations, Global 2000 companies are mobile-first, browsers update constantly, and the risk of app failure is quite high. A model-based test automation approach saves 90% of the overhead associated with change, because you deal with a model rather than code. With model-based testing, you don't need access to the application to model the changes, which lets you shift the work of changing test scripts earlier without connecting to all the variants of the end-to-end system, and you can run in parallel. We support more than 120 different application types (e.g., SAP, SFDC), whereas Selenium only works on web browsers. We still need to find solutions for other platforms.
- For both us and our customers: 1) shorter time to market. Adding major new features often destabilizes a product, so project plans typically make sure there is a long period of testing and stabilization after major features are expected to be completed. We have such a comprehensive test suite that in many cases we are comfortable with shipping a new version right after major features have been added, so we get those features to market quicker, and don't have the problem of postponing features because they can't be added so close to the target release date. 2) lower maintenance costs. Often one bug fix introduces another bug. Our test suites catch this kind of thing, so we avoid extra cycles. 3) retaining customers. Customers can be very upset when regressions are introduced or features they rely on break. Test suites avoid this problem, which helps to retain customers.
- For us, automated testing has had a direct correlation with customer satisfaction. The product is simply running better, and customers are happier. So, delighting our customers is our biggest and most important ROI :) In the past, launching a new on-premises release of SysAid required one month of testing and work divided between all the developers and QA. It was total code freeze in order to create a stable release. Today, with automated testing, it takes one week, one developer, and one QA to create an on-premises stable release. The remaining staff is busy moving ahead and no time is lost - super ROI!
- For an engineer, ROI is related to confidence: which test gives us the biggest bang for the buck, the most confidence for the least amount of work? Good testing also reduces technical debt when developing complex software. With thorough unit testing, you can make changes to the code and deploy with confidence without needing to run the entire system; confidence in an individual unit lets you iterate on that unit.
- The ROI is two-fold: 100% test coverage and a dramatic reduction in test-case creation time. A study with Forrester revealed cost, efficiency, and test-coverage savings. Once test cases and user stories were defined, developer efficiency and productivity improved. We gained the ability to recreate a defect and provision resources, and enabled service virtualization for development, build, and test; we could simulate any system, such as a credit card service or a mainframe. This removed constraints for dev and test by providing synthetic test data on demand to emulate production-like test environments without compromising PII; it is compliant with GDPR and helped us shift performance testing left. Our tools are heavily based on proprietary technology but offer an easy solution based on open source, providing test coverage at speed for web, mobile, microservices, and APIs with a private-cloud feature, complementary with open source. We orchestrated it with CDD (Continuous Delivery Director) for end-to-end testing and management, and analytics show where the bottlenecks exist.
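One response above observes that customers typically begin to see a return around the third development iteration when comparing manual versus automated execution time. That break-even dynamic can be sketched as a simple cost model; the function and all hour figures below are hypothetical illustrations, not data from any of the companies quoted.

```python
# Break-even model for test automation: a fixed upfront build cost plus a
# small per-cycle run/maintenance cost, versus a larger per-cycle manual
# cost. All numbers are hypothetical.

def break_even_cycle(setup_hours, auto_hours_per_cycle, manual_hours_per_cycle):
    """Return the first cycle at which cumulative automated cost drops
    below cumulative manual cost, or None if automation never pays off."""
    if auto_hours_per_cycle >= manual_hours_per_cycle:
        return None
    cycle = 1
    while setup_hours + cycle * auto_hours_per_cycle >= cycle * manual_hours_per_cycle:
        cycle += 1
    return cycle

# 80 hours to build the suite, 5 hours per cycle to run and maintain it,
# versus 40 hours per cycle of manual regression testing:
print(break_even_cycle(80, 5, 40))  # -> 3, i.e. around the third sprint
```

The shape of the model explains the common experience: the upfront cost dominates early cycles, then the per-cycle gap compounds in automation's favor.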
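The per-customer value assessment described above (testing time saved, a shortened delivery cycle, and prevented downtime) reduces to three savings buckets. A minimal sketch, with a hypothetical function and made-up dollar inputs; only the percentage ranges come from the response:

```python
# Three savings buckets from the value assessment: direct testing time
# (50-90% saved), shortened delivery cycle (developer time, ~10% of R&D),
# and prevented downtime (50-70%). All monetary inputs are invented.

def value_assessment(manual_test_cost, test_saving,
                     rd_cost, delivery_share, cycle_saving,
                     downtime_cost, downtime_prevented):
    """Return (testing, delivery, downtime) savings in currency units."""
    testing = round(manual_test_cost * test_saving, 2)
    delivery = round(rd_cost * delivery_share * cycle_saving, 2)
    downtime = round(downtime_cost * downtime_prevented, 2)
    return testing, delivery, downtime

# $200k/yr manual testing at 70% savings; $5M R&D with 10% tied to the
# delivery cycle, shortened by 50%; $300k/yr downtime cut by 60%:
print(value_assessment(200_000, 0.7, 5_000_000, 0.10, 0.5, 300_000, 0.6))
# -> (140000.0, 250000.0, 180000.0)
```

As the response notes, brand reputation is the bucket this kind of arithmetic cannot capture.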
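Two responses above put the cost of fixing a defect in production at 4 to 10x the cost of fixing it early in the SDLC. A small model of what that multiplier does to a remediation budget; the defect counts and unit costs are hypothetical, and 4x is deliberately the conservative end of the quoted range:

```python
# Escaped-defect cost model: defects caught early cost the base amount to
# fix; defects that escape to production cost a multiple of that.
# Counts and unit costs below are illustrative only.

def remediation_cost(defects, early_fix_cost, escape_rate, prod_multiplier=4):
    """Total fix cost when a fraction of defects escapes to production."""
    escaped = defects * escape_rate
    caught_early = defects - escaped
    return caught_early * early_fix_cost + escaped * early_fix_cost * prod_multiplier

# 100 defects at $500 each: catching all of them early, versus letting
# 25% escape to production at the conservative 4x multiplier.
print(remediation_cost(100, 500, 0.0))   # -> 50000.0
print(remediation_cost(100, 500, 0.25))  # -> 87500.0
```

Even at the conservative multiplier, a modest escape rate nearly doubles the bill, which is the economic case for shifting testing left.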
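The first success metric described above (tests automated, translated into time saved, multiplied by coverage across environments) is also simple arithmetic once values are plugged in. A sketch with hypothetical numbers:

```python
# Savings model: tests automated x manual minutes each test replaces
# x runs x environments, expressed in hours. All inputs are illustrative.

def automation_savings_hours(tests, manual_minutes_per_test, runs, environments):
    return tests * manual_minutes_per_test * runs * environments / 60

# 400 automated tests that each replace 3 minutes of manual effort,
# run once nightly across 5 environments:
print(automation_savings_hours(400, 3, 1, 5))  # -> 100.0 hours per night
```

The environments factor is what makes the number grow so quickly: the same suite rerun across browsers, devices, or OS versions multiplies the saved manual effort.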
- Gil Sever, CEO and James Lamberti, Chief Marketing Officer, Applitools
- Shailesh Rao, COO, and Kalpesh Doshi, Senior Product Manager, BrowserStack
- Aruna Ravichandran, V.P. DevOps Products and Solutions Marketing, CA Technologies
- Pete Chestna, Director of Developer Engagement, CA Veracode
- Julian Dunn, Director of Product Marketing, Chef
- Isa Vilacides, Quality Engineering Manager, CloudBees
- Anders Wallgren, CTO, Electric Cloud
- Kevin Fealey, Senior Manager Application Security, EY Cybersecurity
- Hameetha Ahamed, Quality Assurance Manager, and Amar Kanagaraj, CMO, FileCloud
- Charles Kendrick, CTO, Isomorphic Software
- Adam Zimman, VP Product, LaunchDarkly
- Jon Dahl, CEO and Co-founder, and Matt Ward, Senior Engineer, Mux
- Tom Joyce, CEO, Pensa
- Roi Carmel, Chief Marketing & Corporate Strategy Officer, Perfecto Mobile
- Amit Bareket, CEO and Co-founder, Perimeter 81
- Jeff Keyes, Director of Product Marketing, and Bob Davis, Chief Marketing Officer, Plutora
- Christoph Preschern, Managing Director, Ranorex
- Derek Choy, CIO, Rainforest QA
- Lubos Parobek, Vice President of Product, Sauce Labs
- Walter O'Brien, CEO and Founder, Scorpion Computer Services
- Dr. Scott Clark, CEO and Co-founder, SigOpt
- Prashant Mohan, Product Manager, SmartBear
- Sarah Lahav, CEO, SysAid Technologies
- Antony Edwards, CTO, Eggplant
- Wayne Ariola, CMO, Tricentis
- Eric Sheridan, Chief Scientist, WhiteHat Security
- Roman Shaposhnik, Co-founder and V.P. Product and Strategy, Zededa
Opinions expressed by DZone contributors are their own.