What Are Some Additional Considerations for Automated Testing?


The ability to write tests, an understanding of automation, good coding skills, and broad vision are some of the concerns executives raised around automated testing.


To gather insights on the state of automated testing today, we spoke with twenty executives who are familiar with automated testing and asked, "What are some additional considerations to automated testing that we haven't addressed?"

While there were several, getting started with automation and the cost of automation and its ROI came up most often.


Getting Started With Automation

  • Where is your business going? What does your roadmap look like? Would you use more automation if it were already built in, or do politics prevent it?
  • Among test engineers and manual testers, there’s a stigma that automated testing is difficult to do, yet there are plenty of resources available for manual testers to get into automation. Record and playback are not as effective with web apps; manual testers are better off learning about web technology and automation itself.
  • I think a big question is “who bears the responsibility for automated testing?” In short, everyone: developers, QA, and DevOps, as well as product owners and stakeholders. Everyone should be involved in automation because the goal is to cover the bulk of your testing. We are aiming for 100% automation, which means that any manual testing is only exploratory.
  • If we ask people about quality and automated testing, are they considering performance, scalability, and security? Non-functional aspects of code contribute to the quality of the code. Do people know how to write automated tests?
  • It’s important to consider how often software developers are involved in the automation strategy for a given project. Just as test engineers are interested in how the product works, software developers should be equally interested in how their code is protected once it has been checked in. Often, their insight is invaluable because they know how the code operates. Relaying that to test automation engineers to create optimized, simple-to-troubleshoot, useful test cases can be extremely beneficial. Our software release model is an example of how automation testing has helped our environment. We used to have multiple branches, merge issues, automation discrepancies, a large volume of manual testing, and an environment that made it difficult to churn out solid-quality hardware and software. With the addition of the AutoHeal / Bisect environments and a true commitment to automation across the board, we have reinvented our release process and have become fundamentally more agile in both our business and our technical abilities.
  • Change the QA mindset. Get away from traditional ways of testing. Apply the standard core developer tool stack and technologies to accomplish a faster testing process. Automated testing and Agile go hand in hand.

The Cost of Automation

  • Think about the entire cost of automation: how much you must invest in people, environments, tools, and maintenance. How do you successfully estimate the cost of automation to reduce surprises down the road? Talk to people who have automated from the ground up, and understand the full cost and timelines of automation; they can be 2X or higher than original estimates.
  • Automated testing has a lot of use in augmenting manual testing, allowing the tester to engage in additional activities, such as fuzzing inputs, load testing, and examining possible race conditions. Looking at how your investment can help improve manual testing, not just replace some repetitive effort, is an often-overlooked avenue. It’s also important to quantify the business value automated testing introduces and the overall optimizations in business flows that result from it. Defining the business value of automated testing shifts the discussion from justification to enablement.


  • What do I do if I don’t have any testing, if everything is manual, or if there are no unit tests? Start with a small number of key tests; the goal is to ship with known quality. Figure out where the API boundaries are and start there. There’s a long tail on the benefit of test coverage: going from zero to 10 or 15% coverage already shows a significant benefit. Write one test and go from there.
  • List all the tools in the ecosystem to help the developers. Tools that are used for X, Y, and Z.
  • Listen for demand around the future of automation. Do people want it? When is it likely to arrive?
  • A message to developers: testing is a developer problem, not just a tester problem. Shift testing left and test early, while the code is still in development.
  • Principally two things I’ve covered above: the terrible state of visual tools, and whether and when it’s appropriate to have tests that are not stored as source code.
  • 1) We’ve been debating the value of having a mock service. If you’re using a third party, it’s not easy to run automated tests. Is using a mock service the right approach or not? You may, or may not, catch all the scenarios; there are pros and cons based on deployment. 2) CD and automation are always used in a SaaS environment. CD versus automation for CD can be tricky because customers may be three or four versions behind, which makes it difficult to automate one-off scenarios.
  • Just one thing: who is going to win? Is it going to be open source? Will it be Selenium? Or will it be a commercial tool such as Tricentis? I think there will be a co-existence. When people start with UI automation, they often do so through a small Agile team, and they may want to begin their endeavors with Selenium. If they have no shortage of super-skilled developers, they might stick with Selenium, but they will end up programming something that tries to come up with concepts similar to those in Tricentis Tosca. In the enterprise, you are going to reach a tipping point where your Selenium projects are no longer feasible or maintainable and the TCO will be very high. Then it will be time to switch over to an enterprise tool.
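The "write one test at an API boundary and go from there" advice above can be sketched in a few lines. This is a minimal illustration, not anyone's actual test suite: `create_order` is a hypothetical API-boundary function, and the tests show the kind of first checks a team with no automation might start with.

```python
# Hypothetical API boundary: the public function other code calls.
def create_order(items):
    """Return an order total in cents; reject empty carts."""
    if not items:
        raise ValueError("order must contain at least one item")
    return sum(price_cents * qty for price_cents, qty in items)

# The "write one test" step: a single check at the boundary.
def test_create_order_totals_the_cart():
    assert create_order([(999, 2), (500, 1)]) == 2498

# A second test grows coverage toward that first 10-15%.
def test_create_order_rejects_empty_cart():
    try:
        create_order([])
    except ValueError:
        pass  # expected: empty carts are invalid
    else:
        raise AssertionError("expected ValueError for empty cart")
```

Starting at the boundary rather than at internal helpers means the tests keep passing even as the implementation behind the boundary is refactored, which is what makes the early coverage worth its long tail.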
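The mock-service debate above can be illustrated with Python's standard `unittest.mock`. The names here (`CheckoutService`, `charge`) are hypothetical stand-ins for a real third-party client; the sketch shows how a mock lets the suite run without network access, with the stated caveat that a mock may not catch all real-world scenarios.

```python
from unittest.mock import Mock

# Code under test, written to accept the third-party client as a dependency
# so a mock can be swapped in (names are illustrative, not a real API).
class CheckoutService:
    def __init__(self, payment_client):
        self.payment_client = payment_client

    def pay(self, amount_cents):
        response = self.payment_client.charge(amount_cents)
        return response["status"] == "succeeded"

def test_pay_reports_success():
    # Stand in for the third party: no network, fully deterministic.
    fake_client = Mock()
    fake_client.charge.return_value = {"status": "succeeded"}

    service = CheckoutService(fake_client)
    assert service.pay(1000) is True
    # Verify the interaction the real service would see.
    fake_client.charge.assert_called_once_with(1000)
```

The trade-off from the quote applies directly: this test pins down the interaction contract, but only a periodic run against the real third-party sandbox can confirm the mocked responses still match reality.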

Is there anything we missed in this series of articles on automated testing?

Here’s who we talked to:

  • Murali Palanisamy, EVP and Chief Product Officer, AppViewX
  • Yann Guernion, Director of Product Marketing, Automic
  • Eric Montagne, Technology PM, Barclaycard
  • Greg Luciano, Director of Services and Amit Pal, QA Manager, Built.io
  • Donovan Greeff, Head of QA, Currencycloud
  • Shahin Pirooz, CTO, DataEndure
  • Luke Gordon, Senior Solutions Engineer and Daniel Slatton, QA Manager, Dialexa
  • Anders Wallgren, CTO, ElectricCloud
  • Charles Kendrick, CTO, Isomorphic
  • Bryan Walsh, Principal Engineer, NetApp
  • Derek Choy, V.P. of Engineering, Rainforest QA
  • Subu Baskaran, Senior Product Manager, Sencha
  • Ryan Lloyd, V.P. Products, Testing and Development and Greg Lord, Director of Product Marketing, SmartBear
  • Christopher Dean, CEO, Swrve
  • Wolfgang Platz, Founder and Chief Product Officer, Tricentis
  • Pete Chestna, Director of Developer Engagement, Veracode
  • Harry Smith, Technology Evangelist, Zerto

Opinions expressed by DZone contributors are their own.
