Roadblocks to Automated Testing
According to industry experts, a corporate culture that holds onto manual testing is the greatest obstacle to the advance of automated testing.
To gather insights for the current and future state of automated testing, we asked 31 executives from 27 companies, "What are the most common issues you see affecting automated testing?" Here's what they told us:
Corporate Culture
- Companies are still siloed, without clarity of expectations between developers and security. Functional and unit tests are often brittle; they need to be written so they can absorb changes without breaking. Write tests to be durable over time, understand why tests break, and determine what you need to do to make them more resilient (one common approach is sketched after this list).
- You first have to have a test infrastructure in place similar to ours, where you are catching regressions and able to notify developers appropriately. At that point, you need clear policies for what is done when regressions are detected: who is assigned to fix them, how fast they must be resolved versus completing other tasks, what happens to ambiguous regressions (is the code wrong or is the test wrong?), etc. We've seen a recurring type of dysfunction in several organizations: they've built an automated test system, but the noise from broken tests is drowning out the signal from the working tests, so everyone ignores the test system. That's worse than having no automated test infrastructure at all. You have to actively maintain both the tests and the people processes around them, or you end up with this particular dysfunction (one way to contain the noise is sketched after this list).
- 1) Legacy software and platforms: clients trying to be cloud-native but still interfacing with mainframes. 2) Engineering culture: QA is viewed as a second-class citizen, which is backward and wrong, and fosters a cowboy mentality on the development team. Tooling becomes important so you don't waste developer time.
- Companies that want to do the minimum testing required to get their certificate of insurance, so they have coverage if they have a breach.
- Bad communication between R&D and the automation team can be detrimental to a new automation process. People are people; even with the highest level of technology, it always comes down to the people. When we're all in sync, the technology (the automation) will succeed. Another issue is the automation infrastructure: it must be flexible enough to absorb any changes on the dev side of your product, keeping maintenance to a minimum.
- Getting companies to internalize and define the business metrics they want to optimize. It's more about outcomes than outputs, and about tying testing to business value.
- Multiple independent, siloed approaches to testing within a single organization. Unicorns have highly distributed, isolated components; when the impact of a change is zero, you don't have a silo problem. Compliance and risk around changing legacy systems are challenging.
- Scanners don't provide a measure of coverage: there's no guarantee of coverage, and they don't expose actual testing coverage. Even as DevOps brings people together, we still see security and test in silos; we need to get them working together.
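To make the "durable, resilient tests" point above concrete, here is a minimal sketch of the page-object pattern using Python and Selenium. Centralizing locators in one class means a UI change breaks one file rather than every test; the LoginPage class, its selectors, and the URL are hypothetical, not any respondent's product.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

class LoginPage:
    """Single place to update when the UI changes."""
    USERNAME = (By.ID, "username")   # prefer stable IDs over brittle XPath
    PASSWORD = (By.ID, "password")
    SUBMIT = (By.CSS_SELECTOR, "button[type='submit']")

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, user, password):
        self.driver.find_element(*self.USERNAME).send_keys(user)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.SUBMIT).click()

def test_login():
    driver = webdriver.Chrome()
    try:
        driver.get("https://example.test/login")     # placeholder URL
        LoginPage(driver).log_in("alice", "s3cret")  # hypothetical credentials
        assert "Dashboard" in driver.title           # the test never touches selectors
    finally:
        driver.quit()
```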
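And one hedged way to keep broken-test noise from drowning out the signal, per the dysfunction described above: quarantine known-flaky tests with pytest's built-in xfail marker so they still run and report, but stop failing the build while an owner fixes them. The ticket reference and test names are placeholders.

```python
import pytest

def compute_total(prices):
    return sum(prices)

# Quarantined tests keep running and reporting, but a failure here
# no longer turns the whole suite red while the fix is in flight.
quarantine = pytest.mark.xfail(
    reason="TICKET-123: flaky on slow CI nodes; owner: checkout team",
    strict=False,
)

@quarantine
def test_checkout_total_updates():
    assert compute_total([10, 5]) == 15  # hypothetical flaky assertion

def test_cart_math():
    # Healthy tests keep their full signal value.
    assert compute_total([10, 5]) == 15
```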
Manual Testing
- Tools are good these days. Where manual processes have been used, people need to be retrained; just like with DevOps, it's a cultural issue. Retraining is required, and turning people into programmers is something management avoids because it doesn't want to ruffle feathers. A lot of the other processes in the SDLC have been automated and accelerated, so QC shouldn't be the one slowing things down.
- Moving from manual to automated means learning how to write tests; manual testing is going to go away, so there's a training and education aspect. Then you have to take your tests and integrate them into the DevOps pipeline. Companies run tests but don't look at the results. Once teams have tests, they struggle with ones that are poorly written, not independent of each other, or timing-dependent, and then ignore or disable them. Anything that is timing- or environment-dependent can be problematic (see the explicit-wait sketch after this list).
- Old-style testers are not adapting to, or embracing, automated testing and AI. Technology coverage needs to keep up as websites get more dynamic and UIs become more graphic and intuitive, adding facial recognition and fingerprint login. Performance testing is key for UX. Use automation beyond test execution.
- Manual QA testing as a bottleneck.
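Where timing dependence is the culprit, a common fix is replacing fixed sleeps with explicit waits. A minimal sketch, assuming Selenium's Python bindings and a hypothetical results page:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://example.test/search")  # placeholder URL

# Fragile: time.sleep(5) is either too short (flaky) or too long (slow suite).
# Robust: poll up to 10 seconds, then proceed the moment the element appears.
results = WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.ID, "results"))  # hypothetical element id
)
assert results.is_displayed()
driver.quit()
```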
Other
- There was a time when people didn't believe in testing; that's no longer the case. Frontend testing used to be something few people did. Now we can do visual diffs and make changes to CSS with confidence, looking at the system as a whole and including screenshots in code reviews to test the whole stack (a minimal visual-diff sketch appears after this list).
- Shift left. Teams used to doing manual testing are moving to 100% automation, which demands more technical skills; still, it's a matter of days for a manual tester to learn the skills they need, since Selenium uses general-purpose languages people are already familiar with. During acceptance and development it's easy to build the test suite while building the application to ensure each feature is bug-free, with everything plugged into the CI/CD pipeline for deployment.
- Once we identify the value driver, we hit blockers that can span two or three families: 1) skill set, since getting this done (automation and programming) is difficult, so adjust expectations and plans; 2) stability of infrastructure, covering DevOps, tests (20% false negatives), and labs; and 3) understanding of digital coverage: what does it mean to test a real user environment in today's world, with different UX expectations? Help teams understand and automate. There is also the inability to get past the amount of noise the results create: given the number of tests and the amount of data being generated, you need smart analytics to zoom in quickly and see what's gone wrong.
- Keeping up with all of the changes to browsers and platforms and how to manage and use all of the data produced by the testing tools.
- There are two major issues affecting automated security testing. The first is the surfacing of inaccurate results: developers will simply ignore automated test results if those results prove to be extremely noisy. The second is the lack of meaningful integration within developer tooling. Security teams continue to record, manage, and disperse their security data through GRC systems, but developers do not speak GRC. They speak JIRA, TFS, Trello, etc. (a sketch of filing findings directly into JIRA follows this list).
- This is hard to say. Maybe more design patterns or standards for writing testable code; it's disconcerting to see that even modern software tool providers like Salesforce or big brands like Apple are not taking "design for testability" into consideration to make test automation easier (a small dependency-injection sketch also follows this list). A second thing is more education regarding software testing in universities and colleges. Universities tend to focus on software development rather than teaching testing skills, which makes it challenging to grow solid software testing people, especially in the field of automation.
- Technology advances rapidly every day. You need to follow the updates and move to the new versions of testing tools and code; don't push the updates off to a later stage.
- 1) Dynamic applications with new frameworks, technologies, and ways to write applications. Since the underlying applications change, tests become flaky; we built our product to address this problem. 2) You need to be technical to automate, and writing code is not always easy; we solve that with codeless automation so non-technical team members can get up and running. 3) Achieving high test coverage across web, mobile, and desktop applications. You need several tools, and the tools need to work together; identify which environments are important.
- Tools need to be an integral part of the DevOps toolchain. Teams using many tools for the delivery process (Jenkins, JIRA, Slack, GitHub) want to see AI testing as part of that: one comprehensive, automated, visible delivery process that shares the feedback from the different tools; we have plug-ins for each of these. Another issue is the ability to share feedback across the organization, from unit, component, integration, and end-to-end testing through deployment; teams don't have a means to share what they find. Collaboration is very important, and digital transformation executives also want to be involved in the process, so all can see and learn from each other.
- Reporting is first and foremost: there's a bunch of automation but no good environment beyond Jenkins or Bamboo to see the results, so you only know coverage. Automation often lacks QA involvement, and it lacks the intelligence to recognize where you need coverage and requirements traceability. You need everything from unit to end-to-end tests: different sets of automation with different sets of tools.
- People have not fully understood the issue of failure and its impact. People with deep skills in networking don't understand how things change when you go from a hardware world to a software world. The first wave of network testing automation has had some failures.
- Customers leveraging Selenium in CI/CD have to pull developers in to write the tests. Automated testing falls way behind product development because developers are not available to write the tests. Our platform enables tests to be written in English rather than code, letting customers leverage the resources they already have.
- I think that the most common issue I've seen affecting automated testing is over-reliance on it. Cindy Sridharan wrote a great article about this. It's all about the unknowns: automated testing continues to be a powerful and useful way to validate things that you've already identified, anything from problems you've seen to workflows you're trying to optimize for. What it doesn't do is give you any way to validate the actual user interaction with your code, or how the code will behave in places you didn't foresee. If you don't know about it, you can't write the test for it. Over-reliance on automated testing, or static use of automated testing where you're not updating it, can be the real challenge. You want it to be something you continually update and evaluate over time: "Am I testing the right things?" And you need tooling in place, like feature management, that allows you to mitigate the risks associated with outages or failures in case you miss anything in automated testing (a minimal flag-guard sketch follows this list).
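On the visual-diff point above: commercial tools do perceptual comparison, but the core mechanism can be sketched in a few lines of Python with Pillow. The baseline and screenshot paths are hypothetical.

```python
from PIL import Image, ImageChops

def images_match(baseline_path, current_path, tolerance=0):
    """Compare a new screenshot against a stored baseline, pixel by pixel."""
    baseline = Image.open(baseline_path).convert("RGB")
    current = Image.open(current_path).convert("RGB")
    if baseline.size != current.size:
        return False
    diff = ImageChops.difference(baseline, current)
    if diff.getbbox() is None:  # None means the images are identical
        return True
    # Otherwise, accept only differences within the per-channel tolerance.
    worst = max(high for _, high in diff.getextrema())
    return worst <= tolerance

# In a CI check: fail the build (or attach the diff to the code review)
# when the rendered page drifts from its approved baseline.
assert images_match("baselines/home.png", "screenshots/home.png", tolerance=3)
```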
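On "developers speak JIRA": here is a hedged sketch of pushing a security finding straight into a developer backlog via Jira's REST API using the requests library. The site URL, project key, and credentials are placeholders.

```python
import requests

JIRA = "https://your-company.atlassian.net"  # placeholder site

def file_finding(summary, description):
    """Create a Jira issue for a scanner finding instead of parking it in a GRC system."""
    resp = requests.post(
        f"{JIRA}/rest/api/2/issue",
        json={"fields": {
            "project": {"key": "SEC"},         # hypothetical project key
            "issuetype": {"name": "Bug"},
            "summary": summary,
            "description": description,
        }},
        auth=("svc-security@example.com", "api-token"),  # placeholder credentials
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["key"]  # e.g. "SEC-42"

print(file_finding("SQL injection in /login",
                   "Found by the nightly DAST scan; see the scan report for the payload."))
```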
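And a small illustration of "design for testability": inject collaborators instead of hard-coding them, so the unit under test runs without real external services. All names here are illustrative.

```python
class Checkout:
    def __init__(self, gateway):
        self.gateway = gateway  # injected, so tests can substitute a double

    def purchase(self, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        return self.gateway.charge(amount)

class FakeGateway:
    """Test double standing in for a real payment service."""
    def __init__(self):
        self.charges = []

    def charge(self, amount):
        self.charges.append(amount)
        return "ok"

def test_purchase_charges_gateway():
    gateway = FakeGateway()
    assert Checkout(gateway).purchase(25) == "ok"
    assert gateway.charges == [25]  # observable without any network call
```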
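Finally, the feature-management safety net mentioned in the last response can be sketched generically: gate risky code behind a flag so anything the tests missed can be switched off in production without a redeploy. The in-memory flag store here stands in for a real feature-management service.

```python
FLAGS = {"new-checkout": False}  # in practice, fetched live from a flag service

def new_checkout_flow(cart):
    return sum(item["price"] for item in cart) * 0.9  # e.g., new discount logic

def legacy_checkout_flow(cart):
    return sum(item["price"] for item in cart)

def checkout(cart):
    if FLAGS.get("new-checkout", False):
        return new_checkout_flow(cart)  # hypothetical new path under test
    return legacy_checkout_flow(cart)   # known-good fallback

# If the new flow misbehaves in production, flip the flag off: no rollback,
# no redeploy, and the gap in the automated tests is contained.
assert checkout([{"price": 10}]) == 10
```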
Respondents
- Gil Sever, CEO, and James Lamberti, Chief Marketing Officer, Applitools
- Shailesh Rao, COO, and Kalpesh Doshi, Senior Product Manager, BrowserStack
- Aruna Ravichandran, V.P. DevOps Products and Solutions Marketing, CA Technologies
- Pete Chestna, Director of Developer Engagement, CA Veracode
- Julian Dunn, Director of Product Marketing, Chef
- Isa Vilacides, Quality Engineering Manager, CloudBees
- Anders Wallgren, CTO, Electric Cloud
- Kevin Fealey, Senior Manager Application Security, EY Cybersecurity
- Hameetha Ahamed, Quality Assurance Manager, and Amar Kanagaraj, CMO, FileCloud
- Charles Kendrick, CTO, Isomorphic Software
- Adam Zimman, VP Product, LaunchDarkly
- Jon Dahl, CEO and Co-founder, and Matt Ward, Senior Engineer, Mux
- Tom Joyce, CEO, Pensa
- Roi Carmel, Chief Marketing & Corporate Strategy Officer, Perfecto Mobile
- Amit Bareket, CEO and Co-founder, Perimeter 81
- Jeff Keyes, Director of Product Marketing, and Bob Davis, Chief Marketing Officer, Plutora
- Christoph Preschern, Managing Director, Ranorex
- Derek Choy, CIO, Rainforest QA
- Lubos Parobek, Vice President of Product, Sauce Labs
- Walter O'Brien, CEO and Founder, Scorpion Computer Services
- Dr. Scott Clark, CEO and Co-founder, SigOpt
- Prashant Mohan, Product Manager, SmartBear
- Sarah Lahav, CEO, SysAid Technologies
- Antony Edwards, CTO, Eggplant
- Wayne Ariola, CMO, Tricentis
- Eric Sheridan, Chief Scientist, WhiteHat Security
- Roman Shaposhnik, Co-founder and V.P. Product and Strategy, Zededa