Keys to Automated Testing
Save time and improve quality while realizing the goals and objectives of the organization with these key benefits of automated software testing.
To gather insights for the current and future state of automated testing, we asked 31 executives from 27 companies, "What are the keys to using automated testing to improve speed and quality?" Here's what they told us:
- Automation helps the customer run the quality assurance process in minutes and hours versus days and weeks. It removes errors and produces consistent test results. The benefits of automation versus manual testing: you can test against a huge range of devices at high speed, with fewer errors, more consistency, and broader coverage. Ensure the suite runs faster by running tests in parallel, and for speed, cover the top 20 panels that serve 90% of users first.
- 1) The value drivers are quality and velocity, and prioritizing between them is the first conversation we have. 70% of the time clients are looking for both, and it can take multiple meetings for them to prioritize between the two. 2) There has been a huge shift in the market toward velocity as a value driver. Quality is moving from risk mitigation to a velocity enabler. Developers are taking ownership of quality, and security is becoming part of quality. Who is accountable in the organization? When it’s the CISO, security is not integrated into the SDLC; when it’s DevOps, you can integrate security (DevSecOps). The transition is slow – about 30% penetration so far.
- Going from manual to automation, it can be hard to get to "yes, we should automate that." One of the biggest opportunities software companies have is to take the slack out of their development and ensure everything is tested. Automated testing has won the argument on value; UI testing is still one of the trickier areas.
- To what end are we doing all this testing? There are two kinds of software companies: 1) companies that produce packaged software and ship it into customer environments they don’t control, and 2) cloud-native companies that don’t have to ship anything. Each has different ramifications for testing – continuously updating software leads to very different testing requirements, so figure out which camp you belong to. Bring cloud-native development practices into embedded hardware and software: the same engineering practices used to continuously update cloud-native software apply to devices (IoT). Invest in testing to ensure you can release every couple of weeks.
- Visual AI. A lot of companies already use test automation; many still do manual testing as well, and big portions have some tests automated while doing manual testing on the side. It’s easy to see the limitations of manual: defects get missed more frequently than before because the environment is so dynamic due to browser updates, and you need to monitor 24/7, go faster, and minimize undetected errors. Adding visual AI helps minimize the number of missed defects. Companies doing manual testing want to automate, but it requires a lot of effort and expensive manpower, so they try to automate faster with less qualified personnel: identify the specific actions they’d like to check without writing a script, making automation simpler and more affordable. When AI is there, it’s easier to maintain tests, and when AI-driven maintenance can be done automatically, the tests keep up as the application evolves.
- There are two versions – run it yourself or we’ll run it for you. Content migration is easier if it's done onsite; online, we provide tools to read all files and automatically upload them to the cloud. 10TB of data takes two days: upload the metadata first, and the most active files get first priority on the upload. On the testing side, there are very complex test cases with APIs and web interfaces. Build a framework for unit and system tests first, and test each build. Performance testing monitors the speed of file uploads and downloads. Continuous integration. We do release testing manually; then, to improve automation, we bring in new tools like Selenium to test the UI, reduce manual testing, and evolve toward automated testing that increases the speed of release across multiple release cycles. We provide APIs to customers so they can integrate, and switched to Git to make the APIs publicly available.
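The upload-prioritization idea above can be sketched in a few lines. This is an illustrative toy, not FileCloud's actual API: the field names and data are hypothetical placeholders for "metadata first, most active files first."

```python
# Hedged sketch: order files so the most recently active upload first.
# Field names ("last_accessed") and the sample data are hypothetical.
def upload_order(files):
    """Sort files so the most recently accessed are uploaded first."""
    return sorted(files, key=lambda f: f["last_accessed"], reverse=True)

files = [
    {"name": "archive.doc", "last_accessed": 100},
    {"name": "active.doc", "last_accessed": 900},
    {"name": "recent.doc", "last_accessed": 500},
]
order = [f["name"] for f in upload_order(files)]
```

A real migration tool would also batch uploads and retry failures, but the ordering policy itself is this simple.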
- I think both of these things go hand in hand. There are things that, if you do not get right in automated testing, will work against speed and quality: 1) Stability: this is probably the biggest challenge and the aspect that can damage engineering teams the most. On one hand, flaky tests are not only slowing you down, because the tendency is to rerun them until they pass; they are also contributing to a lack of trust in the tests. This leads to a pattern of just assuming your tests are flaky and ignoring test failures, which could be hiding real bugs that are never investigated and solved. On the other hand, there is a false sensation of security because you have tested and that is supposed to guarantee quality – but if the tests are flaky, you might think you have your back covered when you do not. 2) Speed: as a developer you want feedback, and you want it as soon as possible. You need tests that are not only reliable but also relatively fast, so that feedback is close to instant and developers can improve and fix their code as they go. 3) Scope: just having fast and reliable automated tests is not enough. You need some level of coverage to build confidence. It’s good practice to cover your priority-one cases first, or to determine a minimum percentage of code coverage. 4) Dependency management: automated tests often require the use of, or the need to spin up, dependent services or libraries. That can make tests very slow and brittle, so mocking dependencies is a common practice. The problem is that you then blindly rely on your assumptions about how the dependency behaves, and its behavior could change over time without your tests detecting it. This could be mitigated by watching changes on the dependency's side, but that is not realistic in medium to large codebases. Practices like consumer-driven contract tests protect you from this in the context of REST APIs.
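The consumer-driven contract idea mentioned above can be sketched minimally: the consumer pins the response shape it depends on, so a provider-side change fails a test instead of production. The contract here is a plain dict for illustration; real tooling (e.g. Pact) is far more sophisticated.

```python
# Minimal sketch of a consumer-side contract check. The contract and
# responses are illustrative, not a real API.
CONTRACT = {"id": int, "status": str}

def satisfies_contract(response, contract=CONTRACT):
    """True if the response has every contracted field with the right type."""
    return all(
        key in response and isinstance(response[key], typ)
        for key, typ in contract.items()
    )

ok = satisfies_contract({"id": 42, "status": "ok", "extra": "ignored"})
broken = satisfies_contract({"id": "42"})  # wrong type, missing "status"
```

Note that extra fields pass: the consumer only asserts on what it actually consumes, which is what keeps such contracts from becoming brittle.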
- The primary key to using automated security testing to improve speed and quality of developer output is the accuracy of results. Every automated security testing output is going to list a series of potential security vulnerabilities. The problem is that if they are not real and actionable, the automated testing will be viewed as a time sink by developers. To overcome this challenge, we have a team of security engineers who work in our “Threat Research Center” that we use to manually verify results before presenting them to developers. In addition, we leverage our 15+ years of vulnerability data to identify those characteristics that distinguish true positives from false positives with high confidence. By viewing our raw vulnerability data through this lens, we are able to automatically surface vulnerabilities that we believe to be a high confidence true positive, without requiring manual verification. At the end of the day, if you’re just producing a lot of noise in a fast and agile way, you’re just wasting the developer’s time.
- You first have to have a test infrastructure in place similar to ours, where you are catching regressions and able to notify developers appropriately. At that point, you need clear policies for what is done when regressions are detected: who is assigned to fix them, how fast must they be resolved versus completing other tasks, what happens to ambiguous regressions (is the code wrong or is the test wrong), etc. We’ve seen a recurring type of dysfunction in several organizations: they’ve built an automated test system, but the noise from broken tests is drowning out the signal from the working tests, so everyone ignores the test system. That’s worse than having no automated test infrastructure at all. You have to actively maintain both the tests and the people processes around them, or you end up with this particular dysfunction.
- 1) People are involved in improving speed and quality. The QA team is shrinking while speed is needed across the SDLC, so you need more people with a quality mindset. 2) To achieve that, you need a process in place – don’t go deep into the forest without the big picture. You need a strategy up front to test early, often, and the right things. 3) Tools – consumable, and able to integrate with a wider ecosystem.
- You must integrate automation earlier in the SDLC, with different kinds of automated testing along the process. QA needs to be a product manager for quality, introducing non-functional requirements across products and product teams; features plus robustness reduce the fragility of the product. Our platform is an abstraction on top of the dev tooling itself. It gets interesting with multiple tools and a release manager asking about the quality of a mix of 200 different projects. Developers and testers need insights on quality – user acceptance, performance, long-running integration, and exploratory testing – and these need to roll into the decisions the release manager will make before software goes live. We have a module that acts as a central hub for all testing activities: unit tests, integration tests, suites of integration tests, and the automated tests the test team uses for the smoke test are all centralized.
- Humans are only able to cover 6% of the code. Automation enables you to approach 100% code coverage depending on how you write and run the tests.
- We are working with continuous integration (CI) methodology, which allows us to check our builds several times a day with a full sanity check on the entire system. In addition, every night we run regression tests for full test coverage, to make sure that changes to one component of the system do not negatively affect another component; if there is a regression, we catch it immediately.
- An unspoken aspect of AI development is configuration parameters – how many layers, which architectures. The keys to better performance are locked up in configuration parameters that need to be set up front by experts. We help automate an ensemble of state-of-the-art techniques for hyperparameter optimization, reducing trial and error. We do unit and regression tests on the core functionality of the API, with an internal evaluation framework to test algorithms and ensure they are not degrading, plus A/B testing to ensure we are making a practical, statistically significant difference for customers. Our customers can use us for automatic testing of their algorithms while retuning configuration parameters.
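The hyperparameter-search idea above can be illustrated with a toy random search. This is not SigOpt's method – just a stand-in showing how automated search replaces manual trial and error. The objective function is made up (it peaks at lr=0.1, layers=3) in place of a real training-and-evaluation run.

```python
# Toy random search over two hypothetical hyperparameters.
# The objective is a fabricated stand-in for a real model's validation score.
import random

def objective(params):
    return -abs(params["lr"] - 0.1) - abs(params["layers"] - 3)

random.seed(0)  # reproducible sketch
candidates = [
    {"lr": random.uniform(0.001, 1.0), "layers": random.randint(1, 8)}
    for _ in range(50)
]
best = max(candidates, key=objective)
```

Production systems use Bayesian optimization or similar techniques rather than pure random sampling, but the interface is the same: propose configurations, score them, keep the best.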
- Spend time talking about QA strategy versus execution. What does QA need to do for faster releases and better quality? Ensure you give them a solution rather than just a tool.
Know Goals and Objectives
- First of all, it’s important that organizations understand their goals. If the main strategy and the main goal are not communicated to the development and testing organization, it will be hard to prioritize when and where to get started with test automation. Many organizations are facing the traditional chicken-and-egg issue: “Too busy to improve.” “We don’t have the time to think about test automation, but we know that we are going to fail if we don’t have it in place soon.” For this reason, planning is so important. Identify the test cases that are going to be executed the most and start automating them. “Smoke tests” is the common name for tests that check only the very basic functionality of a piece of software. Integrate them into your continuous integration environment, or with some other scheduling tool, to ensure daily execution. Do this step by step: make sure the basic test scenarios are stable before you start thinking about automating the complex scenarios. We just published a 26-page white paper on this topic. Anyone who wants to learn more about successful automation can visit our website and download it for free.
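The step-by-step approach above – stabilize the basic scenarios first, schedule them daily, then grow the suite – can be sketched with a tiny test registry. This registry is a stand-in for a real framework's tagging mechanism (e.g. pytest markers); the suite names and tests are our own illustration.

```python
# Sketch: register smoke tests separately from the full suite so a
# scheduler (cron, CI) can run the smoke suite on every build or daily.
SUITES = {"smoke": [], "full": []}

def suite(name):
    def register(fn):
        SUITES[name].append(fn)
        return fn
    return register

@suite("smoke")
def test_login_page_loads():
    assert True  # placeholder: very basic functionality only

@suite("full")
def test_complex_multiuser_scenario():
    assert True  # automate this only once the smoke tests are stable

def run(name):
    """Run every test registered under the given suite; return the count."""
    for test in SUITES[name]:
        test()
    return len(SUITES[name])
```

CI would call `run("smoke")` per commit and `run("full")` on a nightly schedule, mirroring the "basic scenarios first" advice.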
- Application security in apps is centered around scanning, and open source tools can make it more effective and efficient, but try to be more focused. Scanners are good at finding issues; we ask, what are the key business risks and concerns? Based on that, we come up with security requirements and tests that exercise those specific requirements. Build efficient, fast tests into the SDLC for what’s important to you.
- The key in my mind, and why I'm so passionate about what we do, is that if you're going to move faster you need to build protections in your system for when things go sideways. We all know that quote from Zuckerberg, "Move fast and break things...but with a stable infrastructure." This is the whole premise behind that. If you're going to do things in a more automated fashion, then you need to make sure you build protections into your system that can account for when things go very wrong. Things will go wrong, that's just the reality of it. And it may be something that isn't even due to your code—it could be something where you make a change to your code that impacts some other third-party service that you're leveraging, and that unforeseeable impact is something that you want to be able to recover from quickly.
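One common form of the "protections for when things go sideways" idea is a feature flag acting as a kill switch, so a misbehaving change can be turned off without a redeploy. A hedged sketch, with the flag store as a plain dict: a real system would read flags from a service (such as LaunchDarkly) at runtime, and the flow names are hypothetical.

```python
# Sketch: a feature flag as a kill switch. Flipping the flag routes
# traffic back to the known-good path instantly, with no redeploy.
FLAGS = {"new_checkout": True}

def checkout(order, flags=FLAGS):
    if flags.get("new_checkout"):
        return "new-flow"    # the risky new code path
    return "legacy-flow"     # known-good fallback for fast recovery
```

The design point is that recovery becomes a configuration change rather than a code change, which is what makes it fast when a third-party dependency breaks underneath you.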
- The culture around security is still not right. It’s typically still at the end of the SDLC and the development and security teams are not integrated.
- Move automated testing earlier in the process and do quality inspection throughout. Doing QC at the end of the process wastes time sending low-quality work through bottlenecks.
- Automate testing at multiple levels: integration tests, end-to-end system tests, and continuous performance monitoring. Observability matters. In the dev workflow, everything starts with getting a small block of code to behave as you expect – then, does it behave in a production environment the way you expected?
- Automated testing is moving from early adopters to the early mainstream, and as that happens, the sophistication is changing and users are less mature. A key is well-written tests that are atomic and autonomous; long tests are not good when you want to run tests in highly parallel situations. Make sure applications are written in a way that makes them testable – test-driven development or behavior-driven development really helps with that. Having developers write tests helps make sure you’re successful with automated testing.
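"Atomic and autonomous" means each test builds its own state and asserts one thing, so a parallel runner (e.g. pytest-xdist) can execute tests in any order. A minimal sketch; the cart model is purely illustrative.

```python
# Sketch: tests that share no state and can run in any order, in parallel.
def make_cart():
    return {"items": []}  # fresh fixture per test; nothing shared

def test_add_item():
    cart = make_cart()
    cart["items"].append("book")
    assert cart["items"] == ["book"]

def test_new_cart_is_empty():
    cart = make_cart()
    assert cart["items"] == []
```

The anti-pattern is one long test where step 12 depends on step 11's leftover state: it cannot be parallelized and a single failure masks everything after it.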
- Our B2B and B2C businesses are built on top of microservices, working in an agile environment with constant improvements, so we have to make sure nothing is breaking. Automation of the entire pipeline, with all testing, is critical – the bread and butter of day-to-day operations and critical to business advancement. We have 120 different projects, each with its own testing and deployment, all highly complex, scalable, and advanced. Follow best practices to deter bad actors. Deployment, testing, and visibility to the entire team, fully automated. Don’t invent new things; follow best practices and implement them quickly.
- The correlation between the amount of testing and user satisfaction is zero. Twenty years ago we were doing verification and validation; today no one does validation, just verification – testing against a specification you don’t have. Orient testing around user satisfaction instead: identify the user journey and define the tests you need to optimize the UX. What kind of data are users generating? Bring in analytics tools like Google Analytics to see where you’re losing users, and focus on that page. Triage defects by user experience rather than weird compliance checks. What breaks software is people using it in ways not foreseen by the developer or the tester, so we need to define high-level user journeys, have basic test cases, and then implement them in crazy permutations. We need real user journeys rather than hypothesized user journeys.
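The "basic test cases implemented in crazy permutations" idea can be sketched by defining one high-level journey and expanding it over parameter combinations with `itertools.product`. The journey steps, browsers, and locales here are hypothetical examples, not data from the article.

```python
# Sketch: one user journey expanded across parameter permutations.
import itertools

STEPS = ["search", "add_to_cart", "checkout"]  # one high-level journey
browsers = ["chrome", "firefox"]
locales = ["en", "de", "ja"]

journeys = [
    {"browser": b, "locale": loc, "steps": STEPS}
    for b, loc in itertools.product(browsers, locales)
]
# 2 browsers x 3 locales = 6 permutations of the same basic journey
```

A data-driven test runner (pytest's `parametrize`, for instance) does this expansion for you; the point is that the journey is written once and the permutations are generated, not hand-maintained.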
- The area around continuous testing is evolving. As DevOps takes hold, QA becomes the greatest bottleneck – 63% of DevOps organizations see QA as a bottleneck, and manual testing is a core problem. Organizations need to be able to do end-to-end DevOps testing throughout the SDLC. Testing needs to shift left, all the way to conceptualization.
- Automated testing for network development: the important factors can be very complex, and you can’t see the physical configuration. The process of debugging and testing can take days, weeks, or months; with automated model-based testing, you can push a button and test the model automatically.
- The biggest idea is digital transformation driving the modernization of all processes, driven by enterprise DevOps. 50% of the push comes from organizations delivering using legacy systems; the other 50% is that if you are going to modernize, you must do something different to achieve repeatability and sustainability. We help them make the process work, recognizing that this is a process transformation journey. What approach are you going to take? Ours is model-based test automation – our solution is not scripted. Most open source and legacy systems are scripted; scripts are like code and create a maintenance burden that does not allow the organization to respond to the change necessitated by DevOps. Testing is a big barrier because it is scripted, and it results in the “maintenance trap.”
Here’s who we talked to:
- Gil Sever, CEO and James Lamberti, Chief Marketing Officer, Applitools
- Shailesh Rao, COO, and Kalpesh Doshi, Senior Product Manager, BrowserStack
- Aruna Ravichandran, V.P. DevOps Products and Solutions Marketing, CA Technologies
- Pete Chestna, Director of Developer Engagement, CA Veracode
- Julian Dunn, Director of Product Marketing, Chef
- Isa Vilacides, Quality Engineering Manager, CloudBees
- Anders Wallgren, CTO, Electric Cloud
- Kevin Fealey, Senior Manager Application Security, EY Cybersecurity
- Hameetha Ahamed, Quality Assurance Manager, and Amar Kanagaraj, CMO, FileCloud
- Charles Kendrick, CTO, Isomorphic Software
- Adam Zimman, VP Product, LaunchDarkly
- Jon Dahl, CEO and Co-founder, and Matt Ward, Senior Engineer, Mux
- Tom Joyce, CEO, Pensa
- Roi Carmel, Chief Marketing & Corporate Strategy Officer, Perfecto Mobile
- Amit Bareket, CEO and Co-founder, Perimeter 81
- Jeff Keyes, Director of Product Marketing, and Bob Davis, Chief Marketing Officer, Plutora
- Christoph Preschern, Managing Director, Ranorex
- Derek Choy, CIO, Rainforest QA
- Lubos Parobek, Vice President of Product, Sauce Labs
- Walter O'Brien, CEO and Founder, Scorpion Computer Services
- Dr. Scott Clark, CEO and Co-founder, SigOpt
- Prashant Mohan, Product Manager, SmartBear
- Sarah Lahav, CEO, SysAid Technologies
- Antony Edwards, CTO, Eggplant
- Wayne Ariola, CMO, Tricentis
- Eric Sheridan, Chief Scientist, WhiteHat Security
- Roman Shaposhnik, Co-founder and V.P. Product and Strategy, Zededa
Opinions expressed by DZone contributors are their own.