7 Ways to Reduce Regression Time Without Loss of Effective Coverage
Time is of the essence in QA - learn methods for speeding things up without losing quality or effective coverage.
Every experienced QA engineer knows the taste of champagne on a beer budget. Whenever you have to run a regression test, you find yourself up against a strict deadline. Most projects suffer from late functionality changes, urgent marketing requests, and tech debt, all of which delay the regression run defined in the release procedure. "Not enough time" is not something customers and end users want to hear. So what can actually be done under such circumstances?
1. Control Device/OS Coverage
If you don't want to drown in a hangar full of devices, here are a few hints that have proven themselves in practice.
The first one is obvious: use analytics to identify which devices are actually worth your time for your audience. We were surprised at how much our device stack changed once we stopped relying on general market statistics and focused on our own audience. The picture differs from product to product, even for apps with 10M+ DAU, despite the law of large numbers. As a result, we build a specifically fitted pool of devices for each of our clients rather than taking a one-size-fits-all approach.
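As a rough illustration, here is a minimal sketch of how such an audience-driven device pool could be derived, assuming you can export per-device session counts from your analytics system as a CSV (the file and column names below are hypothetical):

```python
import csv
from collections import Counter

def build_device_pool(analytics_csv, coverage_target=0.9):
    """Pick the smallest set of device models that covers the target share of sessions."""
    sessions = Counter()
    with open(analytics_csv, newline="") as f:
        for row in csv.DictReader(f):  # expected columns: device_model, sessions
            sessions[row["device_model"]] += int(row["sessions"])

    total = sum(sessions.values())
    pool, covered = [], 0
    for model, count in sessions.most_common():
        pool.append(model)
        covered += count
        if covered / total >= coverage_target:
            break
    return pool

# e.g. build_device_pool("device_sessions.csv") -> ["Galaxy S7", "Nexus 5x", ...]
```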
The second hint is OS mixing. Here is how it works: take your suite of regression test cases and split it into as many parts as there are operating system versions you need to cover. Then run that single suite in four threads, one part per OS version, and re-iterate with the parts rotated across the OS versions (as opposed to running four full suites, each in a single thread). And here is the first pleasant result: total testing time stays the same, but bugs are found earlier.
The third hint is to shuffle devices within each OS group: during each iteration we use a different device model from the pool for the same OS version. The table below shows an example of the combinations.
| | Iteration 1 | Iteration 2 | Iteration 3 | Iteration 4 |
| --- | --- | --- | --- | --- |
| Test suite part1 | Android 4.4.x (LG G2) | Android 5.x (Moto G) | Android 6.x (Galaxy S6) | Android 7.x (Galaxy S7) |
| Test suite part2 | Android 5.x (Nexus 5) | Android 6.x (Nexus 5x) | Android 7.x (Google Pixel) | Android 4.x (Huawei P4) |
| Test suite part3 | Android 6.x (HTC One M9) | Android 7.x (Galaxy S8 Edge) | Android 4.4.x (Xperia Z) | Android 5.x (Galaxy S5) |
| Test suite part4 | Android 7.x (Galaxy S8) | Android 4.4.x (HTC One M7) | Android 5.x (LG G3) | Android 6.x (Nexus 6) |
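For larger pools, this rotation is easy to script. The sketch below is only an illustration (the suite part names and the OS/device pool are sample data, and a real schedule may hand-pick devices differently); it round-robins suite parts across OS versions so that every part meets every OS over the iterations:

```python
def rotation_schedule(suite_parts, os_device_pool, iterations=None):
    """Assign each suite part a different OS/device combo per iteration (round-robin)."""
    n = len(os_device_pool)
    iterations = iterations or n
    schedule = []
    for it in range(iterations):
        row = {}
        for idx, part in enumerate(suite_parts):
            os_name, devices = os_device_pool[(idx + it) % n]
            row[part] = (os_name, devices[it % len(devices)])
        schedule.append(row)
    return schedule

parts = ["part1", "part2", "part3", "part4"]
pool = [
    ("Android 4.4.x", ["LG G2", "HTC One M7", "Xperia Z"]),
    ("Android 5.x",   ["Moto G", "Nexus 5", "LG G3", "Galaxy S5"]),
    ("Android 6.x",   ["Galaxy S6", "Nexus 5x", "HTC One M9", "Nexus 6"]),
    ("Android 7.x",   ["Galaxy S7", "Google Pixel", "Galaxy S8", "Galaxy S8 Edge"]),
]
for i, row in enumerate(rotation_schedule(parts, pool), start=1):
    print(f"Iteration {i}: {row}")
```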
2. Rank Your Cases
When you're trapped in a scheduling hole, it's good to know where to allocate your efforts in the remaining time, ideally without missing any critical issues along the way.
The approach is to rank the test cases so that you run the most valuable ones first. But how do you determine the relative value of a case? You can follow the risk-based testing paradigm, but it requires a serious time investment. We have empirically arrived at a shorter way.
The rank of a test case = importance value + frequency value.
The importance value (from 1 to 5, where 5 is the highest) indicates how important that functionality is for the user and for the business (a weighted average of the two).
The frequency value (from 1 to 5, where 5 is the highest) indicates how many users use that functionality, and how often (inspect your analytics to decide).
To build the test suites, assign the ranks and sort the test case list in descending order. We call the top 10% the Minimal Acceptance test, the top 40% the Advanced Acceptance test, and the full list, correspondingly, the Full Regression. A minimal scripted version of this ranking and bucketing is sketched below.
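Here is a minimal sketch of the ranking and the cut-offs described above; the sample test cases are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    tc_id: str
    name: str
    importance: int  # 1-5, importance for the user and the business
    frequency: int   # 1-5, how many users hit it and how often

    @property
    def rank(self) -> int:
        return self.importance + self.frequency

def build_suites(cases):
    """Sort by descending rank and cut off the Minimal/Advanced/Full suites."""
    ordered = sorted(cases, key=lambda c: c.rank, reverse=True)
    return {
        "minimal_acceptance": ordered[: max(1, len(ordered) * 10 // 100)],
        "advanced_acceptance": ordered[: max(1, len(ordered) * 40 // 100)],
        "full_regression": ordered,
    }

cases = [
    TestCase("TC1", "User can track a workout", 5, 4),
    TestCase("TC2", "User can change privacy to 'Only me'", 4, 3),
    TestCase("TC3", "User can edit profile photo", 2, 2),
]
suites = build_suites(cases)
print([c.tc_id for c in suites["minimal_acceptance"]])
```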
3. Use the Appropriate Test Layer for a Target Area
A really obvious point, but one that is often forgotten at the test-planning phase ("we'll come back to this later, let's start writing test cases," etc.). When manual QA effort is used to test the API between the client and backend parts of the application, the test scenarios become huge and time-consuming. Some areas (complex calculations or data flows inside the application, for example) are cheaper to test in a white-box paradigm; others are best covered by a series of beta tests, which is often cheaper than spending the time of highly qualified test engineers. The key rule here is to perform test analysis with the whole available toolset in mind.
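For example, a client/backend contract that would take many manual UI steps to verify can be checked in seconds at the API layer. Here is a minimal sketch using pytest-style assertions and the requests library; the endpoint, token, and response fields are hypothetical:

```python
import requests

BASE_URL = "https://api.example.com"  # hypothetical backend under test

def test_workout_privacy_roundtrip():
    """Setting privacy via the API should be reflected when the workout is read back."""
    session = requests.Session()
    session.headers["Authorization"] = "Bearer <test-token>"  # placeholder credentials

    resp = session.patch(f"{BASE_URL}/workouts/123", json={"privacy": "only_me"})
    assert resp.status_code == 200

    resp = session.get(f"{BASE_URL}/workouts/123")
    assert resp.status_code == 200
    assert resp.json()["privacy"] == "only_me"
```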
4. Combine Regression Test Suites
When I mentioned splitting the test suite into parts for OS mixing, I didn't point out one more benefit of keeping the suites fragmented. When new code has been merged in, you don't always need to run the Advanced Acceptance test in full. If your testers can identify the impact of the changes (the affected classes and methods), you can figure out which areas need to be regressed. In this scenario, having predefined test suites split by functionality is worth the investment.
Below is an example for a workout-tracking app:
| Suite | Functional area | Volume |
| --- | --- | --- |
| Advanced Acceptance pt1 | Registration/Login/Profile/FTUE | 113 test cases |
| Advanced Acceptance pt2 | Track/Log Workout | 198 test cases |
| Advanced Acceptance pt3 | Social Sharing/Third-party Integration/Partners | 168 test cases |
| Advanced Acceptance pt4 | Settings, Privacy, Challenges | 185 test cases |
Whenever some core privacy features are affected, you run the Advanced Acceptance pt4 test suite. This static approach of predefined test suites divided by area is the first step, and it is easy to script, as the sketch below shows. After that, let's have a look at how to unveil the full potential of change impact analysis.
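A minimal sketch of that static selection, assuming a hand-maintained (and entirely hypothetical) mapping from functional areas to the suites above:

```python
# Hypothetical mapping from functional areas to the predefined suites above.
AREA_TO_SUITE = {
    "registration": "Advanced Acceptance pt1",
    "login": "Advanced Acceptance pt1",
    "profile": "Advanced Acceptance pt1",
    "workout_tracking": "Advanced Acceptance pt2",
    "social_sharing": "Advanced Acceptance pt3",
    "partners": "Advanced Acceptance pt3",
    "settings": "Advanced Acceptance pt4",
    "privacy": "Advanced Acceptance pt4",
    "challenges": "Advanced Acceptance pt4",
}

def suites_to_run(affected_areas):
    """Return the minimal set of suites covering the areas touched by the change."""
    return sorted({AREA_TO_SUITE[a] for a in affected_areas if a in AREA_TO_SUITE})

# e.g. a change touching privacy and sharing code:
print(suites_to_run(["privacy", "social_sharing"]))
# -> ['Advanced Acceptance pt3', 'Advanced Acceptance pt4']
```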
5. Automate the Change Impact Analysis
It's very useful to trace your black-box test cases to specific code areas (build a white-box foundation under your black-box tests). Then, each time the manual QA team receives a build for regression, they can also get a test suite that a script has marked with the impact level of each case.
Test suite for build_02345b:

| TC Number | TC Name | Rank | Impact Level |
| --- | --- | --- | --- |
| TC2678879 | User is able to change privacy to “Only me” | 7 | Strong |
| TC2678880 | Application reflects the privacy changes | 7 | Strong |
| TC2678881 | User is able to track workout | 9 | Not affected |
| TC2678882 | User is able to share workout in FB | 8 | Medium |
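One possible way to produce such a marked suite is to diff the build against the previous release and map the touched code areas to test cases. The sketch below assumes a git repository and a hand-maintained traceability map; the module paths, test case IDs, and thresholds are hypothetical:

```python
import subprocess

# Hypothetical traceability map: code module/package -> black-box test case IDs.
MODULE_TO_TESTS = {
    "app/privacy/": ["TC2678879", "TC2678880"],
    "app/sharing/": ["TC2678882"],
    "app/tracking/": ["TC2678881"],
}

def changed_paths(base_ref, build_ref):
    """List files changed between the previous release and the current build."""
    out = subprocess.run(
        ["git", "diff", "--name-only", f"{base_ref}..{build_ref}"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.splitlines()

def mark_impact(base_ref, build_ref):
    """Mark every known test case with a rough impact level for this build."""
    touched = changed_paths(base_ref, build_ref)
    impact = {}
    for module, tests in MODULE_TO_TESTS.items():
        hits = sum(1 for path in touched if path.startswith(module))
        level = "Strong" if hits > 3 else "Medium" if hits > 0 else "Not affected"
        for tc in tests:
            impact[tc] = level
    return impact

# e.g. mark_impact("release_1.4", "build_02345b")
```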
6. Implement CI
There's a lesson to be learned here: the more your team is relieved of repeatable tests, the deeper the remaining testing will be. The most traditional way to cope with the continuous growth of the regression workload is automation. The ideal flow is for the manual team to start its testing after the automated run has finished. For one of our projects, we run nightly regression in the cloud: 1,200+ cases executed across a cloud of 30 devices. Passed scenarios are excluded from the subsequent manual runs. In terms of continuous integration, every build gets a sanity check, and the large test run is scheduled as a nightly job.
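The hand-off between the automated and the manual run can be a few lines of scripting. A minimal sketch, assuming the automation framework can export its results as JSON (the report format here is an assumption):

```python
import json

def remaining_manual_suite(full_suite_ids, automation_report_path):
    """Drop every case the nightly automated run already passed."""
    with open(automation_report_path) as f:
        report = json.load(f)  # expected shape: {"results": [{"tc_id": ..., "status": ...}]}
    passed = {r["tc_id"] for r in report["results"] if r["status"] == "passed"}
    return [tc for tc in full_suite_ids if tc not in passed]

# e.g. remaining_manual_suite(["TC2678879", "TC2678880", "TC2678881"], "nightly_report.json")
```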
7. Use Device Farms to Benefit From Multithreading
Imagine regression tests running in a short release cycle where we receive builds five times a day. Having your automated cases connected to a farm of real devices (we avoid using emulators for testing) is very beneficial for regression testing. The figure below ("Test results achieved in 1 minute") demonstrates the relative efficiency of automated tests once the proper infrastructure is deployed. There are two modes of running tests: single-threaded and multithreaded.
- We use multithreaded mode when we need to cover a lot of test cases as fast as possible.
- We use single-threaded mode when we need to cover a lot of devices as deeply as possible.
[Figure: example test results achieved in 1 minute in each mode]
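The difference between the two modes is easy to express in code. Below is a minimal sketch built on Python's concurrent.futures; run_case is a hypothetical driver that executes one case on one farm device:

```python
from concurrent.futures import ThreadPoolExecutor

def run_case(test_case, device):
    """Hypothetical driver: executes one automated case on one farm device."""
    ...

def multithreaded_mode(test_cases, devices):
    """Spread the suite across devices: maximum cases covered per unit of time."""
    with ThreadPoolExecutor(max_workers=len(devices)) as pool:
        futures = [
            pool.submit(run_case, case, devices[i % len(devices)])
            for i, case in enumerate(test_cases)
        ]
        return [f.result() for f in futures]

def single_threaded_mode(test_cases, devices):
    """Run the whole suite on every device, one device at a time: maximum device depth."""
    return [run_case(case, device) for device in devices for case in test_cases]
```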
Instead of a Conclusion
The methods listed above regularly let us deliver cost-effective solutions to our clients. The average savings, after all implementation costs, come to about 20-30% depending on the project, which is a lot when you are on a tight budget, and definitely worth the investment.
Work less, do more.