Analyzing how effective your testing efforts are, and working to improve processes.
As testers, we think about delivered software quality, and in turn, the quality of our own work, all the time. It’s important to have a set of quantifiable measurements in place so that you can gauge how effectively your efforts ensure the software under test meets its intended business objectives. While traditional test data analysis mostly deals with the data used during testing, testers should also analyze the data generated by the testing the team performs. That’s the kind of ‘test data’ that’s often overlooked.
If we want to improve the way we test, then we need to start looking at this data. Important insights await us when we start asking questions: How much effort did the test team expend to complete specific tasks? How many hours were worked on each cycle or project? How close did we come to the estimated timeline?
By analyzing this kind of data, we can calculate a return on investment (ROI) on the tools we’re using, learn to estimate more accurately in the future, and streamline our processes.
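As a starting point, even a few lines of analysis over effort data can answer the estimation question. The sketch below compares estimated against actual test effort per cycle; the cycle names and hour figures are hypothetical illustrations, not real project data.

```python
# A minimal sketch of comparing estimated vs. actual test effort per cycle.
# All cycle names and hour figures below are hypothetical.

cycles = [
    {"cycle": "Sprint 12", "estimated_hours": 40, "actual_hours": 52},
    {"cycle": "Sprint 13", "estimated_hours": 35, "actual_hours": 38},
    {"cycle": "Sprint 14", "estimated_hours": 45, "actual_hours": 41},
]

def estimate_variance(cycle):
    """Percentage by which actual effort deviated from the estimate."""
    est, act = cycle["estimated_hours"], cycle["actual_hours"]
    return (act - est) / est * 100

for c in cycles:
    print(f"{c['cycle']}: {estimate_variance(c):+.1f}% vs. estimate")
```

Tracked over several releases, a consistent positive variance is a clear signal that estimates for that kind of work need to be revised upward.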
Focusing on the Right Places
A test plan should not be set in stone. If you want to make sure that you’re focusing your limited resources in the right places, then you need to continually reassess your plan. Which functional modules are you finding the most defects in? Is it possible that something you envisioned as low priority should come to the forefront? Can something else be moved to the back burner? Flexibility is essential if you want to get maximum value from your testing efforts.
You also need to think carefully about where problems might lie. Should you really be focusing your regression on an area where you found a lot of issues, or will you get more value from looking at interdependent areas? If developers have fixed a specific issue in that area, then that function probably works well. It might make more sense to look at related features that haven’t been tested as much. If you gather data on this, you can build a set of rules for your regression testing, so that you focus where you’re likely to find the most problems.
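One simple rule of this kind weights areas by defects found relative to testing already done, so heavily retested areas rank lower than under-tested ones. The module names and counts in this sketch are hypothetical.

```python
# Hypothetical regression-focus rule: rank modules by defects found
# per test already run, so under-tested areas float to the top.
from collections import Counter

defects = ["checkout", "checkout", "search", "profile", "checkout", "search"]
tests_run = Counter({"checkout": 30, "search": 5, "profile": 2})

defect_counts = Counter(defects)

def regression_priority(module):
    """Defects found per test executed: high means under-tested risk."""
    return defect_counts[module] / max(tests_run[module], 1)

ranked = sorted(defect_counts, key=regression_priority, reverse=True)
print(ranked)
```

Note how "checkout" has the most defects in absolute terms but ranks last, because it has already received the most test attention; the lightly tested related areas rank first.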
Report on Usability
Testers should always give usability feedback, even when it’s beyond the scope of the functional test they’re conducting. This feedback may be given in a secondary capacity, but it’s vital insight from expert software testers emulating the end users, and it can give developers and product owners a new perspective that really helps them improve the final product.
Gathering usability data from testers and collating it gives you an idea of where the software needs work. It often reveals low-hanging fruit: easy improvements that can make a big impact on the final quality of the software.
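Collating that feedback can be as simple as tallying notes by the area they concern, so the areas generating the most comments surface first. The areas and notes below are hypothetical.

```python
# Sketch: tally free-form usability notes by area to surface
# low-hanging fruit. All areas and notes are hypothetical.
from collections import Counter

usability_notes = [
    ("signup", "confusing error message"),
    ("signup", "button label unclear"),
    ("dashboard", "slow to load"),
    ("signup", "too many required fields"),
]

by_area = Counter(area for area, _ in usability_notes)
worst_first = by_area.most_common()  # areas with the most feedback first
print(worst_first)
```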
Calculating Return on Investment
How do you know that automation scripts are saving you time? Why is it better for testers to write manual tests in the ALM tool rather than in Word? If you don’t measure the effectiveness of the techniques and tools that you use, then you can’t say for sure that they provide any advantages.
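A basic ROI check for automation is the time saved per run, multiplied by the number of runs, minus the cost of building and maintaining the scripts. The figures below are hypothetical, chosen only to show the shape of the calculation.

```python
# Hedged sketch of an automation ROI check. All hour figures are
# hypothetical; plug in your own team's numbers.

def automation_roi(manual_hours, automated_hours, runs,
                   build_hours, upkeep_hours):
    """Net hours saved by automating a suite over a given number of runs."""
    saved_per_run = manual_hours - automated_hours
    return saved_per_run * runs - (build_hours + upkeep_hours)

# e.g. a suite that takes 6h manually, 0.5h automated, run 20 times,
# costing 40h to build and 10h to maintain over the period:
net_hours = automation_roi(6, 0.5, 20, 40, 10)
print(net_hours)
```

If the result is negative for a realistic number of runs, the automation is costing more than it saves, and that is exactly the kind of finding this section argues you can only reach by measuring.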
There are times when sophisticated tools are too complex for the job at hand. You may also find that testers waste a lot of time on complex tweaks to tools that aren’t really fit for purpose. Examine how testers interact with their tools and how much manual effort is really involved.
You might find ways to improve interactions, identify alternate strategies, and make major efficiency gains. Often the simple exercise of analyzing your tools and processes will generate solid ideas on how to save time and improve things.
We accept that we can only really improve software quality by measuring the right things, but we fail to apply the same logic to the practices, processes and tools we employ to complete our testing efforts. By assessing our approach and taking action to refocus in the right places, we can realize concrete improvements in efficiency and widen our overall test coverage significantly.
To think of test data analysis too narrowly, focusing only on software quality, is to miss a real opportunity to improve the way we test. It just requires a slight change of perspective. Analyze the way you test, make changes and measure their impact, then rinse and repeat. Ultimately, improvements in the way we test will also have a positive impact on final software quality.