Understanding Software Quality Metrics With Manual and Automated Testing
Understanding software quality metrics, especially in automated testing, helps us identify what is working well and what needs improvement.
Quality is the true measure of product success. Poor user experience or application performance negates any advantages you achieve in delivery speed or production cost. Put simply, if it does not work, it is not worth producing.
It is, therefore, critical to our product success that we can accurately measure and track test results to ensure our testing is delivering against our business goals.
The right software quality metrics enhance and optimize QA testing to ensure it is delivering value both to the business and to the engineering teams. You improve your test results by measuring the test process. If you want to ship a high-quality product on time, you need the correct software quality metrics and the right balance of manual and automated testing.
Identify the Correct Software Quality Metrics
When done correctly, test automation is one of the biggest drivers of value within the SDLC. It operates at speeds and with accuracy far beyond those of manual testing. It reduces spend on engineer hours, improves test coverage, and is portable, repeatable, and easy to maintain.
Developers rely on test automation to verify their code and make sure it will not break a build. They want an automation framework that can be used throughout the life cycle of the product, accessed with every build.
Your software quality metrics are there to confirm your test automation is reaching these ambitions.
There are dozens if not hundreds of metrics you can apply, but the following will provide insight into how your testing provides value in terms of ROI and build quality.
Defect Status on Priority
Priority status is based on core business requirements. Obviously, the higher the priority, the more important it is that the QA team test early. Measuring defect status on priority counts the number of defects identified within high-value elements and tracks their status: closed versus open or reopened. This reflects the overall quality of the software.
This is a simple way of identifying a feature’s stability. The number of defects per feature helps in identifying any problematic areas of the application, so the team can monitor and pay special attention to features with high defect density during the testing cycle to ensure a smooth release.
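As a minimal sketch of these two defect metrics, the snippet below counts defects per feature and filters for high-priority defects that are still open. The defect log, feature names, and status labels are all hypothetical, not part of any specific tool:

```python
from collections import Counter

# Hypothetical defect log: (feature, priority, status) tuples
defects = [
    ("checkout", "high", "open"),
    ("checkout", "high", "closed"),
    ("checkout", "medium", "reopened"),
    ("search", "low", "closed"),
]

def defect_density(defects):
    """Count defects per feature, regardless of status."""
    return Counter(feature for feature, _, _ in defects)

def open_high_priority(defects):
    """High-priority defects that are still open or reopened."""
    return [d for d in defects if d[1] == "high" and d[2] in ("open", "reopened")]

density = defect_density(defects)
```

In this toy log, `checkout` has three defects, one of which is high priority and still open, flagging it as the feature to watch before release.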
Test Case Execution Matrix
This is a direct measure, taken by the QA team, of the number of test cases to be executed per suite against the product release schedule. Timing is subject to the environment, and each case is judged on a pass, fail, or blocked outcome.
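A sketch of such a matrix, assuming a simple data shape of per-suite outcome lists (the suite names and results here are invented for illustration):

```python
from collections import Counter

# Hypothetical execution results per suite: list of pass/fail/blocked outcomes
results = {
    "login_suite": ["pass", "pass", "fail", "blocked"],
    "payment_suite": ["pass", "fail", "fail"],
}

def execution_matrix(results):
    """Summarize pass/fail/blocked counts for each suite."""
    return {suite: dict(Counter(outcomes)) for suite, outcomes in results.items()}

matrix = execution_matrix(results)
```

The resulting per-suite counts can then be tracked against the release schedule, cycle by cycle.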
Test Case Traceability Matrix
This document or matrix tracks the business requirements to be tested by the end of the production cycle. It is a living account of functionality and feature testing as it evolves.
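At its simplest, a traceability matrix is a mapping from requirements to the test cases that cover them; its most useful query is finding requirements that are not yet covered. The requirement and test-case identifiers below are hypothetical:

```python
# Hypothetical mapping of business requirements to covering test cases
traceability = {
    "REQ-001 user login": ["TC-01", "TC-02"],
    "REQ-002 password reset": ["TC-03"],
    "REQ-003 export report": [],
}

def uncovered(traceability):
    """Requirements with no test case yet -- gaps in test coverage."""
    return [req for req, cases in traceability.items() if not cases]
```

Because the matrix is a living document, running a check like `uncovered` at each build surfaces coverage gaps as features evolve.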
Defects Count vs Features
A defect in application code prevents the product from performing as required; it is an error that must be corrected before the application can proceed. This metric measures the stability of a feature based on the number of defects found within the development code.
Defects Burn Down Chart
The dramatic language belies a simple concept. The metric is basically a measure of how much work is left to do on a product, stated against time. It is another live measure — this time of the number of defects discovered and the time it has taken, or will take, to fix them.
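The line a burn down chart plots is just cumulative defects discovered minus cumulative defects fixed, per day. A minimal sketch, with invented daily snapshots:

```python
# Hypothetical daily snapshots over a five-day window
discovered = [10, 14, 18, 20, 20]   # cumulative defects found, day by day
fixed      = [2, 6, 11, 16, 20]     # cumulative defects fixed, day by day

def burn_down(discovered, fixed):
    """Remaining open defects per day: the line a burn down chart plots."""
    return [d - f for d, f in zip(discovered, fixed)]

remaining = burn_down(discovered, fixed)
```

A series trending toward zero, as here, indicates the fix rate is keeping pace with discovery; a flat or rising line signals the release is at risk.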
Automation Coverage
Another key progress indicator, automation coverage reveals how many lines of code have been automated against the total number that can be automated within a suite. It is a measurement of the efficiency of your test automation rather than the quality standards rated in the defect metrics.
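The calculation itself is a straightforward ratio; the figures below are illustrative, and the inputs could equally be test cases rather than lines of code:

```python
def automation_coverage(automated, automatable):
    """Percentage of automatable tests (or lines) actually automated."""
    if automatable == 0:
        return 0.0
    return 100.0 * automated / automatable

# e.g. 180 of 240 automatable cases have been automated
coverage = automation_coverage(180, 240)
```

Note the denominator is what *can* be automated, not the whole suite — manual-only cases are excluded by design.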
Test Execution Speed
This is a key metric used in Agile environments where testing is broken down into short sprints. It indicates the production speed by measuring test case execution against automated test case creation. More than measuring the speed of the QA engineers, it reveals the suitability of the code to automation testing and how testing is impacted by the number and severity of discovered defects.
Automation ROI
This is the critical metric for measuring the value of automation over manual testing when establishing a test budget. It describes the resource and time savings available by implementing automation, despite the relatively higher upfront cost compared to manual testing.
Basically, it defines the time taken to execute automation versus manual testing across a suite. The difference in that time can be translated into cost savings in manual engineering hours.
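That conversion from time saved to cost saved can be sketched in a few lines; the hours and hourly rate here are hypothetical placeholders:

```python
def automation_savings(manual_hours, automation_hours, hourly_rate):
    """Engineer-hours and cost saved per test cycle by automating a suite."""
    hours_saved = manual_hours - automation_hours
    return hours_saved, hours_saved * hourly_rate

# e.g. a suite that takes 200 manual hours but 50 automated hours per cycle
hours, cost = automation_savings(manual_hours=200, automation_hours=50, hourly_rate=60)
```

Because the savings recur on every test cycle while the automation build cost is paid once, the ROI compounds over the life of the suite.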
Automation ROI demonstrates why automation and manual testing must be treated differently within a test suite. Their roles are complementary, but you achieve value only when each is deployed to its strengths.
Software Quality in Automated and Manual Testing
There is some urgency within the QA industry to automate every element of the test phase. Certainly, there are tools available to automate even the UI features that typically benefit from the subjective, human aspect of manual testing. Without a doubt, it is faster and ultimately less expensive to automate UI, but it is possible to achieve 100% test coverage only when automation is complemented by manual testing.
In order to achieve breakeven in automation, the manual effort saved needs to exceed the automation cost by roughly 30% to 50%. For example, if automation takes 100 hours to implement, you would want it to save at least 130 hours of manual effort.
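The breakeven test described above reduces to a one-line comparison. This sketch uses the article's 30% figure as the default margin; the function name and data shape are illustrative:

```python
def meets_breakeven(automation_hours, manual_hours_saved, margin=0.30):
    """True when manual savings exceed the automation cost by the margin."""
    return manual_hours_saved >= automation_hours * (1 + margin)

# 100 hours of automation should save at least 130 manual hours at a 30% margin
```

Running the scoping numbers through a check like this early makes the automate-or-not decision explicit rather than assumed.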
This is the kind of calculation that has to be made during the initial scoping and planning studies. It is also the kind of decision that can be aided by the right outsourced QA expert.
Published at DZone with permission of Vakul Gotra. See the original article here.