
Value-Based Test Prioritization

The art of software testing is finding the right balance that maximizes the outcome of testing within the given schedule, budget, and other constraints.

By Vu Nguyen · Dec. 29, 16 · Opinion


We all know that exhaustive testing is impossible in practice, even for a small software application. The reason is that schedule and budget are limited, while the number of possible tests is practically unlimited. The art of software testing is finding the right balance that maximizes the outcome of testing within the given schedule, budget, and other constraints.

In this post, we will discuss a method that helps find that balance.

First, let’s consider a few common scenarios that you, as a tester, may encounter. In one scenario, you have 1,000 test cases to write and run in a one-month iteration or sprint. Your project produces many builds during the iteration, and your testing team of three people is barely enough to run every test case once. Because your manpower is limited, you have to decide which test cases to run on which build and which test cases have to be run repeatedly on multiple builds. In another scenario, late in the project, your customer asks you to test a release build, and you have only five days to run 2,000 test cases. What would you do if you don’t have enough manpower to run all of them in five days?

Value of Testing

Vilfredo Pareto, an Italian economist, developed the popular principle bearing his name (the Pareto principle) after observing that roughly 80% of the land in Italy was owned by 20% of the population. The principle has been shown to hold for many phenomena and has been applied widely in business and the social sciences.

Using the Pareto principle, we can say that roughly 80% of software defects come from 20% of the modules, features, or test cases, meaning that 20% of test cases account for 80% of the value of testing. The implication is that some test cases are more valuable than others, and a small number of test cases account for the majority of the value of testing.

Given constrained schedule and budget, we maximize the outcome of testing by focusing on the test cases that bring the most value to customers. The problem then becomes identifying those most valuable test cases (see the figure below).

Figure 1. Value gained by the value-neutral and value-based testing approaches.

This chart shows the difference in value gained by the two testing approaches, value-neutral and value-based. The value-neutral approach treats every test as equally important; thus, to achieve 80% of the value, we have to perform 80% of the tests. With the value-based approach, by contrast, we can achieve 80% of the value with 20% of the tests.

Two major goals of software testing are detecting defects and establishing confidence in the quality of the product. For the former goal, the testing team tries to find as many defects as possible in the software so that they can be fixed in later releases. For the latter goal, the team tests the software functionality to certify whether the software is ready for release or decide how much more development is needed to release the product. 

The value of testing lies largely in satisfying these two goals.

We should note that these objectives are not always equally important; their relative weight depends on when testing is performed and for what purpose. When customers want to demonstrate or release the product, having confidence in it is more desirable than finding more defects. Detecting defects matters most in the early stages, while establishing confidence is crucial in the later stages of development and when delivering the product to customers. You don’t want low confidence in the quality and stability of the product when sending it to customers and end users.

Prioritizing Test Cases Using Risks

Given a set of test cases, we prioritize them by the value we expect to gain from running them. Value here is a general term covering the number of defects detected, time reduced, effort saved, potential problems avoided, or confidence gained after testing. Prioritizing does not mean that you skip the remaining test cases, but that you focus more on the high-priority ones, especially when you are under pressure to complete testing within a short time.

Suppose that we have a set of test cases that are linked to requirements. The dependencies among test cases are also specified. 

Step 1: Determine Risk Exposure (RE) for Each Test Case

RE = Chance of Defect x Effect.

Risk exposure measures the level of risk carried by the test case. Chance of Defect is the probability of finding a defect when executing the test case, ranging from 0% to 100%. Effect is the loss incurred if the defect goes undetected; it can be quantified as the effort spent handling the problem when the defect is found later. Chance and Effect are subjective measures specified using experience and judgment.

The Chance measure typically depends on the developer and the feature. If defects were found in the feature in previous builds, then the chance of finding a defect again is high. Likewise, if a developer’s code has historically contained many defects, then the test cases associated with that code have a high Chance of Defect.

The Effect measure is driven by the severity of the defect and the importance of the associated functionality. Effect is high when a serious defect occurs in a core feature and low for trivial defects.

When determining Chance, ask the question, “What is the probability of finding a defect if the test case is executed thoroughly?” When specifying Effect, ask the question, “How serious would that defect be?”
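To make Step 1 concrete, here is a minimal sketch in Python. The `TestCase` class and its field names are illustrative assumptions, not part of the original method description:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    chance: float  # Chance of Defect: probability of finding a defect, 0.0-1.0
    effect: float  # Effect: estimated loss (e.g., person-hours) if the defect slips through

    def risk_exposure(self) -> float:
        # RE = Chance of Defect x Effect
        return self.chance * self.effect
```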

Step 2: Calculate Risk Reduction Leverage (RRL) for Each Test Case

RRL = (RE before Testing – RE after Testing)/Effort.

RE before Testing and RE after Testing are the risk exposures before and after the test case is exercised, respectively. RE after Testing is usually smaller than RE before Testing because the chance of finding defects drops once the test has been exercised.

Effort is the amount of time spent running the test and on related activities, such as preparing the test environment and reporting defects.
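Continuing the sketch, RRL is a plain function of the two exposures and the effort. A hypothetical helper might look like this:

```python
def risk_reduction_leverage(re_before: float, re_after: float, effort: float) -> float:
    """RRL = (RE before Testing - RE after Testing) / Effort.

    re_before, re_after: risk exposures (Chance x Effect) before and after testing.
    effort: person-hours spent running the test and on related activities.
    """
    return (re_before - re_after) / effort
```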

Step 3: Prioritize the Test Cases According to Their RRL Value

The higher the RRL value, the more value the test case has.

Table 1. Test Case Prioritization.

Table 1 provides an example of prioritizing three simple test cases by determining the RE and RRL of each. Chance is measured as a percentage, Effect and Effort in person-hours, while RE and RRL have no unit. The priority of the test cases is set by their RRL values.
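Since Table 1 is an image, here is a hypothetical worked example in the same spirit; all numbers are made up for illustration, not the values from Table 1:

```python
# (name, chance before, chance after, effect, effort) - illustrative values only
cases = [
    ("TC-1", 0.80, 0.20, 10.0, 2.0),
    ("TC-2", 0.50, 0.10, 4.0, 1.0),
    ("TC-3", 0.30, 0.05, 20.0, 4.0),
]

def rrl(before: float, after: float, effect: float, effort: float) -> float:
    # RRL = (RE before - RE after) / Effort, where RE = Chance x Effect
    return (before * effect - after * effect) / effort

# Higher RRL means higher priority.
for name, before, after, effect, effort in sorted(
    cases, key=lambda c: rrl(*c[1:]), reverse=True
):
    print(f"{name}: RRL = {rrl(before, after, effect, effort):.2f}")
# Prints TC-1: RRL = 3.00, then TC-2: RRL = 1.60, then TC-3: RRL = 1.25
```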

Applying the Method

You may rightly have several concerns about this method. One is that it requires several experience-based estimates, which are difficult to get right. Estimating each of thousands of fine-grained test cases would also take considerable time, even in small projects. However, we can simplify the method to make it more practical. There are a couple of ways to do this.

  • Ignore the risk after testing, and prioritize the test cases by considering only the risk before testing.

  • Classify test cases into groups based on the associated functionality or module, then determine the RRL of one representative test case per group. Every test case in a group gets the same priority, though you can always adjust the priority of individual test cases if needed (see the sketch after this list).
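As a sketch of the second simplification, each test case can simply inherit the RRL of its group’s representative. The module names and scores below are invented for illustration:

```python
# Each test case belongs to a module; one representative per module is scored.
test_cases = [
    {"name": "TC-1", "module": "login"},
    {"name": "TC-2", "module": "login"},
    {"name": "TC-3", "module": "checkout"},
]
# RRL of the representative test case of each group (hypothetical values)
group_rrl = {"login": 3.0, "checkout": 1.25}

# Every test case in a group gets the group's priority; adjust individual
# cases afterward if needed.
by_priority = sorted(test_cases, key=lambda tc: group_rrl[tc["module"]], reverse=True)
print([tc["name"] for tc in by_priority])  # ['TC-1', 'TC-2', 'TC-3']
```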

Conclusion

There are many approaches to finding the right balance, depending on your testing skills, experience, methods, and supporting tools. This post presented one useful method, giving you not only a concrete technique but also a way of thinking about maximizing your testing outcome. The takeaway is that not all tests are equally important; they should be treated differently to generate the best possible value for your team and customers. To do that, you can use your intuition and experience, or a constructive method like the one discussed in this post.


Published at DZone with permission of Vu Nguyen. See the original article here.

Opinions expressed by DZone contributors are their own.
