When Do You Stop Testing?

You stop testing when there are no more bugs, right? If so, you're doing something fundamentally wrong.

By Yegor Bugayenko · Sep. 14, 15 · Opinion


There is software to be tested. There is a team of testers. There is some money in the budget. There is some time in the schedule. We start right now. Testers are trying to break the product: finding bugs, reporting bugs, communicating with programmers when necessary, and doing their best to find what's wrong. Eventually they stop and say, "We're done." How do they know when to stop? When has there been enough testing? It's obvious: when there are no more bugs left and the product can be shipped! If you think like this, I have bad news for you. You're fundamentally wrong.

All this is perfectly explained by Glenford Myers in his great book The Art of Software Testing. I will just summarize it here.


first, "testing is the process of executing a program with the intent of  finding errors  " (page 6). pay attention, the intent is to find errors. not to prove that the product works fine, but to prove that it  doesn't work  as intended. the goal of any tester is to show how the product can be broken, how it fails on different inputs, how it crashes under stress, how it misunderstands the user, how it doesn't satisfy the requirements. this is why dr. myers is calling testing "a destructive, even sadistic, process" (page 6). this is what most testers don't understand.

Second, any software has an unlimited number of bugs. Dr. Myers says that "you cannot test a program to guarantee that it is error free" (page 10) and that "it is impractical, often impossible, to find all the errors in a program" (page 8). This is also what most testers don't understand. They believe that there is a limited number of bugs, which they have to find and then call it a day. There is literally no limit! The number of bugs is unlimited, in any software product, no matter how small or big, complex or simple, new or old the product is.

With these axioms in mind, let's try to decide when testers have to stop. According to Dr. Myers, "one of the most difficult questions to answer when testing a program is determining when to stop, since there is no way of knowing if the error just detected is the last remaining error" (page 135).

They can't find all the bugs, no matter how much time we give them. And they are motivated to find more and more of them. But at some point we must make a decision and release the product. Looks like we will release it with bugs inside? Yes, indeed! We will release a product full of bugs. The only question is how many of them have already been found and how critical they were.

Let's put it all together. There are too many bugs to find all of them in a reasonable amount of time. However, we have to release a new version, sooner or later. At the same time, testers will always tell us that there are more bugs and that they can find them; they just need more time. What to do?

Dr. Myers says that "since the goal of testing is to find errors, why not make the completion criterion the detection of some predefined number of errors?" (page 136). Indeed, we should predict how many bugs are just enough to find in order to have a desirable level of confidence that the product is ready to be shipped. Then we ship it, consciously understanding that it still has an unlimited number of not-yet-discovered bugs.
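To make the idea concrete, here is a minimal sketch of what such a completion criterion could look like in code. The class name, the counter, and the idea of a single flat forecast (no severity weighting) are my own illustrative assumptions, not anything prescribed by the book:

    // A hypothetical exit criterion: stop testing once the number of
    // confirmed bugs reaches a forecast made before testing started.
    public final class BugForecastCriterion {

        private final int forecast; // predicted "enough" bug count
        private int found;          // confirmed bugs reported so far

        public BugForecastCriterion(int forecast) {
            this.forecast = forecast;
        }

        // Call this every time a tester confirms a new bug.
        public void bugConfirmed() {
            this.found += 1;
        }

        // True when the forecast is met; more bugs certainly remain,
        // but we agreed in advance that this gives enough confidence to ship.
        public boolean timeToStop() {
            return this.found >= this.forecast;
        }
    }

The whole point of the forecast is that it is fixed before testing begins; if testers can move the target while they work, the criterion stops being a criterion.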


David West, in Object Thinking, says that "software is released for use, not when it is known to be correct, but when the rate of discovering errors slows down to one that management considers acceptable" (page 13).
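Again, a small hypothetical sketch of that rate-based rule. The weekly bucketing and the threshold value are assumptions of mine; only the rule itself comes from the quote:

    import java.util.List;

    // A hypothetical rate-based exit rule: release when the number of bugs
    // discovered in the most recent period drops below a threshold that
    // management considers acceptable.
    public final class DiscoveryRate {

        // bugsPerWeek holds bug counts for consecutive weeks of testing.
        public static boolean acceptable(List<Integer> bugsPerWeek, int threshold) {
            int lastWeek = bugsPerWeek.get(bugsPerWeek.size() - 1);
            return lastWeek <= threshold;
        }

        public static void main(String[] args) {
            // 14, 9, 5, and 2 bugs found in four consecutive weeks;
            // management accepts a rate of 3 bugs per week or fewer.
            System.out.println(acceptable(List.of(14, 9, 5, 2), 3)); // true
        }
    }
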

Thus, the only valid criterion for exiting a testing process is the discovery of a forecasted number of bugs.


Published at DZone with permission of Yegor Bugayenko. See the original article here.

Opinions expressed by DZone contributors are their own.
