When Do You Stop Testing?
You stop testing when there are no more bugs, right? If so, you're doing something fundamentally wrong.
There is software to be tested. There is a team of testers. There is some money in the budget and some time in the schedule. We start right now. Testers try to break the product: finding bugs, reporting them, communicating with programmers when necessary, and doing their best to find what's wrong. Eventually they stop and say, "We're done." How do they know when to stop? When has there been enough testing? It's obvious: when there are no more bugs left and the product can be shipped! If you think like this, I have bad news for you. You are fundamentally wrong.
All of this is perfectly explained by Glenford Myers in his great book The Art of Software Testing. I will just summarize it here.
first, "testing is the process of executing a program with the intent of finding errors " (page 6). pay attention, the intent is to find errors. not to prove that the product works fine, but to prove that it doesn't work as intended. the goal of any tester is to show how the product can be broken, how it fails on different inputs, how it crashes under stress, how it misunderstands the user, how it doesn't satisfy the requirements. this is why dr. myers is calling testing "a destructive, even sadistic, process" (page 6). this is what most testers don't understand.
Second, any software has an unlimited number of bugs. Dr. Myers says that "you cannot test a program to guarantee that it is error free" (page 10) and that "it is impractical, often impossible, to find all the errors in a program" (page 8). This is also what most testers don't understand. They believe that there is a limited number of bugs, which they have to find, and then call it a day. There is literally no limit! The number of bugs is unlimited in any software product, no matter how small or big, complex or simple, new or old the product is.
With these axioms in mind, let's try to decide when testers have to stop. According to Dr. Myers, "one of the most difficult questions to answer when testing a program is determining when to stop, since there is no way of knowing if the error just detected is the last remaining error" (page 135).
Testers can't find all the bugs, no matter how much time we give them. And they are motivated to find more and more of them. But at some point we must make a decision and release the product. Does that mean we will release it with bugs inside? Yes, indeed! We will release a product full of bugs. The only question is how many of them have already been found and how critical they were.
Let's put it all together. There are too many bugs to find all of them in a reasonable amount of time. However, we have to release a new version sooner or later. At the same time, testers will always tell us that there are more bugs and they can find them; they just need more time. What should we do?
Dr. Myers says, "since the goal of testing is to find errors, why not make the completion criterion the detection of some predefined number of errors?" (page 136). Indeed, we should predict how many bugs are enough to find in order to have a desirable level of confidence that the product is ready to be shipped. Then we ship it, consciously understanding that it still has an unlimited number of not-yet-discovered bugs.
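To make the idea concrete, here is a minimal sketch in Python of such a "predefined number of errors" exit check. The per-severity breakdown, the forecast numbers, and all names are my own illustrative assumptions, not something Myers prescribes:

```python
# Hypothetical forecast of findable bugs per severity class
# (the numbers and the breakdown are illustrative assumptions).
FORECAST = {"critical": 5, "major": 30, "minor": 85}

def may_stop_testing(found: dict[str, int], forecast: dict[str, int]) -> bool:
    """Testing may stop only when every severity class has reached
    its forecasted bug count."""
    return all(found.get(sev, 0) >= target for sev, target in forecast.items())

# Bugs detected so far in this release cycle (made-up data).
found_so_far = {"critical": 5, "major": 28, "minor": 90}

if may_stop_testing(found_so_far, FORECAST):
    print("Exit criterion met: ship, knowing undiscovered bugs remain.")
else:
    print("Keep testing: some severity classes are below forecast.")
```

The point of the sketch is that "done" is defined before testing starts, as a target count, rather than discovered along the way.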
David West, in Object Thinking, says that "software is released for use, not when it is known to be correct, but when the rate of discovering errors slows down to one that management considers acceptable" (page 13).
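West's criterion can be sketched the same way: stop watching the total and watch the trend instead. The window size, the threshold, and the sample dates below are illustrative assumptions, not values from the book:

```python
# A minimal sketch of a "rate of discovery" exit criterion: release when
# bugs found per week over a trailing window drops to a level that
# management considers acceptable. All constants here are hypothetical.
from datetime import date, timedelta

def discovery_rate(bug_dates: list[date], window_days: int = 14) -> float:
    """Bugs found per week over the trailing window."""
    cutoff = max(bug_dates) - timedelta(days=window_days)
    recent = sum(1 for d in bug_dates if d > cutoff)
    return recent / (window_days / 7)

ACCEPTABLE_RATE = 2.0  # assumed threshold: two newly found bugs per week

# Made-up discovery dates: frequent finds early, tapering off later.
dates = [date(2017, 5, d) for d in (2, 3, 3, 5, 9, 12, 20, 26, 28)]

rate = discovery_rate(dates)
print(f"Current rate: {rate:.1f} bugs/week")
if rate <= ACCEPTABLE_RATE:
    print("Discovery rate is acceptable: release.")
else:
    print("Discovery rate still too high: keep testing.")
```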
Thus, the only valid criterion for exiting a testing process is the discovery of a forecasted number of bugs.