Disclaimer: I'm thinking this through as much as anything. Don't expect any amazing conclusions, but I'd be interested to hear your thoughts and opinions on this.
Microsoft uses the term "bug bash" (example here — but I first heard them use this a few years ago) to mean a hunt for bugs. Everyone stops what they're doing and spends a period of time trying to find bugs. The aim is to find as many bugs as possible.
At every company where I'd previously known the term to be used, it meant something different. I'd always heard it used to describe a period of time set aside to try to fix bugs. Any bugs. The aim was to reduce the number of known bugs.
Bash the product until the bugs fall out vs. Bash the actual bugs.
As I start to write this, I realize I haven't asked the internet about this. It seems Wikipedia sides with Microsoft in the definition.
I prefer the other meaning.
I get it. Hunting for bugs is good. It's also really hard to fix bugs that haven't been found.
But there are problems with how I've seen the Microsoft approach used in practice.
- It can only really happen late in the development process. When it happens earlier, it's hard to separate actual bugs from things that are missing or known to be unfinished.
- It doesn't guarantee coverage of the app/software. Just because you have lots of different people using/testing the app at once doesn't mean they'll use all of it. In bug bashes I've seen before, they tend to generate a disproportionately large number of bugs in a small number of areas.
- It can be seen as an excuse for no, or minimal, beta testing. Also, note that beta testing should be monitored with usage analytics to ensure appropriate coverage.
- It can easily value quantity over quality. When the aim is to find as many bugs as possible in a restricted period of time, the effort put into writing a good bug report is often reduced.
- The quality of bug reports from a random person on the team often isn't as high as that from a professional tester. Poor-quality bug reports cost more to verify, investigate, and fix, so it's worth the time and effort to write a good one.
- Even professional testers slip into bad habits and raise low-quality bugs.
- It ignores automated testing.
- The details of what's tested aren't correlated with any test plans. So in addition to the uncertainty of coverage (as above), nothing tested as part of the "bash" is recorded. Gaps in test plans therefore aren't identified or filled unless a bug happens to be reported in that area.
Why bashing actual bugs is better:
- Getting overall bug counts down is good. It enables you to "see the wood for the trees." With fewer known bugs you can prioritize better and work on new features. You also avoid duplicate bugs.
- Bugs get fixed sooner. And fixing bugs sooner is cheaper.
- It's an easy way to create a sense of progress. While the bugs fixed may be trivial, watching the number of outstanding bugs go down can be good for morale.
- It encourages actually fixing bugs. Many teams are tempted to treat known bugs as lower priority than new work. Fixing known bugs before adding or changing features counters this and avoids a system full of known bugs.
- It gives developers the autonomy to choose what they work on, and can even let them fix bugs in their favorite parts of the code or pet features. A culture where fixing bugs is encouraged leads to better software and happier development teams.
I still like my approach better.
Is "bug bash" a term you use in your development/test process? How do you use it? What's good or bad about it?