A few months ago, the webinar “Learn Faster: Deploy what you have – Virtualize what you don’t” discussed testers’ dissatisfaction with the completeness of their testing. In an IBM survey of 250 senior testers, fewer than 10% had complete confidence in the software being released.
[Chart: Tester Confidence in Software Being Released]
Part of our message in that webinar was that with more rapid feedback, you could learn faster and get more confidence in your releases. One might expect that as software methods and technologies evolve, confidence would tend to improve.
However, I suspect the correlation between methods/tools and confidence is fairly weak. Time to market is key, and teams try to balance quality against it. Better approaches to testing and driving quality may not produce higher-quality software at release time; instead, they may simply pull the release date forward.
Testers may find this frustrating: their personalities naturally skew towards caring about not shipping broken stuff. But quality departments increasingly view their job not as certifying that the software is great, but as giving the business a good estimate of the risk of problems at release time.
How would this work for your teams? If they got better or faster at testing, would your software quality benefit most, or would the business have a consistent appetite for risk and choose to get to market faster?