Software Testing Estimation Techniques & Hacks Webinar Recap
Last week, Justin Rohrman and I had the pleasure of running a webinar for QASymphony on Test Estimation Hacks, a deep dive into software testing estimation techniques.
The webinar was recorded; it's available for on-demand replay, and we've provided the slide share below.
Test Estimation Hacks: Tips, Tricks and Tools Webinar from QASymphony
The only drawback from the webinar was that we were tight on time and didn't get to all of your questions.
So, let's summarize the material briefly, then get to a few more questions here!
We started the webinar with the problems that estimates are supposed to solve: controlling the schedule, controlling costs, comparison shopping, and governance. There's a reasonable case, made by the #NoEstimates movement, that estimates aren't very good at those four things, but we went the other way, asking how to accomplish them with estimates.
Then we explained the challenges in test planning. Testing isn't straightforward; the number of bugs is outside of our control, and a big change can actually force a team to move backwards in coverage.
The next part of the talk compared various strategies to come up with an estimate, including:
Functional Decomposition
Break the work down into small chunks, estimate each chunk, add up the chunks.
Comparison
Rank this project with comparable projects, then look at their data.
Timeboxed
Spend a pre-planned amount of time on testing.
Prediction
Start like functional decomposition, but count actual historical progress and predict the end date (see the sketch after this list).
Guru Method
Ask a single expert.
Consensus
Gather estimates from a group of people and explore the differences.
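To make two of these concrete, here is a minimal Python sketch combining functional decomposition with prediction. Everything in it is invented for illustration; the chunk names, estimates, and progress numbers are assumptions, not data from the webinar.

```python
from datetime import date, timedelta

# Functional decomposition: break the test effort into small chunks,
# estimate each chunk in person-days, and add up the chunks.
chunks = {
    "login flows": 2.0,  # hypothetical estimates
    "checkout": 3.5,
    "reporting": 1.5,
    "regression sweep": 4.0,
}
total_estimate = sum(chunks.values())
print(f"Decomposition estimate: {total_estimate} person-days")

# Prediction: start from the same decomposition, but measure actual
# historical progress (velocity) and project the end date from it.
days_elapsed = 4
work_completed = 3.0                      # person-days of chunks finished so far
velocity = work_completed / days_elapsed  # person-days per calendar day
remaining = total_estimate - work_completed
projected_finish = date.today() + timedelta(days=remaining / velocity)
print(f"Projected finish at current velocity: {projected_finish}")
```

The design point is that prediction reuses the decomposition but replaces hope with observed velocity; when velocity changes, you rerun the projection instead of defending the original number.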
We ended up recommending a combination of these approaches – do more than one! And explore why the results are different. We also recommend talking in terms of risk and confidence intervals, instead of irrational overconfidence.
And now, on to your bonus questions!
So how do you plan for the unknown?
Is it built into your estimate, and how do you do it without being accused of padding?

Great question. I'll try to give a hard "math" answer and a soft "people" answer, and I'll keep the hard answer to a paragraph, really. To do it, we need two pieces of data: the work to be done and the recidivism (failure) rate, which together give us the actual work. Then we look at the data for how long we actually spend testing and see if it looks like a bell curve. If it does, we can look at typical velocity and account for what happens if we are a little faster or slower. That allows us to say, "If all the work goes forward as quickly as a typical bad project, then here is where we finish. If it goes forward at the true average, it's here."
The key there is that we don’t commit to things beyond our control and we argue from data on past projects.
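If you have that past-project data, the hard "math" answer can be sketched in a few lines. This is a hedged illustration, not the exact method from the webinar: it treats past velocities as roughly bell-shaped and quotes a range of finish dates. The velocity samples and the remaining-work figure are made up.

```python
import statistics
from datetime import date, timedelta

# Hypothetical velocity samples from past projects: person-days of test
# work finished per calendar day, including retest cycles.
velocities = [0.9, 1.1, 0.7, 1.0, 1.2, 0.8]

mean_v = statistics.mean(velocities)    # the "true average" pace
stdev_v = statistics.stdev(velocities)  # spread of the bell curve

remaining_work = 20.0  # person-days of testing still to do (assumed)

# "Typical bad project" here means one standard deviation slower than
# average; adjust to match your actual distribution.
slow_v = mean_v - stdev_v

print(f"At the true average ({mean_v:.2f}/day): "
      f"{date.today() + timedelta(days=remaining_work / mean_v)}")
print(f"At a typically slow pace ({slow_v:.2f}/day): "
      f"{date.today() + timedelta(days=remaining_work / slow_v)}")
```

The output is exactly the pair of statements above: where we finish at the true average, and where we finish if this turns out to be a typically bad project.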
Now the soft answer, which works in a world where the work is not evenly sliced and we don't have accurate data. In that world, we give a range. Go ahead and put something someone might call a buffer in at the end, but call it a "strategic risk management reserve," because that is what it is. When someone wants to take the reserve out, you say, "That's fine, if you will take responsibility for the outcome without the reserve, because without it, we can't."
Watch as the reserve magically reappears.
For more advice on this, check out "Critical Chain," which explains these concepts through a business novel. You'll be glad you did.
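As a starting point before you read the book, one buffer-sizing heuristic from the Critical Chain literature is the root-sum-of-squares method: strip the padding out of each task, then pool it into a single shared reserve. The task names and numbers below are invented, and this is one heuristic among several, not the book's only prescription.

```python
from math import sqrt

# Per-task (aggressive, padded) estimates in days. Hypothetical numbers.
tasks = [
    ("setup and data", 2.0, 4.0),
    ("feature tests",  5.0, 8.0),
    ("regression",     3.0, 6.0),
]

aggressive_total = sum(aggressive for _, aggressive, _ in tasks)

# Root-sum-of-squares: pool the safety stripped from each task into one
# shared reserve instead of padding every task individually.
reserve = sqrt(sum((padded - aggressive) ** 2
               for _, aggressive, padded in tasks))

print(f"Committed work: {aggressive_total:.1f} days")
print(f"Strategic risk management reserve: {reserve:.1f} days")
print(f"Total plan: {aggressive_total + reserve:.1f} days")
```

Note that the pooled reserve (about 4.7 days here) is smaller than the 8 days of per-task padding it replaces, which is exactly the argument to make to whoever wants to delete it.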
Is there a rule like "testing should last 30% of development time," or any best-practice rule like this?

As I mentioned in the webinar, testing is largely dependent on things outside of its control, like the quality of the code before it arrives in test, the problems we find, and the fixing speed. Instead of an externally suggested percentage, I'd suggest starting by figuring out where you are now (a sketch of that measurement follows below) and asking whether that rate sounds right or should be improved. As an aside, as a member of the Context-Driven School of Software Testing, I would say there are no best practices; practices are better or worse in a given context.
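As a hedged illustration of "figuring out where you are now," the measurement itself is trivial once you have the history; the project names and day counts below are invented.

```python
# Hypothetical history: (project, dev days, test days including retests).
history = [
    ("billing v2", 40, 14),
    ("mobile app", 25, 12),
    ("reporting",  30,  9),
]

for name, dev_days, test_days in history:
    share = test_days / (dev_days + test_days)
    print(f"{name}: testing was {share:.0%} of the schedule")
```

If the numbers cluster, that cluster is your current rate; the question is then whether it is the right rate for your context, not whether it matches someone else's 30%.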
Wouldn't it be more effective to simply approach the person you reported to before you go over the estimate and renegotiate features and time, rather than just blowing the estimate and wasting company money?

One thing we didn't cover was what to do when you are running late. Generally, I'm for reporting early and often; politically it may look better to hold the news and declare fewer, larger slips, but that is just dishonest. So when you know you'll be late, declare a slip right away, and make it large enough that you shouldn't need another. A second slip indicates that you (the person declaring the slip) either don't know what is going on or are not in charge.
Of course, you might not be in charge. It is possible the problem is quality and retest cycles – in that case, you should have data.
Finally, if you can, provide options to cut the scope of testing. As we discussed on the webinar, you can probably hit the original deadline by moving the "cut line" of covered tests up – is that okay?
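Here is a minimal sketch of that cut line, assuming you can put a rough risk score and a time estimate on each test; every name and number below is hypothetical.

```python
# Hypothetical test backlog: (test name, risk score, estimated minutes).
tests = [
    ("payment happy path", 9, 30),
    ("login lockout",      8, 20),
    ("report export",      5, 45),
    ("profile photo crop", 2, 25),
    ("tooltip wording",    1, 10),
]

budget_minutes = 60  # time left before the deadline (assumed)

# Run down the list in risk order and draw the cut line where the
# budget runs out.
covered, cut, spent = [], [], 0
for name, risk, minutes in sorted(tests, key=lambda t: t[1], reverse=True):
    if spent + minutes <= budget_minutes:
        covered.append(name)
        spent += minutes
    else:
        cut.append(name)

print("Above the cut line:", covered)
print("Below the cut line (visible, negotiable risk):", cut)
```

What falls below the line becomes a visible, negotiable decision rather than silent non-coverage.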
Published at DZone with permission of Matthew Heusser.