Don't Be Average
A Scrum team that plans iterations and Sprints that genuinely challenge it, with a real possibility of failure, will in the end be the most successful.
Agile teams are commonly asked to plan, based on historical performance, how much work they think they can reasonably commit to in a fixed increment of time. I'm going to tell you that in most cases, this is the most destructive thing you can do to a team, a product, and your business.
Why? Because you are passively planning to be mediocre.
Betting on Averages
For the sake of this example, let's say this is a Scrum team. Nearly everyone has had some involvement in that world so it makes for good examples, but I've seen similar things in a lot of different contexts and environments and the result is nearly always the same. Our Scrum team came into work on day one of the iteration and headed straight into a Sprint Planning meeting/ceremony/whatever. Not a great idea, but really common.
Traditionally, that meeting is split into at least two sessions. The first session centers on what the team can do; the second on how they're going to do it. Great teams usually have a third session, which I'll come to later. The piece I'm focusing on right now is that first session: how much work can you commit to doing?
Here's how that usually plays out.
Capacity Based Planning
The team will look at their current 'Velocity.' Velocity actually means 'speed in a given direction' but to most business folk, it just means speed. That's okay, we forgive them.
Let's say this team has bi-weekly iterations with a velocity of 50 Story Points. How did they arrive at that number?
For this team, as for many teams, the Scrum Master produces the Velocity by calculating the average number of story points completed in the last N iterations. N in this example is ten, but it could be any number up to the number of iterations on record; it really doesn't matter, because taken on their own, averages are absolutely useless.
Averages Are Useless...
Let's use an extreme example to neatly demonstrate my point and do some super-simple math.
Our team has completed the following number of Story Points in the last ten iterations.
20, 20, 20, 20, 20, 80, 80, 80, 80, 80.
Our Velocity, which is the number we were about to use to predict our performance, is 50 Story Points. That number hasn't occurred once in the entire history of this team. Not even once!
If you look carefully, you'll notice that exactly half of the iterations were well under the average and exactly half were well over. Why is that?
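The arithmetic above is trivial, but it's worth seeing the two observations side by side: the average of the example history is a number the team never once produced, and the history splits exactly in half around it. A minimal sketch, using the iteration numbers from the article:

```python
# Velocity as a plain average, using the example iteration history above.
points = [20, 20, 20, 20, 20, 80, 80, 80, 80, 80]

velocity = sum(points) / len(points)
print(velocity)  # 50.0 -- a value that never appears in the history

# Exactly half the iterations fall well below the average, half well above.
below = sum(1 for p in points if p < velocity)
above = sum(1 for p in points if p > velocity)
print(below, above)  # 5 5
```

The average here is purely a central tendency; it says nothing about how likely any individual iteration is to land near it.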
You Are Flipping a Coin
We see that because, given an even distribution (real delivery is far less predictable than that; see the ludic fallacy for more), the average, and by extension the Velocity, represents a value which has roughly a 50% likelihood of occurring.

Does 50% sound like a solid 'commitment' to you? I didn't think so. It gets much worse than that, I'm afraid.
In most cases, that team would never have gotten past the original 20 Story Points they started out with. Committing to more than they had ever achieved before would be foolish, right? You might even suggest that a good Scrum Master would discourage the team from taking on too much.
Once in a while, if they were super-confident about the work in front of them, they might take on 25 or 30 Story Points, but that's rare enough not to sway the average.
That's particularly true when velocity is calculated over a long time, which is very common. You might think that a larger sample size would produce more accurate results, but it's actually the opposite: the average becomes more incorrect over time.
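One way to see why a long averaging window can get *more* wrong over time: if the team itself is changing, the long window keeps voting for the team as it used to be. This is a hypothetical illustration (the steady 2-points-per-iteration improvement is my assumption, not a number from the article):

```python
# Hypothetical: a team steadily improving by 2 points per iteration.
history = [30 + 2 * i for i in range(20)]  # 30, 32, ..., 68
actual_now = history[-1]                   # what the team can do today

avg_all = sum(history) / len(history)      # long window: all 20 iterations
avg_recent = sum(history[-3:]) / 3         # short window: last 3 iterations
print(actual_now, avg_all, avg_recent)     # 68 49.0 66.0
```

The all-time average lags nearly 20 points behind current capability, while the short window tracks it closely. Bigger samples only help when the underlying process is stable, and teams rarely are.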
Or rather, it should. What happens more often is something even more destructive.
We Become Predictably Slow
I want to be clear that culture plays an enormous role in what I'm about to say, but for a lot of larger organizations (and some smaller ones), this is generally true.
When we use the word 'commitment,' we want to be damn sure we're going to deliver. Not 50%. Not even 75%. But as near to 100% as possible.
As it turns out, we can do that. We can be pretty much certain we'll turn up with the goods. That generally happens in one or more of three ways.
- Overestimate the work.
- Use statistical models to provide high confidence.
- Do less.
In overestimating work, which is fantastically easy to do in software development, we guarantee that we're not over-challenged. What we're really saying is that we have very low confidence in our ability to do the work on time: there is probably a whole bunch of things we either don't know enough about to plan for sufficiently, or will be unable to complete due to outside influences, like relying on other teams for part of the work.
If we're really clever, we might use statistical process control and understand the variance of our velocity. By calculating the standard deviation (σ), we can quote a range: the mean ± 1σ covers roughly 68% of outcomes, ± 2σ roughly 95%, and ± 3σ roughly 99% (assuming the outcomes are normally distributed). Sounds great, right?
Unfortunately, unless you're actively normalizing the size of the incoming work, the variance in software development is so large that to be 99% confident, you end up saying that you'll complete between 5 and 500 Story Points. Not so useful!
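Using the same example history as before, the variance problem is easy to demonstrate. A minimal sketch with Python's standard `statistics` module (note the ±2σ rule itself assumes a roughly normal distribution, which real velocity data rarely is):

```python
import statistics

# The example iteration history from earlier in the article.
points = [20, 20, 20, 20, 20, 80, 80, 80, 80, 80]

mean = statistics.mean(points)
sigma = statistics.stdev(points)  # sample standard deviation, ~31.6

# A ~95% confidence "commitment" is mean +/- 2 sigma:
low, high = mean - 2 * sigma, mean + 2 * sigma
print(f"95% band: {low:.0f} to {high:.0f} story points")
# 95% band: -13 to 113 story points
```

A commitment of "somewhere between minus 13 and 113 points" is technically honest and completely useless, which is exactly the point: without normalizing the size of incoming work, the variance swamps the signal.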
The final, and most common, albeit subconscious, approach is to simply do less. That naturally drags the average, and thus expectations, downward. We can definitely go very slowly in every iteration. We did what we said we'd do. Everyone's happy. Christmas bonuses all round!
Is that why we got into this game? Ugh!
You might be in or work with a team that operates a bit like this. So how can you fix it? There are a couple of options, depending on what context you're in.
Option 1: First Things First
If you're thinking this is just another #noestimates post, you're close but not completely right.
Estimation can be a very useful technique if, and only if, the result will inform an important decision that is genuinely time-sensitive. Situations like that really exist.
You might be creating software in a strictly regulated environment like finance, where late delivery attracts monetary penalties. You might also be creating software for something that operates on a seasonal or fixed schedule, like sports. Kanbanites model risk profiles like that using Classes of Service. If you've not come across it, Google it now.
If you live in that world, release planning to guarantee compliance is business critical. So too are things like continuous delivery. Probably even more so because showing up late to the party, with faulty and incomplete software, won't go down well.
Ask me how I know!
You're going to want to deliver the most important features (MVP, MMF) into production first, taking the calendar of important events into consideration, as soon as conceivably possible. Hell, it ought to be File -> New Project -> Git Push -> Deploy to Production on day one!
That will help build trust and develop a relationship with your clients/sponsors/stakeholders that will give you latitude over scope later in the project.
Talk is cheap. Ship the code!
Option 2: Steer With a Compass, Not a Speedometer
Let's revisit our team and the missing third session from Sprint Planning. The third session will set the all-important Sprint Goal. It is the single most underrated aspect of Scrum, in my opinion.
The Sprint Goal is the 'why' you use to guide the 'what' and 'how' you decided on in the first two sessions.
Here's the thing though. If our estimations aren't going to result in a different Sprint Backlog, regardless of how big we say things are, why should we care? The most important things remain important. We're going to do them anyway.
Don't predict. Don't estimate. It's a crappy proxy for real progress and real success. It is completely wasteful and it's slowing you down.
Developers, in general, want to deliver what they said they could. People don't like to fail, but if we explicitly define success in terms other than time, we can avoid the issue entirely. Know your True North.
Agility means being as close to change as you can comfortably bear. You can keep almost all of your promises if you make them a couple of weeks in advance instead of months in advance. You can keep them all if you are delivering continuously.
Those who know me well understand how little I care for attempting to predict the future. There are lots of reasons why I believe it is, at best, a distraction and has a mostly negative impact on businesses and social groups of all kinds.
If you only ever attempt to deliver what is comfortable, you will become irrelevant. If you celebrate successful mediocrity, you run the risk of losing your best people to bolder, less risk-averse organizations that really engage, really innovate, and really challenge their employees to do their best.
Or, you could focus on what's important right now and stop wasting time. There are only two kinds of priority: now or later. The priorities can change as the world you live in evolves around you.
Predicting the future is a zero-sum game in a knowledge economy. Take risks. Fail. Stand for something. Aim to be the best. Get good at managing active risks and don't just avoid them completely because that's where the big wins are. Kill bad ideas quickly and nurture good ones. Know the difference, concretely.
Oh, and have fun whilst you're at it!
Published at DZone with permission of David Huntley . See the original article here.