Velocity Anti-Patterns - Attempts to Show Increased Velocity
Pressure on teams to increase velocity can lead to practices that don't deliver any value and defeat the purpose.
When leadership asks for an increase in velocity, there are a few common behaviors that occur. Each of them is an attempt to satisfy the potentially unrealistic ask.
It is intriguing to me how often a manager will make a change such as this to a system of work and then later proclaim that the team is gaming the system. This is simply not the case. The real gaming of the system is the improper application of targets or goals to lagging indicators. The rest is just natural consequence.
The following are but a few examples of what happens when a manager games the system.
Inflating Estimates

A few years back, I was working with an organization where teams were achieving the expected increase in velocity, but leadership didn't feel like things were moving any faster. Selecting one team, we looked at their history and found that their velocity had gone from an average of about 20 points to an average of about 40. On a hunch, we ran a report to figure out the average story size for each iteration. Sure enough, the average story size had gone from around 1.4 to around 3.1. In fact, you could see a punctuated increase in average size followed by a slow but steady climb. The punctuated increase came right around the time leadership introduced the new burn-up charts and put a focus on velocity.
Looking at the average number of stories completed per iteration, the numbers were telling. With an average velocity of 20 at 1.4 points per story, they were completing around 14 stories per iteration. With an average velocity of 40 at 3.1 points per story, they were completing around 13 stories per iteration.
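The arithmetic behind that comparison is simple to sketch. A minimal illustration, using the averages from the example above (the helper function is hypothetical, not part of any tool the team used):

```python
def stories_per_iteration(avg_velocity: float, avg_story_size: float) -> float:
    """Average number of stories completed per iteration."""
    return avg_velocity / avg_story_size

# Before the focus on velocity: ~20 points per iteration at ~1.4 points per story.
before = stories_per_iteration(20, 1.4)
# After: ~40 points per iteration at ~3.1 points per story.
after = stories_per_iteration(40, 3.1)

print(round(before), round(after))  # roughly 14 vs. 13 stories per iteration
```

Velocity doubled on paper, but the count of stories actually finished each iteration stayed flat, or even dipped slightly.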
Were they, in fact, moving faster and the stories were coincidentally larger? How could we know for sure?
We took an evenly distributed sampling of stories across the history of the project and printed them out without any sizing information. We then asked the team to size the stories using the same techniques they'd always used. The average story size came out to approximately 2.8, and the earlier stories, originally sized at an average of 1.4, now came out at an average of 2.6.
Had they been gaming the system? No. As we've already discussed, we game the system when we change it; the resulting behaviors are just natural consequence. We can't know for sure whether there was, at one time, a deliberate increase in story sizes, but we can say that under the given conditions, the team genuinely believed the larger numbers were more accurate.
Splitting Points

Splitting points refers to two possible activities: taking partial credit for work picked up in a given iteration but not completed, or breaking stories up into smaller pieces that add up to more than the original.
Taking Partial Credit
This isn't really a way of making our velocity look greater, but it is a way of making our velocity look "better," so I've added it here.
Say we have an 8 point story that we pick up mid-iteration and don't complete. At the end of the iteration, we award ourselves 3 points and roll 5 points into the next iteration. The intent, I believe, is to be able to represent the effort expended in each iteration. This allows for a smoother looking burn-down chart across a release. And that means fewer people are asking questions.
Of course, if we're not actually delivering value in a given iteration, we probably shouldn't be taking partial credit based on "effort." And if this is happening consistently, maybe those people should be asking questions.
And it is not uncommon for teams that engage in this practice to do it repeatedly. I've seen plenty of teams that split and roll. At the end of an iteration, they split all stories in process and roll the remainders into the next iteration. In some cases, such as when they use an electronic tracking system, they actually create a duplicate card, split the points, and move the new card into the next iteration.
I've also seen this happen across multiple iterations. An 8 point card gets split into 3 done and 5 remaining. That 5 gets split into 2 done and 3 remaining which gets split into 2 and 1. Four iterations to complete an 8 point story. And we've lost all visibility of the lead time and cycle times on that story.
Burn charts look good. Velocity seems pretty consistent. Bonuses will be paid to the Scrum Masters. And yet, we've no idea when the work in this iteration will actually be done. We're foolish enough to proclaim that we consistently get 12 points done per iteration, so this 5 point story should easily get done next iteration. We're ignoring the fact that the work completed in a given iteration was actually started in prior iterations and that odds are very low we'll ever finish 5 points on a card in a single iteration.
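A toy sketch of the split-and-roll example above shows exactly what the per-iteration numbers hide. The figures come from the text; the variable names are just for illustration:

```python
# The 8-point card as it was actually credited: 3, then 2, then 2, then 1.
credited_per_iteration = [3, 2, 2, 1]

# The burn chart sees steady progress: every iteration reports points done,
# and the credited points add back up to the original estimate.
total_credited = sum(credited_per_iteration)

# But the story's real cycle time is the whole span of iterations,
# which per-iteration partial credit erases from the record.
cycle_time_in_iterations = len(credited_per_iteration)

print(total_credited, cycle_time_in_iterations)  # 8 points, 4 iterations
```

Every iteration's chart looks healthy in isolation; only the cycle time, which the practice destroys, reveals that one story took four iterations.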
Breaking Up Stories
This one is interesting because it is often done for the wrong reasons, but sometimes it works.
Some teams will take a large story and break it into smaller pieces in order to make it look bigger. Take a team that estimates on a Fibonacci scale where stories are sized as 0, 1, 2, 3, 5, 8, 13, 21, and so on. A 5 point story gets broken into three pieces, each estimated at 2 points. Voilà, 5 points becomes 6. Nice. They couldn't convince management it was an 8 (thanks, Fibonacci), but through some sleight of hand, they bumped it to 6. More velocity, here we come.
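The sleight of hand is easy to reproduce in a couple of lines. A minimal sketch, with the scale and sizes taken from the example above:

```python
# The team's estimation scale from the example.
FIBONACCI_SCALE = [0, 1, 2, 3, 5, 8, 13, 21]

original_estimate = 5      # the story as first sized
split_pieces = [2, 2, 2]   # three pieces, each estimated at 2 points

# Every piece is a legal size on the scale...
assert all(p in FIBONACCI_SCALE for p in split_pieces)

# ...yet the same work now adds up to more points than before.
inflated_total = sum(split_pieces)
print(original_estimate, "->", inflated_total)  # 5 -> 6
```

No work changed; only the accounting did.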
Seems a tad dubious, does it not?
But it can actually work. And I don't mean it can work because it artificially increases your velocity, which it does. I mean it can work because you get more work done in less time.
Assume that stories are split along reasonable seams into chunks of work that can be individually delivered. Now the team has increased the likelihood they can both deliver value in this iteration and improve their throughput.
In some environments I've coached, we've seen an interesting outcome when stories get broken down into smaller slices. We found that when we broke large stories into discrete parts, the average time to deliver all discrete parts was less than the time to deliver the larger story. This held true even when the parts added up to more than the story. For more on this, see the section on correlations, where we discuss Lead Time by Story Size.
If you're going to split stories, do it for the right reasons. Do it because you want to optimize for learning as you go. Do it because you want improved forecasting. Do it because you want a more consistent delivery cadence. Do it because you want to get working software into the hands of your customers as soon as possible.
And a side effect will be that your actual throughput goes up.
You can find more on the topic of velocity and metrics for agile teams in my book, "Escape Velocity".
Published at DZone with permission of Doc Norton, DZone MVB. See the original article here.