
A Planning Poker Experiment


Zone Leader John Vester performs a secret planning poker experiment, scoring stories solely on the amount of time required to discuss and vote on each story.


Quite some time ago, while I was a member of a feature team utilizing the Agile methodology, I decided to perform an experiment during our Sprint Grooming sessions. I intentionally waited to publish this article (originally planned for my personal blog) because I didn't want to reveal the identity of the corporation where the experiment was performed. As is often the case with true-story accounts portrayed in the media, I will assert that "some of the names and surroundings may have been changed to protect the innocent."

Background

The team was very focused on the Agile methodology, respecting the various ceremonies throughout the sprint lifecycle. It was quite impressive to see Agile adoption embraced across the enterprise, with full support from the decision-making C-level executives. One of the longest meetings we participated in as a full team was the grooming session, scheduled for two hours during every sprint to pave the way for future sprint workloads.

We utilized a modified Fibonacci sequence to score each story, based not only upon the development challenges, but upon the level of testing and validation required as well. For scoring estimation, valid values for a given story were 0.5, 1, 2, 3, 5, 8, and 13. In practice, a 13-point vote was not allowed; per our team rules, such a story had to be broken into at least two stories (e.g., one 5 and one 8). A team norm was to confirm that everyone agreed on the score before moving to the next story. In most cases, reaching consensus was not difficult.
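The scoring rules above can be sketched in a few lines of code. This is a minimal, hypothetical illustration (the `validate_score` helper and its messages are my own, not anything the team actually ran): valid values come from the modified Fibonacci sequence, and a 13 is rejected because such a story must be split.

```python
# Hypothetical sketch of the team's scoring rules: valid modified-Fibonacci
# values, with 13 disallowed because a 13-point story must be split.
VALID_SCORES = {0.5, 1, 2, 3, 5, 8}

def validate_score(score):
    """Return True if the score is an allowed vote; a 13 forces a split."""
    if score == 13:
        raise ValueError(
            "13-point stories must be broken into at least two stories "
            "(e.g., one 5 and one 8)"
        )
    return score in VALID_SCORES

print(validate_score(5))  # True
```

A 4, for example, is simply not a legal vote (`validate_score(4)` returns `False`), while a 13 raises an error to signal that the story needs to be decomposed.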

The Experiment

After participating in several grooming sessions, I started to see a pattern within the story analysis and scoring. I realized there was a direct correlation between the complexity of a story and the amount of time required to explain and discuss it. As an example, if the product owner was able to explain the story quickly and the conversation within the Sprint team was relatively short, the point value for the story would always fall into the 0.5 to 1 range.

Using this observation, I was able to map out the following table:

[Table: discussion time mapped to story-point score]

The discussion column indicates the amount of time required not only to explain the story, but to have any subsequent conversation. As I noted above, when the team voted that a story was 13 points in size, the norm was to break the story down into at least two stories. Almost always, this happened after talking about the story for over 15 minutes. In time, the number of stories that had to be broken down lessened, since 15 minutes is a long time to talk about a single story.
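The time-to-score mapping could be expressed as a simple lookup. Note that only the endpoints are stated in this article (very short discussions scored 0.5 to 1, and anything over 15 minutes implied a 13 and a story split); the intermediate thresholds below are illustrative guesses, not the team's actual table.

```python
# Hypothetical reconstruction of the discussion-time-to-points table.
# Only the endpoints are from the article; the middle rows are guesses.
THRESHOLDS = [  # (max discussion minutes, predicted score)
    (2, 0.5),
    (4, 1),
    (6, 2),
    (9, 3),
    (12, 5),
    (15, 8),
]

def predicted_score(minutes):
    """Predict a story's score from its discussion time alone."""
    for max_minutes, score in THRESHOLDS:
        if minutes <= max_minutes:
            return score
    return 13  # over 15 minutes: the story should be broken up

print(predicted_score(1))   # 0.5
print(predicted_score(20))  # 13
```

The point of the experiment is that a function like this, which never looks at the story's content, can come surprisingly close to the team's actual votes.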

The Results

During the grooming sessions, we were able to keep our laptops open. For my experiment, I kept a text editor open to track the story number being scored, the amount of time required to discuss the story, and the final score agreed upon by the team. When I returned to my desk, I would apply the automated score based upon my table.

Below are the results from one of the scoring sessions I tracked:

[Table: per-story discussion time, team score, and predicted score for one grooming session]

For the grooming session above, eight stories were groomed for a total of 30 points. Using my experiment, the same stories yielded 32.5 points, a difference of only 2.5 points (or ~8%). In other examples, where more complex stories were discussed, the difference was actually smaller, with one session exactly matching the team's score.
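As a quick sanity check on the arithmetic, the ~8% figure follows from the two totals quoted above:

```python
team_total = 30.0       # points from the team's votes (eight stories)
predicted_total = 32.5  # points from the discussion-time table

diff = predicted_total - team_total
pct = diff / team_total * 100
print(f"{diff} points (~{pct:.0f}%)")  # 2.5 points (~8%)
```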

Conclusion

For this particular team, there was a true correlation between the amount of time required to discuss a given story and the complexity of the effort required to complete the necessary tasks. Additionally, the experiment was successful because the team had endured months of grooming sessions — with no turnover among the team members.

This approach is not something that can be easily adapted to all Agile teams. In the case of this particular team, ground rules were established and followed which limited the discussion to the story being groomed. Other teams I have encountered have challenges keeping the discussion on track, which would make such a table far less effective. Still, it was an interesting observation to link the discussion time with the overall complexity of a given story.

Have a really great day!



