A Big Estimate is Not a Sum of Small Estimates


I’m working with a client that has multiple, non-collocated component teams working on one project. It’s not my ideal situation, but we’re making the best of it.

We built a story map of business-oriented, project-level “epics.” These have been prioritized within business themes and tentatively scheduled for development releases. The early ones have been estimated with a level of effort (LOE). These LOEs are essentially Small, Medium, and Large, but they’re given numeric scores so that project progress toward development releases can be tracked, from a business point of view, on a burnup chart.
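The LOE-to-burnup mechanism can be sketched in a few lines. This is a minimal illustration, assuming a hypothetical mapping of LOE labels to numeric scores (the actual scores and epic names are the client’s own and aren’t given here):

```python
# Hypothetical mapping of LOE labels to numeric scores -- the article
# doesn't give the client's actual numbers, so these are illustrative.
LOE_POINTS = {"Small": 3, "Medium": 8, "Large": 20}

# A few project-level epics with their business-level LOE and status.
epics = [
    {"name": "Epic A", "loe": "Small",  "done": True},
    {"name": "Epic B", "loe": "Medium", "done": True},
    {"name": "Epic C", "loe": "Large",  "done": False},
]

total = sum(LOE_POINTS[e["loe"]] for e in epics)
done = sum(LOE_POINTS[e["loe"]] for e in epics if e["done"])
print(f"Burnup: {done}/{total} points")  # prints "Burnup: 11/31 points"
```

Each snapshot of `done` versus `total` becomes one data point on the burnup chart; the coarse S/M/L buckets are plenty for a business-level view.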

These project-level “epics” are broken down into component-level “stories” for development. The component stories have their own acceptance criteria at the component boundaries and are estimated by the component team doing the work. These estimates don’t, and can’t, use the same “units” as the business-level estimates: there’s no way to make a “point” mean the same thing from team to team, much less make it comparable to the high-level ones. The component story estimates are used for tracking progress within each team’s sprint.

It’s not the most highly-tuned Agile process, but it’s pretty darn good for a project transitioning to Agile in a large organization used to a highly controlled, serial lifecycle. It’s reasonable, and it’s theirs.

So where’s the rub? They’re also using a well-known “Agile Lifecycle Management” tool. Remember, this is a distributed project. Also, the Quality Office, accustomed to that highly controlled, serial lifecycle, demands lots of documentation.

We started putting the epics and stories into this tool. Determined not to let the tool dictate the process, we ignored that it wanted us to estimate task-hours. We assigned the stories as children to the epics. When we did so, the tool deleted our epic estimate, and replaced it with the sum of the story estimates. This gives us a lot more precision—we’ve got way more sizes than Small, Medium, and Large, now—but much less accuracy.

We estimated the fruit salad as a Small and called it 5 points. The tool saw that we were putting 2 pineapples, 6 apples, 3 grapefruit, and 120 blueberries into the salad. Therefore the fruit salad is now sized as 131 fruit. How useful is that?
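The underlying data-model problem can be sketched as follows. This is an illustrative model, not the tool’s actual schema: the point is that an epic’s business-level estimate and the sum of its child stories’ points are different units and should be stored separately.

```python
# Sketch of what the tool gets wrong, using the fruit-salad example.
# Class names and structure are illustrative, not the tool's data model.
class Story:
    def __init__(self, points):
        self.points = points  # the component team's own units


class Epic:
    def __init__(self, loe_points):
        self.loe_points = loe_points  # business-level LOE score
        self.stories = []

    def add_story(self, story):
        # Attaching a child must NOT overwrite the epic's own estimate.
        self.stories.append(story)

    def child_point_sum(self):
        # Precise but meaningless: it sums incomparable units.
        return sum(s.points for s in self.stories)


salad = Epic(loe_points=5)  # estimated Small and called 5 points
for count in (2, 6, 3, 120):  # pineapples, apples, grapefruit, blueberries
    salad.add_story(Story(count))

print(salad.loe_points)         # 5 -- the business estimate survives
print(salad.child_point_sum())  # 131 -- more precision, less accuracy
```

Keeping the two numbers in separate fields is what the tool fails to do: it throws away `loe_points` the moment children are attached.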

It reminds me of Dave Nicolette‘s classic post:

How many Elephant Points are there in the veldt? Let’s conduct a poll of the herds. Herd A reports 50,000 kg. Herd B reports 84 legs. Herd C reports 92,000 lb. Herd D reports 24 head. Herd E reports 546 elephant sounds per day. Herd F reports elephant skin rgb values of (192, 192, 192). Herd G reports an average height of 11 ft. So, there are 50,000 + 84 + 92,000 + 24 + 546 + 192 + 11 = 142,857 Elephant Points in the veldt. The average herd has 20,408.142857143 Elephant Points. We know this is a useful number because there is a decimal point in it.

So far, we haven’t found a way around this. (Nor for the fact that we can’t set the release for an epic if it has any children attached.) It’s a classic case of the tool trying to dictate the process rather than supporting it.


