LeadingAgile uses Agile Health Metrics to demonstrate the results of our process improvement efforts and to identify areas that need further improvement. We have many internal documents describing our approach that we share with our clients, but to my surprise, it seems that we have never blogged about it. Here is a high-level view of the metrics we often start with.
When deciding what to measure, start with a goal. First, ask yourself what outcomes you are after: your goals. Then consider what is needed to meet those goals. Finally, ask what metrics indicate whether you have what you need. You may recognize this as the Goal-Question-Metric approach.
Our clients tend to care about predictability, early ROI, improved quality, or lower cost. Predictability seems to be paramount. They want teams to get good at making and keeping promises, consistently delivering working, tested, remediated code at the end of each sprint. A team that is not predictable isn’t “bad” – but it isn’t predictable. Without stable, predictable teams we can’t have stable, predictable programs, particularly when there are multiple dependencies between teams.
This post focuses on metrics for predictability. The goal, then, is:
Teams can plan, coordinate, and deliver predictably enough to make a release level commitment.
Here’s how we break that down:
- Does the team deliver the functionality it intended each sprint?
- Has the team established a stable velocity?
- Does the team frequently deliver working, tested, remediated code?
- Does the team have everything expected each sprint to perform the work?
- Does the team have confidence they will deliver the functionality expected for the release?
We answer these questions with the following metrics:
Story and Point Completion Ratio
- Number of Committed Stories Delivered / Number of Committed Stories
- Number of Committed Points Delivered / Number of Committed Points
This metric helps teams become predictable in their estimating and sprint planning. It encourages smaller stories and more effort spent getting work ready prior to the sprint. We like to see delivered points and stories within 10% of the commitment.
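To make the arithmetic concrete, here is a minimal Python sketch of the two ratios and the 10% check. The class and function names are illustrative, not part of any LeadingAgile tooling:

```python
from dataclasses import dataclass

@dataclass
class SprintCommitment:
    committed_stories: int
    delivered_stories: int
    committed_points: int
    delivered_points: int

def completion_ratios(s: SprintCommitment) -> tuple:
    """Return (story completion ratio, point completion ratio) for one sprint."""
    return (s.delivered_stories / s.committed_stories,
            s.delivered_points / s.committed_points)

def within_commitment(ratio: float, tolerance: float = 0.10) -> bool:
    # "Within 10% of the commitment" means the ratio is no more than
    # `tolerance` away from 1.0, in either direction.
    return abs(1.0 - ratio) <= tolerance
```

For example, a team that committed 10 stories (30 points) and delivered 9 stories (28 points) has ratios of 0.9 and roughly 0.93, both within the 10% target.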
Velocity and Throughput Variation
- Recent Velocity / Average Velocity
- Recent Throughput / Average Throughput
This metric helps teams become stable in their performance. It encourages managing risks and dependencies ahead of the sprint, and not overcommitting within it. We like to see recent velocity within 20% of average. We also want to see the standard deviation of velocity shrink over time.
WIP to Throughput Ratio
Building a large inventory of untested code typically increases the costs and time associated with fixing defects. This in turn increases the costs and challenges associated with version control, dependency management, and the delivery of working, tested, remediated code. Our objective is to improve lead time and to deliver frequently. There should not be more than 4 weeks’ worth of throughput active in a team from Ready to Delivered. Less is better. We like to see 2 weeks or less.
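The ratio itself is simple division: items in flight over items delivered per week gives weeks of inventory, in the spirit of Little's law. A minimal sketch, with the status labels as illustrative assumptions:

```python
def wip_in_weeks(wip_items: int, weekly_throughput: float) -> float:
    """Weeks of work currently in flight from Ready to Delivered."""
    return wip_items / weekly_throughput

def wip_status(wip_items: int, weekly_throughput: float) -> str:
    # Thresholds from the targets above: 2 weeks or less is the goal,
    # 4 weeks is the ceiling. The labels are illustrative.
    weeks = wip_in_weeks(wip_items, weekly_throughput)
    if weeks <= 2:
        return "good"
    if weeks <= 4:
        return "acceptable"
    return "too much WIP"
```

So a team delivering 6 items a week with 12 items in flight is carrying 2 weeks of inventory and sits right at the target.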
Team-member Availability Ratio
- Headcount available / Headcount expected
We need an indication when planned team-members aren’t available. Stability is critical for teams to be able to make and keep release commitments. When people are pulled across multiple teams – or are not available as planned – it is unlikely that the team will be able to deliver predictably. We like to see this be within 10% of plan.
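The check is the same shape as the commitment ratio: a division and a 10% tolerance. A one-function sketch, with illustrative names:

```python
def on_plan(available: float, expected: float, tolerance: float = 0.10) -> bool:
    """True when actual headcount is within `tolerance` (10%) of plan."""
    return abs(1.0 - available / expected) <= tolerance
```

A team planned at 10 people with 9 available is at the edge of the 10% tolerance; at 8 available it is off plan and the shortfall should be raised.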
Release Confidence
Use the team’s insight and record of performance to evaluate the team’s confidence that the release objectives can be achieved. This metric is useful for planning and commitment purposes. Release Confidence is a consensus vote where 1 is no confidence and 5 is very confident. If a team has heavy dependencies, they should include a vote from the Agile Project Manager of the team handling the dependencies. If the team is missing a skill or if a role is unfilled, the team should take into account the likely impact to release success. Support this metric with a release burn-up.
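One way to record the vote is to capture every individual score and summarize it, rather than keeping only the headline number; the lowest vote is often the most informative. Summarizing by median and minimum is an assumption here, not a prescribed method. The actual consensus still comes from the conversation:

```python
from statistics import median

def release_confidence(votes: list) -> dict:
    """Summarize a 1-5 confidence vote (1 = no confidence, 5 = very confident).

    Reporting median and minimum is an illustrative choice; the team's
    discussion, not the math, produces the consensus.
    """
    for v in votes:
        if not 1 <= v <= 5:
            raise ValueError("votes must be between 1 and 5")
    return {
        "median": median(votes),
        "low": min(votes),           # surface the least-confident voice
        "unanimous": min(votes) >= 4,  # everyone at "confident" or better
    }
```

A low outlier in an otherwise confident vote is a prompt to ask what that person sees, such as an unmanaged dependency or an unfilled role.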
That’s just a taste of the metrics we use for predictability. We also use quality indicators like build frequency, broken builds, code coverage, defect rates or technical debt. Likewise, for Product Owners we are interested in things like major initiatives, features remaining, features released, size of release cycle, and more. And for value, we are interested in things like time to value.
Used responsibly, these metrics give the organization insight into its ability to meet expectations. They help establish a shared understanding of each team’s capabilities and provide guidance for improvement efforts.