Measure Lead Time for the Business
Consider DevOps metrics that focus on deploying code that fulfills business needs and that measure the full lifecycle of a change rather than just its deployment.
When teams adopting DevOps ask me, “What should we measure to know that we’re improving?” I have reflexively rattled off the metrics from Accelerate.
- Deployment frequency (how often an application is released)
- Lead time (time from code commit to working in production)
- Change failure rate (share of changes requiring a subsequent fix or causing an outage)
- Time to restore (typical time to restore service when an outage occurs)
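As a rough sketch of how these four metrics can be computed from deployment records — the record format, timestamps, and reporting window here are hypothetical, not taken from any particular tool:

```python
from datetime import datetime

# Hypothetical deployment records:
# (commit_time, deploy_time, caused_failure, restore_minutes)
deploys = [
    (datetime(2023, 5, 1, 9),  datetime(2023, 5, 1, 14), False, None),
    (datetime(2023, 5, 2, 10), datetime(2023, 5, 2, 11), True,  45),
    (datetime(2023, 5, 3, 8),  datetime(2023, 5, 3, 9),  False, None),
    (datetime(2023, 5, 8, 13), datetime(2023, 5, 8, 15), False, None),
]

window_days = 7  # length of the reporting window

# Deployment frequency: deployments per day over the window
deploy_frequency = len(deploys) / window_days

# Lead time: commit to production, in hours; report the median
lead_times = [(d - c).total_seconds() / 3600 for c, d, _, _ in deploys]
median_lead_time = sorted(lead_times)[len(lead_times) // 2]

# Change failure rate: fraction of deploys that caused a failure
change_failure_rate = sum(1 for *_, failed, _ in deploys if failed) / len(deploys)

# Time to restore: mean restoration time across failed deploys
restores = [m for *_, m in deploys if m is not None]
mean_time_to_restore = sum(restores) / len(restores) if restores else 0
```

The same four aggregates can usually be pulled from a CI/CD tool or incident tracker; the point is only that each metric reduces to simple arithmetic once the timestamps are captured.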
These broadly make sense and point to a high degree of automated deployment, automated testing, and robust monitoring. These tenets of DevOps have been helpful. The State of DevOps Report shows that strong performers in these areas outperform their competition in the market; that is a reality I have wagered on and won.
However, the lead time that is measured here is not the one that matters most.
Delivering Business Needs Matters
We often name “The Business” as the supplier of new requirements or feature requests. That is a gross oversimplification, as ideas may come from the help desk or support, nearby technology teams, designers evaluating usability, and elsewhere.
However oversimplified, thinking of improvements to our technology as serving The Business is excellent. Those ideas from the help desk may point to product problems that raise the cost of support or anger our users. Marketing may have insights into how to improve business impacts or align with new messaging. Regardless of where the idea comes from, we should be prioritizing work that advances the needs of our business.
Business-Centric Lead Times
While timing from code commit to production is a good, simple metric, it does not tell us how long it takes from when a new customer need is identified to when it is delivered. Where DevOps fails, it fails because it has become too focused on better collaboration among technologists and not focused enough on serving the business.
So we must measure lead time from a business-centric perspective. In manufacturing, we might consider lead time from receiving an order to producing and delivering it. Similarly, software teams can measure the time from when an idea (“requirement”) comes to the development team until that idea is delivered. Here “delivered” means working in production for real users.
This measurement is meaningful. Outside groups won’t love your team because of a great continuous delivery pipeline, but they will appreciate it when they ask for something and get it quickly.
Not Quite that Easy…
The problem for many teams, of course, is that ideas may be too plentiful and some ideas may linger for months while higher-value ones are delivered more rapidly. This observation points to other options for measurement. We could split out our lead time for high value/high priority work items from the lower value ones. This shows the team how responsive it is to the highest demands of the business.
Alternatively, we could simply measure the time from when the team started work on an idea until it was delivered. This loses some of the focus on the business, but does show the effectiveness of feature delivery more comprehensively than code-to-production lead times. Time spent in design, development, and code review is better captured.
So What Should We Track?
The good news is that the data are mostly all available. Tools in the new “value stream management” space are getting good at stitching that data together from multiple sources.
I would track three things:
- Code-to-production: This measures the most repeatable elements of delivery, leaving out the more highly variable creative elements. It tells the team how good they are at the mechanics of delivery.
- Dev-to-use: This measures how long it takes from when a development team starts on a requested feature until it is delivered to users. It is excellent for focusing the team on cleaning up internal handoffs and wait stages such as code reviews.
- Idea-to-value: This measures how long it takes to get an idea back to the business as working software. It is the most important measure for business impact and surfaces problems with long planning cycles well. Restrict it to high-priority items if you need to.
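The three lead times differ only in their starting timestamp; all three end when the change is working in production. A minimal sketch, assuming a hypothetical work item record with a timestamp captured at each stage:

```python
from datetime import datetime

# Hypothetical work item with a timestamp for each stage of its life
item = {
    "idea_logged":    datetime(2023, 5, 1, 9, 0),   # request arrives from the business
    "dev_started":    datetime(2023, 5, 8, 10, 0),  # team picks the item up
    "code_committed": datetime(2023, 5, 12, 16, 0), # final commit lands
    "in_production":  datetime(2023, 5, 13, 11, 0), # working for real users
}

def hours(start, end):
    """Elapsed time between two timestamps, in hours."""
    return (end - start).total_seconds() / 3600

# Same endpoint, three different starting points
code_to_production = hours(item["code_committed"], item["in_production"])
dev_to_use = hours(item["dev_started"], item["in_production"])
idea_to_value = hours(item["idea_logged"], item["in_production"])
```

Computing all three per item, rather than one blended number, is what makes it possible to see which segment of the chain is consuming the time.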
Comparing these times can be incredibly valuable. Is most of the time required to deliver a high-value feature reflected in the code-to-prod cycle? Improving testing processes or adjusting change approval processes might be the next continuous improvement priority. Conversely, if ideas flow from developers to production quickly, but are bogged down in prioritization processes, better planning processes or even more developers could be called for.
Development teams will tend to want to track what they can best control. Tracking the full idea to value chain often won’t feel fair. However, those ideas from the business are why the development team exists. Tracking how quickly ideas can be turned around will reflect the degree to which the team supports business agility and in today’s climate, responsiveness is key. Sure, track more tech-centric metrics to help spot opportunities to improve, but don’t lose sight of the business.
Improving the Three Lead Times
The 2018 State of DevOps Report showed that “Elite” teams delivered new code to production in under an hour, while high performing teams delivered in one to seven days. Low performers delivered one to six months after commit. So, there's about an order of magnitude between low and high performers and another order of magnitude or two to get to Elite.
Speeding time to production release generally comes down to two elements. The first is time to quality determination: applications must be delivered quickly into prepared test environments, and the many tests (performance, functional, API, security) must run quickly. The second is process change. If production is only updated on set days after formal change reviews as part of a quarterly release cycle, technological improvements will be squandered. So automate continuous delivery, test environment preparation, and testing. Then change the rules to reflect your quicker analysis of quality and the lower risk profile inherent in smaller batch sizes.
Dev-to-use lead time, from when an idea is handed to the development team until delivery, will typically be the sum of design, development, peer review, and code-to-prod time. To speed cycle time here, the key is ensuring that designers and developers are free to focus on specific items rather than being asked to work on many at once. Kanban-type approaches with fierce work-in-progress (WIP) limits can be critical. Design is often a bottleneck, so limiting the number of work items that require design can keep those that do flowing while routing backend changes around the designers.
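A WIP limit is simple to enforce mechanically. A minimal sketch, with hypothetical stage names and limits, of the pull rule a kanban board applies:

```python
# Hypothetical per-stage WIP limits for a kanban-style board
wip_limits = {"design": 2, "development": 4, "code_review": 3}

def can_pull(board, stage):
    """Allow pulling a new item into a stage only while it is under its WIP limit."""
    return len(board.get(stage, [])) < wip_limits[stage]

# Board state: which work items currently occupy each stage
board = {"design": ["A", "B"], "development": ["C"], "code_review": []}

can_pull(board, "design")       # False: design is at its limit, a likely bottleneck
can_pull(board, "development")  # True: capacity remains
```

The value is not the check itself but the conversation it forces: when `can_pull` returns False, the team swarms to finish in-progress work instead of starting more.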
One of my development teams instrumented their process with a precursor to UrbanCode Velocity and uncovered a several-day lag between pull request and code review/merge.
When this was uncovered, the Scrum Master called an impromptu retrospective. It turned out the delays were real and were breeding real resentment within the team. Until the data showed that the problem affected everyone, most of the developers had quietly felt insulted that their contributions were being ignored. The team agreed on changes to the process and to who did which reviews, easing a bottleneck that had been hidden.
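Surfacing a review lag like that one requires nothing more than the pull request timestamps. A minimal sketch with hypothetical data:

```python
from datetime import datetime, timedelta

# Hypothetical pull request records: (opened, first_review) timestamps
pull_requests = [
    (datetime(2023, 5, 1, 9),  datetime(2023, 5, 4, 15)),
    (datetime(2023, 5, 2, 11), datetime(2023, 5, 2, 16)),
    (datetime(2023, 5, 3, 10), datetime(2023, 5, 8, 9)),
]

# Wait time from opening a PR to its first review
lags = [review - opened for opened, review in pull_requests]
avg_lag = sum(lags, timedelta()) / len(lags)

# Flag PRs that sat unreviewed for more than a day
stale = [lag for lag in lags if lag > timedelta(days=1)]
```

Most source hosting platforms expose these timestamps through their APIs, so a team can compute this without buying a value stream tool first.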
Building on the actions for code-to-prod and dev-to-use, further improvement works on either end of the spectrum. On the prioritization side, more frequent or continuous prioritization approaches are called for. This requires more frequent collaboration with the business, or a dedicated product manager who is in tune with what the business needs. The development team may be distrustful of changing priorities, particularly if told to stop working on things that are in progress. Avoiding that is healthy (hence the sacrosanct sprint in Scrum), but just as important is feeding measurements of business impact back to the development team. If high-priority changes implemented quickly are shown to actually benefit the business, changing priorities are more likely to be met with enthusiasm than despair.
A technology team can most directly engineer their way to faster delivery from code to production. Challenges that are mostly solved with technology are appealing to technologists. Further, improving that type of delivery does reduce the more important idea-to-value lead time and tends to be welcomed by the business. Engineering management is also pleased, as the automation required for that speed removes considerable labor costs.
It’s attractive. It’s just not sufficient. If code-to-production performance is improving but for some reason idea-to-value is not, the business will tend to perceive the technology team as making large investments in shiny objects that are accomplishing nothing for them. That is a dangerous moment. If instead the team is also tracking their idea-to-value performance and shows the business improvement there, the team’s credibility will grow and suggestions around process change will more likely find enthusiastic support from the business. Transparency builds trust. Trust supports speed. Speed wins in the market.