How to Use Data to Improve Your Sprint Retrospectives


A tutorial for improving your sprint retrospectives.


Most agile teams do sprint retrospectives at least once a month to iterate on and improve their software development process and workflow. However, many of those same teams rely only on gut feeling to “know” whether they have actually improved. You need an unbiased reference system if you want to compare how two sprints went.

Depending on what you’re focusing on, different metrics will be of interest. For instance, if your team uses estimation, tracking how those estimates pan out could be worthwhile, and comparing the variance across sprints could provide such a metric.

In this article, I will not focus on metrics that measure your engineering team’s output, but rather on indicators of the health of your team’s collaboration, which directly impacts that output. It might be worthwhile to consider them during your sprint retrospectives.

1. Lead Time and Cycle Time

Lead time is the time period between the beginning of a project’s development and its delivery to the customer. Your software development team’s lead time history can help you predict with a higher degree of accuracy when an item might be ready. This data is useful even if your team doesn’t provide estimates since the predictions can be based on the lead times of similar projects.

If you want to be more responsive to your customers, work to reduce your lead time, typically by simplifying decision-making and reducing wait time. Lead time includes cycle time.

Cycle time describes how long it takes to change the software system and implement that change in production. Teams using continuous delivery can have cycle times measured in minutes or even seconds instead of months.
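As a minimal sketch, assuming you can export timestamps for each work item (the field names and dates below are hypothetical), both metrics fall out of simple date arithmetic:

```python
from datetime import datetime

# Hypothetical ticket records with ISO timestamps (field names are illustrative).
tickets = [
    {"id": "T-1", "created": "2023-03-01T09:00", "started": "2023-03-03T10:00", "deployed": "2023-03-06T16:00"},
    {"id": "T-2", "created": "2023-03-02T09:00", "started": "2023-03-02T11:00", "deployed": "2023-03-04T15:00"},
]

def hours_between(start, end):
    """Elapsed hours between two ISO-8601 timestamps."""
    return (datetime.fromisoformat(end) - datetime.fromisoformat(start)).total_seconds() / 3600

for t in tickets:
    lead = hours_between(t["created"], t["deployed"])    # request -> delivery
    cycle = hours_between(t["started"], t["deployed"])   # work begins -> delivery
    print(f'{t["id"]}: lead {lead:.1f}h, cycle {cycle:.1f}h')
```

Keeping a history of these numbers per ticket is what lets you base predictions on the lead times of similar past items.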

When to Use Them?

If your priority is to implement continuous delivery, or to make your process leaner and deploy smaller batches to production more frequently, these two metrics will be very useful. Within lead time, you could also dig a bit deeper to understand where most of the time is spent.

2. Deployment Frequency 

Tracking how often you deploy is a good DevOps metric. Ultimately, the goal is to do smaller deployments as often as possible. Reducing the size of each deployment makes it easier to test and release.

How often you deploy to QA or pre-production environments is also important. You need to deploy early and often in QA to ensure time for testing. Finding bugs in QA is important to keep your defect escape rate down. But you might want to count production and non-production deployments separately.
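A sketch of counting production and non-production deployments separately, per ISO week, from a hypothetical deployment log:

```python
from collections import Counter
from datetime import date

# Hypothetical deployment log: (date, environment) pairs.
deployments = [
    (date(2023, 3, 6), "qa"),
    (date(2023, 3, 6), "production"),
    (date(2023, 3, 8), "qa"),
    (date(2023, 3, 9), "qa"),
    (date(2023, 3, 10), "production"),
]

# Group by (ISO week number, environment) so prod and non-prod are counted apart.
per_week = Counter((day.isocalendar()[1], env) for day, env in deployments)
for (week, env), n in sorted(per_week.items()):
    print(f"week {week}, {env}: {n} deployment(s)")
```

Charting these counts sprint over sprint shows at a glance whether you are trending toward smaller, more frequent releases.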

When to Use Them?

This metric is a good complement to lead and cycle times, in the sense that it shows their results.

3. Commit Frequency or Active Days

Commit frequency and active days serve the same purpose. An active day is a day in which an engineer contributed code to the project, including tasks such as writing and reviewing code.

Those two alternative metrics are interesting if you want to introduce a best practice of committing every day. They are also a great way to see the hidden costs of interruptions. Non-coding tasks such as planning, meetings, and chasing down specs are inevitable. Teams often lose at least one day each week to these activities. Monitoring commit frequency lets you see which meetings have an impact on your team’s ability to push code. It’s important to keep in mind that pushing code is the primary way your team provides value to your company.
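As a sketch, active days per engineer can be derived from commit metadata (for example, parsed out of `git log`; the author names and dates here are invented):

```python
from collections import defaultdict
from datetime import date

# Hypothetical commit log: (author, commit date) pairs.
commits = [
    ("alice", date(2023, 3, 6)),
    ("alice", date(2023, 3, 6)),  # same day: still one active day
    ("alice", date(2023, 3, 8)),
    ("bob", date(2023, 3, 7)),
]

# A set per author deduplicates multiple commits on the same day.
active_days = defaultdict(set)
for author, day in commits:
    active_days[author].add(day)

for author, days in sorted(active_days.items()):
    print(f"{author}: {len(days)} active day(s)")
```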

Managers should strive to protect their team’s attention and ensure the process-overhead does not become a burden.

When to Use Them?

Have you heard “Commit Often, Perfect Later, Publish Once”? If you fail to commit and then make a poorly thought-out change, you can run into trouble. Commits are the common denominator for collaboration within your team. So if your workflow pushes the team to commit more often than it currently does, it might be useful to track this metric. Plus, as mentioned above, if you want to understand the impact of interruptions, this metric could be a good starting point.

4. Pull Request-Related Velocity

There are several pull request metrics that could be interesting to you:

  • the number of pull requests opened per week
  • the number of pull requests merged per week
  • the average time to merge; an alternative could be the percentage of pull requests merged under a certain time

The last one is roughly equivalent to cycle time (the time it takes for code to go from commit to deploy; in between, it could pass through testing, QA, and staging, depending on your organization). It’s a very interesting metric that shows you which roadblocks you’re encountering in your workflow.
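A minimal sketch of the merge-time variants, assuming you can export opened/merged timestamps for each pull request (the data below is invented):

```python
from datetime import datetime

# Hypothetical pull requests with opened/merged ISO timestamps.
prs = [
    {"opened": "2023-03-01T09:00", "merged": "2023-03-01T17:00"},
    {"opened": "2023-03-02T09:00", "merged": "2023-03-03T09:00"},
    {"opened": "2023-03-03T09:00", "merged": "2023-03-06T09:00"},
]

def merge_hours(pr):
    """Hours a pull request stayed open before merging."""
    delta = datetime.fromisoformat(pr["merged"]) - datetime.fromisoformat(pr["opened"])
    return delta.total_seconds() / 3600

times = [merge_hours(pr) for pr in prs]
avg = sum(times) / len(times)
pct_under_24h = sum(t <= 24 for t in times) / len(times) * 100
print(f"average time to merge: {avg:.1f}h; merged within 24h: {pct_under_24h:.0f}%")
```

The percentage variant is often more robust than the average, since a single long-lived pull request can skew the mean.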

When to Use Them?

These metrics can give you a sense of your engineering team’s throughput. For instance, if that number doesn’t grow when you hire more people, there might be a problem with a new process in place or technical debt that needs to be addressed. However, if it increases too quickly, you might have a quality issue.

5. Work In Progress (WIP)

Work in progress is the total number of tickets that your team has open and is currently working on. It is an objective measure of a team’s speed, similar to throughput but as a real-time indicator rather than a lagging one.

This metric is helpful for understanding a team’s current workload as a trend. Ideally, the number will stay stable over time, as an increase in WIP means that your team is facing blockers/bottlenecks that aren’t getting addressed (unless you added team members, of course). WIP is also a method for identifying inefficient processes.

You might also consider dividing WIP by the number of contributors to get average WIP per developer. Ideally, this number will be close to a one-to-one ratio.

When to Use Them?

This metric helps to avoid burnout and increase efficiency, as working on one thing at a time has been shown to improve focus.

6. Commit or Pull Request Risks

You can determine the risks in a commit or pull request by:

  • the amount of code in the change
  • the percentage of the work that consists of edits to old code
  • the surface area of the change (think ‘number of edit locations’)
  • the number of files affected
  • the severity of the changes when old code is modified
  • how the change compares to others in the project’s history

These factors indicate the amount of reflection and work that has gone into a commit, and therefore its potential impact on the product if deployed without code review.
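To make this concrete, here is a toy scoring sketch combining a few of the factors above; the weights and thresholds are invented purely for illustration and would need tuning against your own project history:

```python
# Toy risk score for a change, in [0.0, 1.0]. All weights/caps are illustrative.
def change_risk(lines_changed, pct_edits_to_old_code, files_affected, edit_locations):
    score = 0.0
    score += min(lines_changed / 400, 1.0) * 0.4       # sheer size of the change
    score += (pct_edits_to_old_code / 100) * 0.3       # edits to existing code are riskier
    score += min(files_affected / 10, 1.0) * 0.15      # breadth across files
    score += min(edit_locations / 20, 1.0) * 0.15      # surface area of the change
    return round(score, 2)

risk = change_risk(lines_changed=120, pct_edits_to_old_code=50,
                   files_affected=3, edit_locations=6)
print(f"risk score: {risk}")
```

A linear weighted sum is the simplest possible model; the point is only that each bullet above can be turned into a number and tracked per commit or pull request.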

When to Use Them?

Monitoring an average commit or pull request risk helps you understand how your team works, and whether you should strive for more frequent and simpler code changes.

Please note that some metrics may not have made the list because they were either not popular enough or would only apply in certain cases (such as estimations). I tried to focus on metrics that might be common to all teams.

Some metrics you may have in mind might also fall under velocity-related or quality-related metrics.

Let me know what you think, and whether I’ve missed any. The end goal is to build a comprehensive list of best practices concerning software engineering metrics to help teams improve their own processes.


Published at DZone with permission of John Lafleur. See the original article here.

Opinions expressed by DZone contributors are their own.
