Back in the bad old days of battleship-gray UIs, rounded corners faked with GIF files, and “Best Viewed With Netscape Navigator 3.0” badges, businesses just wanted their software teams to ship; quality be damned! Today's software world is very different: technology leaders are expected to deliver on time and to a high standard. So which software team KPIs lead to delivering better?
Poor software team KPIs lead to poor user experiences.
Software development teams are also expected to meet modern demands:
- Quality software that doesn’t crash.
- Software that is fast.
- Software that reports how users are using it and what their experience is like.
- A repeatable delivery process, run by an Agile team.
Competition is fierce in the software industry, team members are expensive, and doing great work keeps the lights on.
What Gets Measured Gets Managed
Measuring the right metrics helps the entire team manage the user experience. Let’s get real for a second: your team is employed because you have customers. If your software is terrible, you will lose customers (71% of users will stop using software after an error!).
Our industry is maturing, and expectations increase with maturity.
What Should You Measure?
At Raygun, we hold senior development team members accountable to several metrics that we can automatically track:
Users Affected By Bugs
Frankly, raw error counts are misdirection. 10,000 errors that affect one customer are not as bad as 500 errors affecting 250 customers. Measure the number of affected customers each month, with the goal of reducing it.
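To make the distinction concrete, here is a minimal Python sketch using an invented, hand-written error log (real data would come from your error-tracking tool), showing how a raw error count and an affected-user count tell very different stories:

```python
# Hypothetical error log: (error_signature, user_id) pairs for one month.
error_log = [
    ("NullReferenceException@checkout", "user-1"),
    ("NullReferenceException@checkout", "user-1"),  # same user, repeated crash
    ("TimeoutError@search", "user-2"),
    ("TimeoutError@search", "user-3"),
    ("NullReferenceException@checkout", "user-1"),
]

total_errors = len(error_log)                          # raw error count: 5
users_affected = len({user for _, user in error_log})  # distinct users: 3

print(f"Errors: {total_errors}, users affected: {users_affected}")
```

Five raw errors collapse to three affected users here; at scale, the gap between the two numbers is usually far larger.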
Median Application Response Time
Forget averages. They lie. The median response time is what 50% of your customers experience (or faster). Track that, and hold the team accountable to hitting a target time or better. Performance makes money. After all, 40% of users will leave a website that takes more than three seconds to load!
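As a quick illustration of why averages lie, this Python sketch (using made-up response times) shows a single slow outlier dragging the mean far above what a typical user actually experiences:

```python
import statistics

# Hypothetical response times in ms; one slow outlier skews the average.
response_times_ms = [120, 130, 125, 140, 135, 128, 132, 5000]

mean = statistics.mean(response_times_ms)      # 738.75 — pulled up by one outlier
median = statistics.median(response_times_ms)  # 131.0 — what a typical user sees

print(f"mean={mean:.0f}ms, median={median:.0f}ms")
```

The mean suggests users wait almost three-quarters of a second; the median shows most of them get a response in about 131ms.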
P99 Application Response Time
Medians are great, but we also need to appreciate the upper limit. We choose to track the P99 — the time taken for the 99th percentile of users. This will normally be slow, but we want to make sure it’s more like five seconds slow, not 25 seconds slow. We don’t often track P100, as it’s the domain of total timeouts and bad bots that hold connections open, and is generally misleading about real users.
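A P99 can be computed with the nearest-rank method; the sketch below uses invented sample data in which a small tail of slow requests is invisible to the median but obvious at the 99th percentile:

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: the value at rank ceil(p * n / 100)."""
    ordered = sorted(samples)
    rank = math.ceil(p * len(ordered) / 100)
    return ordered[rank - 1]

# Invented data: 980 fast requests plus a tail of 20 slow ones.
samples_ms = [150] * 980 + [4000] * 20

print(f"P50: {percentile(samples_ms, 50)}ms")  # typical user: 150ms
print(f"P99: {percentile(samples_ms, 99)}ms")  # the slow tail: 4000ms
```

The median here is a comfortable 150ms, yet 2% of requests take four seconds; only the P99 surfaces that.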
Resolved Bugs >= New Bugs
Some platforms will group bugs by their root cause. This makes it easy to manage the bug count rather than the crash count (the number of times a bug is encountered). The team should be fixing bugs at least as quickly as they are creating them (on a brownfield project, ideally resolving more than are created).
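One simple way to keep the team honest on this is a monthly created-versus-resolved check. The sketch below uses hypothetical counts (real numbers would come from your bug tracker) and flags any month where new bugs outpaced fixes:

```python
# Hypothetical monthly bug counts: (month, bugs created, bugs resolved).
months = [
    ("Jan", 40, 35),
    ("Feb", 30, 32),
    ("Mar", 25, 31),
]

backlog = 120  # invented starting count of open bugs
for name, created, resolved in months:
    backlog += created - resolved
    status = "OK" if resolved >= created else "falling behind"
    print(f"{name}: resolved {resolved} vs {created} new, backlog={backlog} ({status})")
```

In this invented data, January falls behind (backlog grows to 125), while February and March pay it back down to 117.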
These metrics will make your software stand out as best-in-class. They will push your software teams to:
- Improve user experiences with fewer crashes.
- Improve user experiences with faster software.
- Reduce the accumulation of technical debt. This won’t eliminate technical debt, but fast, bug-free software is nearly always better to work on than slow, buggy software.
What about KPIs like monthly active users, geographic breakdown, feature tracking, etc.?
Those metrics are well worth tracking, but they typically belong to Product Managers, Marketing Managers, and the like. We’ll do a follow-up post about tracking these software team KPIs alongside them.
What Other Metrics Do You Track?
I’d love to get some comments about other ways that software leaders are tracking their KPIs and what those KPIs are. Building great software takes a lot of effort, but tracking the right metrics makes it easier.