Oh how I hope you don’t measure developer productivity by lines of code. As Bill Gates once ably put it, “measuring software productivity by lines of code is like measuring progress on an airplane by how much it weighs.” No doubt, you have other, better-reasoned metrics that you capture for visible progress and quality barometers. Automated test coverage is popular (though be careful with that one). Counts of defects, or trends in defect reduction, are another. And of course, in our modern, agile world, sprint velocity is ubiquitous.
But today, I’d like to venture off the beaten path a bit and take you through some metrics that might be unfamiliar to you, particularly if you’re no longer technical (or weren’t ever). But don’t leave if that describes you — I’ll help you understand the significance of these metrics, even if you won’t necessarily understand all of the nitty-gritty details.
Perhaps the most significant factor here is that the metrics I’ll go through can be tied, relatively easily, to stakeholder value in projects. In other words, I won’t just tell you the significance of the metrics in terms of what they say about the code. I’ll also describe what they mean for people invested in the project’s outcome.
It’s possible that you’ve heard of the concept of PageRank. If you haven’t, PageRank was, for a long time, the method by which Google determined which sites on the internet were most important. This should make intuitive sense on some level. Amazon has a high PageRank — if it went down, millions of lives would be disrupted, stocks would plummet, and all sorts of chaos would ensue. The blog you created that one time and totally meant to add to over the years has a low PageRank — no one, yourself included, would notice if it stopped working.
It turns out that you can reason about pieces of code in a very similar way. Some bits of code in the code base are extremely important to the system, with many inbound and outbound dependencies. Others exist at the very periphery or are even completely useless (see the section on dead code). Not all code is created equal. This scheme for ranking code by importance is called “Type Rank” (at least at the level of type granularity — methods can also be ranked).
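To make the parallel concrete, here’s a minimal sketch of the same importance-flow idea applied to code, using a simplified PageRank-style iteration over a hypothetical type dependency graph (the type names and graph shape are invented for illustration):

```python
# Rank types by "importance": each type passes a share of its rank
# along its outbound dependencies, so heavily depended-on types
# accumulate high scores -- just as heavily linked-to sites do.

def type_rank(deps, damping=0.85, iterations=50):
    """deps maps each type to the types it depends on."""
    types = set(deps) | {t for targets in deps.values() for t in targets}
    rank = {t: 1.0 / len(types) for t in types}
    for _ in range(iterations):
        new_rank = {t: (1 - damping) / len(types) for t in types}
        for source, targets in deps.items():
            if targets:
                share = damping * rank[source] / len(targets)
                for t in targets:
                    new_rank[t] += share
        # Types with no outbound dependencies spread their rank evenly.
        dangling = sum(rank[t] for t in types if not deps.get(t))
        for t in types:
            new_rank[t] += damping * dangling / len(types)
        rank = new_rank
    return rank

# Hypothetical code base: two services both lean on Logger and Database.
deps = {
    "OrderService": ["Logger", "Database"],
    "UserService": ["Logger", "Database"],
    "Logger": [],
    "Database": [],
    "DeadUtility": [],  # nothing depends on this
}
ranks = type_rank(deps)
# Logger and Database outrank DeadUtility, which nothing uses.
```

Real static-analysis tools compute this from the actual compiled dependency graph, but the shape of the calculation is the same.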
You can use Type Rank to create a release riskiness score. All you’d really need to do is have a build step that tabulated which types had been modified and what their Type Rank was, producing a composite index of release riskiness. Each time you were gearing up for deployment, you could look at the score. If it were higher than normal, you’d want to budget extra time and money for additional testing efforts and issue remediation strategies.
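The composite index itself can be as simple as summing the ranks of everything touched since the last release. A sketch, with entirely hypothetical rank values:

```python
# Release riskiness: total Type Rank of everything modified.
# A release touching only peripheral code scores low; a release
# touching heavily depended-on code scores high.

def release_risk(type_ranks, modified_types):
    return sum(type_ranks.get(t, 0.0) for t in modified_types)

# Hypothetical precomputed ranks from the build's analysis step.
type_ranks = {"Logger": 0.30, "Database": 0.30,
              "OrderService": 0.13, "DeadUtility": 0.13}

low_risk = release_risk(type_ranks, ["DeadUtility"])
high_risk = release_risk(type_ranks, ["Logger", "Database"])
```

Comparing each release’s score against the historical baseline is what tells you whether this deployment deserves extra testing budget.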
Cohesion of modules in a code base can loosely be described as “how well is the code base organized?” To put it a bit more concretely, cohesion is the idea that things with common interest are grouped together while unrelated things are not. A cohesive house would have specialized rooms for certain purposes: food preparation, food consumption, family time, sleeping, etc. A non-cohesive house would have elements of all of those things strewn about all over the house, resulting in a scenario where a broken refrigerator fan might mean you couldn’t sleep or work at your desk due to noise.
Keeping track of the aggregate cohesiveness score of a codebase will give you insight into how likely your team is to look ridiculous in the face of an issue. Code bases with low cohesion are ones in which unrelated functionality is bolted together inappropriately, and this sort of thing results in really, really odd looking bugs that can erode your credibility.
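One common way tools quantify this is the LCOM (“lack of cohesion of methods”) family of metrics, which look at whether a class’s methods actually share state. Here’s a minimal sketch of that idea — the class layouts (methods mapped to the fields they touch) are hypothetical:

```python
from itertools import combinations

def cohesion(method_fields):
    """Fraction of method pairs that share at least one field.
    1.0 = every pair of methods works on common state; 0.0 = none do."""
    pairs = list(combinations(method_fields.values(), 2))
    if not pairs:
        return 1.0
    sharing = sum(1 for a, b in pairs if set(a) & set(b))
    return sharing / len(pairs)

# A "specialized room": methods all revolve around the same data.
kitchen = {"cook": ["stove"], "bake": ["stove", "oven"], "preheat": ["oven"]}

# A grab-bag class: methods with nothing in common, bolted together.
junk_drawer = {"cook": ["stove"], "sleep": ["bed"], "invoice": ["ledger"]}
```

Aggregating a score like this across all classes gives you the code-base-wide cohesion trend worth tracking.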
Imagine speaking on your team’s behalf and explaining a bug that resulted in a significant amount of client data being clobbered. When pressed for the root cause, you had to look the person asking directly in the eye and say, “well, that happened because we changed the font of the labels on the login page.”
You would sound ridiculous. You’d know it. The person you were talking to would know it. And you’d find your credibility quickly evaporating. Keeping track of cohesion lets you keep track of the likelihood of something like that.
So far, I’ve talked about managing risk as it pertains to defects: the risk of encountering them on release, and the risk of encountering weird or embarrassing ones. I’m going to switch gears, now, and talk about the risk of being caught flat-footed, unable to respond to a changing environment or a critical business need.
Dependency cycles in your code base represent a form of inappropriate coupling. These are situations where two or more things are mutually dependent in an architectural world where it is far better for dependencies to flow one way. As a silly but memorable example, consider the situation of charging your phone, where your phone depends on your house’s electrical system to be charged. Would you hire an electrician to come in and create a situation where your house’s electricity depended on the presence of your charging phone?
All too often, we do this in code, and it creates situations as ludicrous as the phone-electrical example would. When the business asks, “how hard would it be to use a different logging framework,” you don’t want the answer to be, “we’d basically have to rewrite everything from scratch.” That makes as much sense as not being able to take your phone with you anywhere because your appliances would stop working.
So, keep an eye out for dependency cycles. They are the early warning lights telling you that you’re heading for something like this.
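Detecting these cycles is a standard graph problem: a depth-first search over the dependency graph that flags any path leading back to where it started. A minimal sketch, with a hypothetical module graph:

```python
def find_cycle(deps):
    """Return one dependency cycle as a list of modules, or None."""
    visiting, visited = set(), set()

    def dfs(node, path):
        visiting.add(node)
        path.append(node)
        for neighbor in deps.get(node, []):
            if neighbor in visiting:
                # We've come back around: slice out the offending loop.
                return path[path.index(neighbor):] + [neighbor]
            if neighbor not in visited:
                cycle = dfs(neighbor, path)
                if cycle:
                    return cycle
        visiting.discard(node)
        visited.add(node)
        path.pop()
        return None

    for node in list(deps):
        if node not in visited:
            cycle = dfs(node, [])
            if cycle:
                return cycle
    return None

# Billing -> Logging -> Billing: the "house depends on the phone" situation.
deps = {"Billing": ["Logging"], "Logging": ["Billing"], "Reports": ["Billing"]}
```

A healthy graph returns nothing; any non-empty result names the modules you can no longer change independently.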
One last thing to keep an eye out for is dead code. Dead code is code that can never possibly be called during the running application’s lifecycle. It just sits in your codebase taking up space to no good end.
That may sound benign, but every line of code in your code base carries a small cognitive maintenance weight. The more code there is, the more results come back in text searches of the code base, the more files there are to lose and confuse developers, and the more general friction is encountered when working with the system. This has a very real cost in the labor required to maintain the code.
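At its core, finding dead code is a reachability question: anything no entry point can ever call is dead. A minimal sketch over a hypothetical call graph (real tools build this graph from the code itself, and reflection or dynamic dispatch can complicate the picture):

```python
def dead_code(call_graph, entry_points):
    """Return the set of functions unreachable from any entry point."""
    reachable, stack = set(), list(entry_points)
    while stack:
        fn = stack.pop()
        if fn not in reachable:
            reachable.add(fn)
            stack.extend(call_graph.get(fn, []))
    return set(call_graph) - reachable

# Hypothetical application: old_export survives in the code base,
# but nothing ever calls it.
call_graph = {
    "main": ["load_config", "run"],
    "load_config": [],
    "run": ["save"],
    "save": [],
    "old_export": ["save"],
}
```

Note that `old_export` calling live code doesn’t save it — what matters is that nothing live calls `old_export`.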
These are metrics about which fewer people know, so the industry isn’t rife with stories about people gaming them, the way it is with something like unit test coverage. But that doesn’t mean they can’t be gamed. For instance, it’s possible to have a nightmarish code base without any actual dead code — perversely, dead code could be “eliminated” by finding everything useless in the code base and adding calls to it.
The metrics I’ve outlined today, if you make them big and visible to all, should serve as a conversation starter. Why did we introduce a dependency cycle? Should we be concerned about the lack of cohesion in modules? Use them in this fashion, and your group can save real money and produce better output. Use them in the wrong fashion, and they’ll be just another ineffective management bludgeon straight out of a Dilbert comic.