how good is your code? if you’re like the other 80% of above average developers, then i bet your code is pretty awesome. but are you sure? how can you tell? or perhaps you’re working on a legacy code base – just how bad is the code? and is it getting better? code metrics provide a way of measuring your code – some people love ‘em, but some hate ‘em.
personally i’ve always found metrics very useful. for example – code coverage tools like emma can give you a great insight into where you do and don’t have test coverage. before embarking on an epic refactor of a particular package, just how much coverage is there? maybe i should increase the test coverage before i start tearing the code apart.
another interesting metric can be lines of code. while working in a legacy code base (and who isn't?), if you can keep velocity consistent (so you're still delivering features) but keep the volume of inventory the same or less, then you're making the code less crappy while still delivering value. any idiot can implement a feature by writing bucket loads of new code, but it takes real craftsmanship to deliver new features and reduce the size of the code base.
the problem with any metric is who consumes it. the last thing you want is for an over-eager manager to start monitoring it.
you can't control what you can't measure – tom demarco
before you know it, there’s a bonus attached to the number of defects raised. or there’s a code coverage target everyone is “encouraged” to meet.
as soon as there's management pressure on a metric, smart people will game the system. i've lost count of the number of times i've seen people gaming code coverage metrics. in an effort to please a well-meaning but fundamentally misguided manager, developers end up writing tests with no assertions. sure, the code ran and didn't blow up. but did it do the right thing? who knows! and if you introduce bugs, will your tests catch them? hell, no! so your coverage is useless.
the target was met but the underlying goal – improving software quality – has not only been missed, it’s now harder to meet in future.
the goal of any metric is to measure something useful about the code base. take code coverage, for example – really what we're interested in is defect coverage. that is, out of the universe of all possible defects in our code, how many would cause a failure in at least one test? that's what we want to know – how protected are we against regressions in the code base?
the trouble is, how can i measure "the universe of all possible defects" in a system? it's basically unknowable. instead, we use code coverage as an approximation. given that tests assert the code did the right thing, the percentage of code that has been executed is a good estimate of the likelihood of bugs being caught by them. if my tests execute 50% of the code, at best i can catch bugs in 50% of the code. if there are bugs in the other 50%, there's zero chance my tests will find them. code coverage is an upper bound on test coverage. but, if your tests are shoddy, test coverage can be much lower – to the point where tests with no assertions are basically useless.
and this is the difficulty with metrics: measuring what really matters – the quality of our software – is hard, if not impossible. so instead we have to measure what we can, but it isn’t always clear how that relates to our underlying goal.
but what does it mean?
there are some excellent tools out there like sonar that give you a great overview of your code using a variety of common metrics. the trouble often is that developers don't know (or care) what they mean. is a complexity of 17.0 / class good or bad? i'm 5.6% tangled – but maybe there's a good reason for that. what's a reasonable target for this code base? and is lcom4 a good thing or a bad thing? it sounds like a cancer treatment, to be honest.
sure, if i'm motivated enough i can dig in and figure out what each metric means and we can try and agree reasonable targets and blah blah blah. c'mon, i'm busy delivering business value. i don't have time for that crap. it's all just too subtle so it gets ignored. except by management.
a better way
surely there’s got to be a better way to measure “code quality”?
1. the team must agree

whatever you measure, it's important the team agree and understand what it means. if there's a measure half the team don't agree with, then it's unlikely it will get better. some people will work towards improving it, others won't – and will let it get worse. the net effect is likely to be heartache and grief all round.
2. measure what’s important
you don't have to measure the "standard" things – like code coverage or cyclomatic complexity. as long as the team agree it's a useful thing to measure, everyone agrees it needs improving and can commit to improving it – then it's a useful measure.
a colleague of mine at youdevise spent his 10% time building a tool to track and graph various measures of our code base. but, rather unusually, these weren’t the usual metrics that the big static analysis tools gather – these were much more tightly focused, much more specific to the issues we face. so what kind of things can you measure easily yourself?
- if you have a god class, why not count the number of lines in the file? less is better.
- if you have a 3rd party library you're trying to get rid of, why not count the number of references to it?
- if you have a class you're trying to eliminate, why not count the number of times it's imported?
these simple measures represent real technical debt we want to remove – by removing technical debt we will be improving the quality of our code base. they can also be incredibly easy to gather, the most naive approach only needs grep & wc.
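gathering measures like these really is grep & wc territory. a minimal python sketch – the file names, the `.java` extension and the patterns below are assumptions for illustration, not anything prescribed – might look like:

```python
import re
from pathlib import Path

def count_lines(path):
    # lines in a single file - e.g. the god class you're trying to shrink
    with open(path) as f:
        return sum(1 for _ in f)

def count_references(src_root, pattern):
    # occurrences of a pattern (an import, a library name) across the tree;
    # assumes a java code base - adjust the glob for your language
    regex = re.compile(pattern)
    return sum(len(regex.findall(p.read_text()))
               for p in Path(src_root).rglob("*.java"))

# hypothetical usage:
# count_lines("src/main/java/GodClass.java")
# count_references("src", r"import\s+legacy\.LegacyDateUtils")
```

run it nightly, print two numbers, and you have a metric the whole team understands at a glance.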
it doesn't matter what you measure – as long as the team believe whatever you do measure should be improved, it gives you an insight into the quality of your code base, using a measure you care about.
3. make it visible
finally, put this on a screen somewhere – next to your build status is good. that way everyone can see how you’re doing and gets a constant reminder that quality is important. this feedback is vital – you can see when things are getting better and, just as importantly, when things start to slip and the graph veers ominously upwards.
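one simple way to feed such a screen – assuming you're happy with a flat csv file and a scheduled job, which is purely an illustrative choice – is to append each day's reading and let whatever graphs your build status plot the file:

```python
import csv
import datetime

def record_metric(csv_path, name, value):
    # append today's reading so the dashboard can graph the trend over time
    with open(csv_path, "a", newline="") as f:
        csv.writer(f).writerow(
            [datetime.date.today().isoformat(), name, value])

# hypothetical usage, run from a nightly job:
# record_metric("metrics.csv", "god_class_lines", 4213)
```

the point isn't the tooling – it's that the history is kept, so the team can see the line trend down (or catch it trending up) week by week.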
keep it simple, stupid
code quality is such an abstract concept it's impossible to measure directly. instead, focus on specific things you can measure easily. the simpler a metric is to understand, the easier it is to improve. if you have to explain what a metric means, you're doing it wrong. try and focus on just a few things at any one time – if you're tracking 100 different metrics, it's going to be sheer luck that on average they're all getting better. if we instead focus on half a dozen, i can remember them – the very least i'll do is not let them get worse; and because they're clear in my mind, i can improve them when i can.
do you use metrics? if so, what do you measure? if not, do you think there’s something you could measure?