How do you know whether using code metrics really does help you produce code with fewer bugs?
We can use historic data and some code forensics to prove it.
All projects have historic data. This is usually stored in your bug tracking and source code control tools.
We can use the data stored in these systems to perform ‘code forensics.’
We use the historic data from real issues to see if they could have been avoided.
This can all be done without affecting any of your existing code or adding any risk to your project.
Surely that’s a useful software engineering technique?
Firstly, I realise that most bugs you find in a typical project are not caused by code quality; it’s probably only a small percentage. However, the ones that are caused by poor quality are avoidable.
It is these avoidable quality issues that I want to concentrate on. I want to be able to determine when exceeding a metric threshold is likely to result in a problem.
It’s possible that if enough code forensics are run on my individual code base, I may be able to come up with some numbers that are useful to me in the future.
In the long term it may be possible for someone to do a large study and come up with better guidelines.
The process is quite straightforward.
1. Query your bug tracking tool for all the issues that required a code fix.
2. Assess the defects.
3. Identify the code.
4. Get the root cause.
Query your bug tracking tool.
The first thing you need to do is identify all your recent bugs, say from the last month.
Run a simple query to bring back all of the bugs raised during that period. This should be easy - otherwise you’re using the wrong tool!
Now you have a full list of all of the defects that you are potentially interested in.
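As a minimal sketch of that first query, assuming a Jira-style tracker that accepts JQL (the project key SHOP and the exact field names are hypothetical; your tracker’s syntax may differ):

```python
from datetime import date, timedelta

def recent_bug_query(project_key, days=30):
    """Build a JQL-style query for bugs resolved in the last `days` days.

    Field names (issuetype, resolution, resolved) follow Jira conventions
    and are assumptions; adjust for your own tracker.
    """
    since = date.today() - timedelta(days=days)
    return (
        f'project = {project_key} AND issuetype = Bug '
        f'AND resolution = Fixed AND resolved >= "{since.isoformat()}"'
    )

print(recent_bug_query("SHOP"))
```

Feeding this string to your tracker’s search API gives you the candidate defect list for the period.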
Assess the defects.
You now need to go through each of the bugs and assess whether the issue really was a code issue.
Other possibilities include:
• A requirements issue.
• An issue with the deployment environment.
• A configuration issue.
What you are left with is a list of issues that were genuinely caused by bad code.
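Once the triage is done, the filtering itself is trivial. A sketch, using entirely hypothetical issue keys and a `cause` tag assigned during the manual assessment:

```python
# Hypothetical triage data: each defect tagged with its root-cause category.
defects = [
    {"key": "SHOP-101", "cause": "code"},
    {"key": "SHOP-102", "cause": "requirements"},
    {"key": "SHOP-103", "cause": "environment"},
    {"key": "SHOP-104", "cause": "code"},
    {"key": "SHOP-105", "cause": "configuration"},
]

def code_defects(defects):
    """Keep only the defects whose root cause was the code itself."""
    return [d["key"] for d in defects if d["cause"] == "code"]

print(code_defects(defects))  # ['SHOP-101', 'SHOP-104']
```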
Identify the problematic code.
You now need to map your list of issues back to the relevant source code.
You will not be able to do this unless you have been disciplined with your check-in comments. In most places I have worked, when checking in a bug fix you always start the comment with a reference to the problem it fixes.
Assuming you have been tagging your commits with the issue reference, you can run a simple query to see which code was affected. This can be done in FishEye, TortoiseSVN, and similar tools.
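The same mapping can be scripted. A sketch that pairs issue keys found in commit messages with the files each commit touched (the commit log here is hypothetical; in practice it would come from `git log --name-only` or your SCM tool’s API):

```python
import re

# Hypothetical commit log: (message, files touched).
commits = [
    ("SHOP-101 fix null check in order total", ["src/order.py"]),
    ("SHOP-104 guard against empty basket", ["src/basket.py", "src/order.py"]),
    ("tidy imports", ["src/util.py"]),
]

# Matches Jira-style issue keys such as SHOP-101.
ISSUE_KEY = re.compile(r"\b([A-Z]+-\d+)\b")

def files_per_issue(commits):
    """Map each issue key mentioned in a commit message to the files it changed."""
    mapping = {}
    for message, files in commits:
        for key in ISSUE_KEY.findall(message):
            mapping.setdefault(key, set()).update(files)
    return mapping

print(files_per_issue(commits))
```

Commits without an issue reference (like “tidy imports” above) simply drop out of the mapping, which is exactly why the check-in discipline matters.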
Get to the root cause.
Finally you have something to look at, so what do you do with it? Well, first you have to understand how the fix works and decide whether it was a code quality issue. Perhaps the issue was a simple error rather than something a metric would have caught.
However, you might open the code and find something like this: the average complexity in our system is 10, but this piece of code has a complexity of 106!
This was an accident waiting to happen!
Clearly the bug would have been more likely to be caught had we failed the build because the code did not meet the expected quality standards. This was a potentially avoidable error.
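A quality gate like that is simple to express. A sketch, where the per-file complexity figures come from a static-analysis tool and the threshold of 15 is purely illustrative:

```python
# Hypothetical metrics report: cyclomatic complexity per file.
COMPLEXITY_THRESHOLD = 15  # illustrative; tune for your own codebase

metrics = {
    "src/order.py": 106,   # the accident waiting to happen
    "src/basket.py": 8,
    "src/util.py": 12,
}

def quality_gate(metrics, threshold=COMPLEXITY_THRESHOLD):
    """Return the files that should fail the build for exceeding the threshold."""
    return sorted(f for f, c in metrics.items() if c > threshold)

print(quality_gate(metrics))  # ['src/order.py']
```

Wired into CI, a non-empty result fails the build before the over-complex code ever ships.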
Another way to try to establish a link between poor code quality and defects is to take advantage of something such as the Sonar hotspot view to see the most complex classes in your system.
You can then work backwards and examine the history of those files to see if those classes are causing issues in your codebase.
The trouble is that it is not that simple. High-complexity files that are used infrequently are less likely to cause you trouble than lower-complexity files that are used more often.
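One way to account for that is to weight complexity by how often a file actually changes. A sketch with hypothetical numbers (in practice the churn figures would come from counting commits per file in `git log --name-only`):

```python
# Hypothetical inputs: complexity from a metrics tool, churn in commits/year.
complexity = {"src/order.py": 106, "src/basket.py": 30, "src/legacy.py": 90}
churn = {"src/order.py": 42, "src/basket.py": 3, "src/legacy.py": 1}

def risk_ranking(complexity, churn):
    """Rank files by complexity x churn, highest risk first.

    A complex but dormant file scores lower than a moderately complex
    file that is touched every week.
    """
    scores = {f: complexity[f] * churn.get(f, 0) for f in complexity}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(risk_ranking(complexity, churn))
```

Note how `src/legacy.py`, despite its high complexity, drops below files that change far more often.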
Automating the process.
For this to be of any use it probably needs to be automated so that a large sample of data can be examined. Some tools already make this link between defects and the related fix source code.
The next step is to pull that data back and run your metrics analysis on the files.
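Putting the pieces together, the whole analysis reduces to one question per defect: did its fix touch code that was over the metric threshold? A sketch, with all inputs hypothetical (in a real pipeline they would come from the tracker API, the SCM log, and a metrics tool):

```python
# Hypothetical joined data from the earlier steps.
fix_files = {"SHOP-101": {"src/order.py"}, "SHOP-104": {"src/basket.py"}}
complexity = {"src/order.py": 106, "src/basket.py": 8}
THRESHOLD = 15  # illustrative

def avoidable_defects(fix_files, complexity, threshold=THRESHOLD):
    """Issues whose fix touched at least one file over the complexity threshold."""
    return sorted(
        issue for issue, files in fix_files.items()
        if any(complexity.get(f, 0) > threshold for f in files)
    )

hits = avoidable_defects(fix_files, complexity)
print(f"{len(hits)} of {len(fix_files)} defects touched over-threshold code: {hits}")
```

The ratio of hits to total defects is the number this whole exercise is after: how much of your bug history a metrics gate could plausibly have prevented.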
None of this is conclusive; however, I still think it's a useful technique.
What it is most likely to prove is that you have had past problems which you could have avoided with metrics. It should also give you an idea of which metrics to use.
It's also likely to show that most problems are not caused by poor code quality, but other factors instead.