Quality Tools: Humble Servants or Tyrants?
Quality tools can increase the internal quality of code, and SonarQube is one of the better ones, but their reports should not be treated as gospel. Nicolas Frankel explains.
I’ve always been an ardent proponent of internal quality in software, because in my various experiences, I’ve had more than my share of crappy codebases to maintain. I believe that quality tools can increase the internal quality of the code, thus decreasing maintenance costs in the long run. However, I don’t think that such tools are the only way to achieve that – I’m also a firm believer in code reviews.
Regarding quality tools, I started with Checkstyle and then PMD, both static analysis tools that work on source code. I've also used FindBugs, which analyzes the compiled bytecode rather than the source, but only sparingly, because it seemed to me to report too many false positives.
Finally, I found SonarQube (called Sonar at the time). I didn’t immediately fall in love with it, and it took me some months to get rid of my former Checkstyle and PMD companions. As soon as I did, however, I wanted to put it in place in every project I worked on – and on others too. When it added a timeline to see the trend regarding violations and other metrics, I knew it was the quality tool to use.
Now that the dust has finally settled, I see few organizations that use no quality tool at all, and that is a good thing. I can't imagine working without one: whether as a developer or a team lead, and whether using Sonar or simpler tools, their added value is simply too great to ignore.
On the other hand, I’m very wary of a rising trend: it seems as if once Sonar is in place, developers and managers alike treat its reports as the word of God. I can expect it from managers, but I definitely don’t want my fellow developers to set their brains aside and delegate their responsibilities to a tool, whatever the tool. Things become even worse when metrics from those rules are used as build breakers: the build fails because your project didn’t achieve some pre-defined metric.
Of course, there are some ways to mitigate the problem:
- Use only a subset of Sonar rules. For example, the violation that checks for a private static final serialVersionUID attribute if the class directly or transitively implements Serializable is completely useless IMHO.
- Use the
- Configure each project. For example, Vaadin projects should exclude graphical classes from the unit test coverage as they probably have no behavior, thus no associated tests (do you unit test your JSP?).
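For context, the serialVersionUID rule mentioned above wants every Serializable class to carry an explicit version identifier. A minimal sketch of what it asks for (the class name and field are invented for illustration):

```java
import java.io.Serializable;

// UserPreferences is a made-up example class.
public class UserPreferences implements Serializable {

    // The attribute the Sonar violation insists on: without it, the JVM
    // computes an id from the class structure, so any refactoring changes
    // it and breaks deserialization of previously serialized instances.
    private static final long serialVersionUID = 1L;

    private String theme = "dark";
}
```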
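One way to configure such per-project exclusions is SonarQube's coverage-exclusion property; the path patterns below are illustrative and would need to match your actual package layout:

```properties
# sonar-project.properties
# Exclude purely graphical classes (e.g. Vaadin views) from coverage;
# the patterns are examples, not a recommendation.
sonar.coverage.exclusions=**/ui/**/*.java,**/views/**/*.java
```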
I’m afraid those are only ways to work around the limits. Every tool comes with a severe limitation: it cannot distinguish between contexts, and applies the same rules regardless. As a side note, notice this is also the case in big companies… The funniest part is that software engineers are generally the most vocal opponents of metrics-driven management – then they put SonarQube in place to assert code quality and are stubborn when it comes to contextualizing the results.
Quality tools are a big asset toward a more maintainable code base, but blindly applying a rule because the tool said so – or even worse, riddling your code base with // NOSONAR comments – is a serious mistake. I’m in favor of using tools, not of tools ruling me. Know what I mean?
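To make that complaint concrete, this is what such a suppression looks like in Java; the class and method are hypothetical:

```java
// LegacyParser is an invented example class.
public class LegacyParser {

    @SuppressWarnings("unchecked")
    public static <T> T unsafeCast(Object value) {
        // The trailing marker silences every Sonar issue reported on this
        // line, without recording which rule was suppressed or why.
        return (T) value; // NOSONAR
    }
}
```

The comment applies to the whole line and to all rules at once, which is exactly why scattering it around a code base hides real problems along with the false positives.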
See more at: http://blog.frankel.ch/quality-tools-humble-servants-or-tyrans
Published at DZone with permission of Nicolas Fränkel, DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.