Ever since software development began, there has been a lot of chaos, and people have always asked, "Is it done right?" We get a mix of answers:
- The oldest one: Well, it compiles
- It seems to work
- The universal favorite: The users aren't complaining (until a user starts complaining or we have to add a new feature; then we find out how well we really did)
- The most recent answer: The automated tests pass (but how do you know whether you have enough tests, and what about the things that can't be covered by tests?)
How do we evaluate the quality of the code and of the developer who wrote it? It is easy enough to evaluate factory workers (units produced with acceptable quality), lawyers (cases won), and so on.
Well, it can be answered if we can measure software quality. Software quality can be defined through abstractions, examining it from different perspectives and rating it along different dimensions.
Let's put ourselves to the test; let's see if we can read this:
I cdnuolt blveiee taht I cluod aculaclty uesdnatnrd waht I was rdgnieg. The phaonmneal pweor of the hmuan mnid. It deosn't mttaer in waht oredr the leteerrs in a wrod are, the olny iprmoatnt tihng is taht the frist and lsat ltteer be in the rghit pclae. The rset can be a taotl msess and you can sitll raed it wouthit a porbelm. Tihs is bcuseae the huamn mnid deos not raed ervey lteter by istlef, but the wrod as a wlohe.
The preceding text does not contain a single correctly spelled word, yet it proves readable. From a product perspective, one could argue that although the text is flawed, it does the job, since it remains understandable. But this has the side effect of degrading the reading experience, requiring additional effort to reconstruct the words and phrases. The reader unconsciously strains to adapt and decipher the mangled words. On the other hand, an editor assigned to improve or extend the text would have to cope with this non-standard writing practice, delaying the whole process.
Now swap the corrupted text for a software product's source code. The reader becomes the end user of the product, and the editor the developer. Both experience product quality, each from their own perspective: the end user from a functional one, the developer from a structural one.
Software quality measurement is a quantitative process that sums up weighted attribute values, each of which describes a specific software characteristic. For each characteristic, a set of such measurable attributes is defined.
Now the question is: what are software characteristics? They could include:
- Whether the code follows a specific coding convention
- Whether well-known/established good practices have been followed and well-known/established bad practices avoided
- Whether there are any potential bugs, performance issues, or security vulnerabilities
- Whether there is any duplicate code
- Whether the code logic is overly complex
- Whether the public API has good documentation and comments
- Whether the code has unit tests
- Whether the code follows good design and architecture principles
How do we define the corresponding attributes?
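To make these characteristics concrete, here is a deliberately flawed, hypothetical Java class; the comments mark which characteristic each flaw touches (the class and its names are invented for illustration):

```java
import java.util.Arrays;
import java.util.List;

// Deliberately flawed example: each comment names the quality
// characteristic the line violates. The class itself is hypothetical.
public class order_processor {                  // naming convention violated

    public double total(List<Double> prices) {  // no documentation on a public method
        double t = 0;
        for (double p : prices) {
            t += p * 1.21;                      // magic number: what is 1.21?
        }
        return t;
    }

    // Duplicate code: same loop as total(), only the rate differs
    public double discountedTotal(List<Double> prices) {
        double t = 0;
        for (double p : prices) {
            t += p * 1.21 * 0.9;
        }
        return t;
    }

    public static void main(String[] args) {
        order_processor op = new order_processor();
        System.out.println(op.total(Arrays.asList(100.0)));
    }
}
```

Every flaw above maps to a measurable attribute: naming violations can be counted, magic numbers flagged, duplicated blocks detected, and undocumented public methods tallied.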
After the software characteristics have been defined, the next question that comes to mind is: how do we enforce them automatically? The answer lies in static code analysis.
Static Code Analysis
Static code analysis is a collection of algorithms and techniques used to analyze source code in order to automatically find potential errors or poor coding practices. The idea is similar in spirit to compiler warnings (which can be useful for finding coding errors), but takes it a step further, finding bugs that are traditionally caught with run-time techniques such as testing and debugging.
Static code analysis, also commonly called "white-box" testing, examines applications in a non-runtime environment. It is one of the few methods that can cover the entire code base and systematically identify vulnerable patterns. Static code analysis is also considered a way to automate the code review process.
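A concrete example of the kind of defect that compiles cleanly but lurks until run time. This hypothetical snippet contains a classic integer-overflow bug in a midpoint calculation; tools in this family flag exactly this pattern, whereas testing only catches it if someone thinks to test with huge inputs:

```java
// Compiles without a single warning, yet midpointBuggy() is wrong:
// (a + b) overflows int before the division, so the "midpoint" of two
// large values comes out negative. Static analyzers flag this family
// of overflow mistakes without ever running the program.
public class Midpoint {

    static int midpointBuggy(int a, int b) {
        return (a + b) / 2;        // overflows when a + b > Integer.MAX_VALUE
    }

    static int midpointSafe(int a, int b) {
        return a + (b - a) / 2;    // no intermediate overflow for a <= b
    }

    public static void main(String[] args) {
        int max = Integer.MAX_VALUE;
        System.out.println(midpointBuggy(max, max)); // -1, not Integer.MAX_VALUE
        System.out.println(midpointSafe(max, max));  // Integer.MAX_VALUE
    }
}
```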
The tasks solved by static code analysis software can be divided into three categories:
- Detecting errors in programs
- Making recommendations on code formatting: some static analyzers let you check whether the source code conforms to the formatting standard accepted in your company
- Computing metrics: software metrics give you a numerical value for some property of the software or its specifications
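To make the metrics category concrete, here is a deliberately crude sketch of one: counting decision-point keywords in a method body as a rough approximation of cyclomatic complexity (complexity = decisions + 1). Real tools parse the syntax tree rather than matching tokens like this; everything here is a simplified illustration:

```java
import java.util.Arrays;
import java.util.List;

// Crude sketch of "metrics computation": count decision-point tokens in
// a method body to approximate cyclomatic complexity. Real analyzers
// work on the parsed syntax tree, not on raw token matching.
public class ComplexityEstimate {

    private static final List<String> DECISION_TOKENS =
            Arrays.asList("if", "for", "while", "case", "&&", "||", "catch");

    static int estimate(String methodBody) {
        int decisions = 0;
        // Split on characters that cannot be part of a token we count
        for (String token : methodBody.split("[\\s(){};]+")) {
            if (DECISION_TOKENS.contains(token)) {
                decisions++;
            }
        }
        return decisions + 1;      // base complexity of 1 plus one per decision
    }

    public static void main(String[] args) {
        String body = "if (a > 0) { for (int i = 0; i < a; i++) { sum += i; } }";
        System.out.println(estimate(body)); // 3: one if, one for, plus the base 1
    }
}
```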
There are many static analysis tools available; however, Checkstyle, PMD, and FindBugs are among the best known and are used in most projects.
Checkstyle is an open source tool that can help enforce coding standards and best practices, with a particular focus on coding conventions. Checkstyle does cover some static code analysis features (in much the same way as PMD and FindBugs); however, we will mainly concentrate on detecting and enforcing coding conventions with it.
Main Focus: Conventions
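As a sketch of how conventions get enforced, here is a minimal Checkstyle configuration. The module names below are real Checkstyle checks, but the exact placement of modules (for example, whether LineLength sits under Checker or TreeWalker) varies between Checkstyle versions, so treat this as illustrative:

```xml
<?xml version="1.0"?>
<!DOCTYPE module PUBLIC
    "-//Checkstyle//DTD Checkstyle Configuration 1.3//EN"
    "https://checkstyle.org/dtds/configuration_1_3.dtd">
<!-- Minimal illustrative configuration; module placement can differ
     across Checkstyle versions. -->
<module name="Checker">
  <module name="LineLength">
    <property name="max" value="120"/>
  </module>
  <module name="TreeWalker">
    <module name="MethodName"/>    <!-- methods must be lowerCamelCase -->
    <module name="ConstantName"/>  <!-- constants must be UPPER_SNAKE_CASE -->
    <module name="EmptyBlock"/>    <!-- flags empty catch/if/else bodies -->
  </module>
</module>
```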
PMD is a static code analysis tool capable of automatically detecting a wide range of potential defects and unsafe or non-optimized code (bad practices). Whereas other tools such as Checkstyle can verify that coding conventions and standards are respected, PMD focuses more on preemptive defect detection (ensuring good practices are followed). It comes with a rich and highly configurable set of rules, and you can easily configure which particular rules should be used for a given project.
The bad-practices category consists of well-known behaviors that almost invariably lead to difficulties over time. Here are a few examples of bad practices:
- Catching an exception without doing anything
- Having dead code
- Having too many overly complex methods
- Using implementations directly instead of interfaces
- Overriding the hashCode() method without overriding equals(Object)
- Synchronizing on a Boolean (which could lead to deadlock)
- Exposing internal representation by returning a reference to a mutable object
Main Focus: Bad Practices
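To see why one item on the list above, overriding hashCode() without equals(Object), is a trap, consider this hypothetical class. The two points hash identically, but the inherited equals() still compares references, so a HashSet quietly keeps both:

```java
import java.util.HashSet;
import java.util.Set;

// Demonstrates the "hashCode() without equals()" bad practice: two
// logically identical points land in the same hash bucket, but the
// inherited equals(Object) compares references, so HashSet treats
// them as distinct and deduplication silently fails.
public class Point {
    final int x, y;

    Point(int x, int y) { this.x = x; this.y = y; }

    @Override
    public int hashCode() {        // overridden...
        return 31 * x + y;
    }
    // ...but equals(Object) is NOT overridden: reference equality applies

    public static void main(String[] args) {
        Set<Point> points = new HashSet<>();
        points.add(new Point(1, 2));
        points.add(new Point(1, 2)); // logically a duplicate
        System.out.println(points.size()); // 2, not the expected 1
    }
}
```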
FindBugs is another static analysis tool for Java, similar in some ways to Checkstyle and PMD, but with a quite different focus. FindBugs is not concerned with formatting or coding standards, and is only marginally interested in best practices: in fact, it concentrates on detecting potential bugs and performance issues. It does a very good job of finding these and can detect many types of common, hard-to-find bugs. Indeed, FindBugs can detect quite a different set of issues than PMD or Checkstyle, with a relatively high degree of precision. As such, it is a useful addition to your static analysis toolbox.
Main Focus: Potential Bugs
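An example of the kind of subtle bug FindBugs is good at: comparing strings with == checks object identity, not content. It often "works" by accident, because interned literals share one object, and then fails for constructed strings, which makes it hard to spot in a quick manual review (FindBugs reports this family of mistakes as comparing strings with ==):

```java
// Comparing strings with == checks identity, not content. Interned
// literals make it look correct in tests, then it fails the moment a
// string is constructed at run time -- exactly the kind of bug a
// static analyzer catches and a casual review misses.
public class StringCompare {

    static boolean sameReference(String a, String b) { return a == b; }    // buggy
    static boolean sameContent(String a, String b)  { return a.equals(b); } // correct

    public static void main(String[] args) {
        String literal = "done";
        String built = new String("done"); // same content, distinct object
        System.out.println(sameReference(literal, built)); // false
        System.out.println(sameContent(literal, built));   // true
    }
}
```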
As per the HP Fortify website:
"HP Fortify Static Code Analyzer helps verify that your software is trustworthy, reduces costs, increases productivity, and implements secure coding best practices ..."
- Reduces business risk by identifying vulnerabilities that pose the biggest threat
- Identifies and removes exploitable vulnerabilities quickly with a repeatable process
- Reduces development costs by identifying vulnerabilities early in the SDLC
- Educates developers in secure coding practices while they work
- Brings development and security teams together to find and fix security issues
Main Focus: Security Vulnerabilities
SonarQube collects and analyzes source code, measuring quality and providing reports for your projects. It combines static and dynamic analysis tools and enables quality to be measured continuously over time. Everything that affects the code base, from minor styling details to critical design errors, is inspected and evaluated by SonarQube. Developers can thereby access and track code analysis data ranging from styling errors, potential bugs, and code defects to design inefficiencies, code duplication, lack of test coverage, and excess complexity. The Sonar platform analyzes source code from different aspects, drilling down through the code layer by layer, from the module level down to the class level. At each level, SonarQube produces metric values and statistics, revealing problematic areas in the source that require inspection or improvement.
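For a sense of how little setup an analysis needs, here is a minimal sonar-project.properties for the SonarScanner CLI. The project key, name, paths, and server URL are placeholders, and the exact set of required properties varies by SonarQube version and language plugin:

```properties
# Minimal, illustrative sonar-project.properties for the SonarScanner CLI.
# All values below are placeholders for this sketch.
sonar.projectKey=com.example:demo-project
sonar.projectName=Demo Project
sonar.projectVersion=1.0
sonar.sources=src/main/java
sonar.java.binaries=target/classes
sonar.host.url=http://localhost:9000
```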
You may wonder: if SonarQube uses existing, proven tools, why use it at all? Couldn't you just configure those tools as plugins in your CI server and be done with it? Not quite; there are several caveats.
- As of now, CI tools do not have a plugin that makes all of these tools play together
- As of now, CI tools do not have plugins that provide the kind of drill-down features SonarQube does
- CI plugins do not report an overall compliance value
- CI plugins do not provide a managerial perspective
- As of now, there is no CI plugin for design/architecture issues
- CI plugins do not provide a dashboard for overall project quality
- SonarQube doesn't just show you what's wrong. It also offers quality-management tools to actively help you put it right
- SonarQube's commercial competitors seem to focus their definition of quality mainly on bugs and complexity, whereas SonarQube's offerings span what its creators call the Seven Axes of Quality
- SonarQube addresses not just bugs but also coding rules, test coverage, duplications, API documentation, complexity, and architecture, providing all these details in a dashboard
- It gives you a moment-in-time snapshot of your code quality today, as well as trends of lagging (what's already gone wrong) and leading (what's likely to go wrong in the future) quality indicators
- It provides you with metrics to help you make the right decisions. In nearly every industry, serious leaders track metrics. Whether it's manufacturing defects and waste, sales and revenue, or baseball hits and RBIs, there are metrics that tell you how you're doing: whether you're doing well overall, and whether you're getting better or worse.
What makes SonarQube really stand out is that it not only provides metrics and statistics about your code but also translates these otherwise abstract values into real business values such as risk and technical debt. SonarQube addresses not only core developers and programmers but also project managers and even higher management levels, thanks to the management perspective it offers. This is further strengthened by SonarQube's enhanced reporting capabilities and its multiple views addressing source code from different perspectives.
From a managerial perspective, transparent and continuous access to historical data enables the manager to ask the right questions.
Note: SonarQube is in no way competing with any of the above static analysis tools; rather, it complements them and works very well with them. In fact, it relies on these very tools (Checkstyle, PMD, and FindBugs) for much of its analysis. Further, we can integrate Fortify with SonarQube using this plugin.