The Evolution of Software Quality Processes and Tools
A history of software quality analysis methodologies and tools for ensuring internal quality, and of the philosophies behind them.
In this post I discuss the history of software quality analysis approaches and tools that focus on ensuring internal quality (maintainability, comprehensibility, etc.). I will distinguish different generations of quality analysis tool architectures and the quality assurance mindset behind each of them. This categorization allows, of course, no clear cut. Furthermore, as managing partner of CQSE, a tool provider, my point of view is certainly biased, so please regard this post as a very personal view on the evolution of software quality analysis methods. The three generations I distinguish represent our own learning curve as a company (since 2009) and as researchers (since 2005 at TU Munich) in this area.
When we started developing Teamscale, I was often asked by customers, former colleagues, and friends why we were building a new software quality analysis tool when several already existed (including our own tool ConQAT and the even better-known alternative SonarQube). The answer was and is rather simple: because none of the tools and approaches we had seen and used before had any significant impact on software quality. Lehman's law that quality declines over time seemed to be inevitable.
1st Generation: Quality Gates and Snapshot-Driven Local Analyses
In 2005, when we started our research on software quality, we perceived the usual situation in practice as illustrated in the figure above. At that time, quality analyses were usually not integrated into developers' everyday work. The development team checked their changes into the version management system. The quality analyses were typically performed by a dedicated quality engineer on his local computer at a certain point in time, usually in the context of a quality gate. The idea was that this dedicated role was capable of performing in-depth analyses with specialized analysis tools. The result of his work was a set of issues that he put onto a list of tasks that needed to be done in order to pass the quality gate.
Effects: (Almost) No Impact on Quality.
The reason why such approaches fail is that the analyses are not performed on a continuous basis. In most cases, the gates are close to releases. Between two quality gates, a large number of deficits usually accumulates, so they cannot be removed in the limited time before a release. In fact, it is not even a good idea to massively remove maintenance deficits shortly before a release, as the risk of introducing bugs is high.
2nd Generation: Agile Spirit and the Nightly Build
The second generation of software quality analysis approaches emerged when agile development processes became en vogue. The former principle that the quality engineer was responsible for the quality of the codebase was replaced by the vision that the whole team takes responsibility. Consequently, the quality engineer became obsolete, and the tools he had used were integrated into the nightly build. Continuous integration was also a rising concept in those days. This was the time we started developing ConQAT, a tool designed for configuring project-specific quality analyses that were executed in a nightly build and emitted an HTML presentation of the different analysis results. Sonar (now SonarQube) was also born at that time, following a similar concept. Thus, all developers were able to inspect the results using an ordinary browser.
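The nightly-build workflow of this generation can be sketched as a small batch job. Everything below is a hypothetical illustration: the check (flagging overlong Python functions) and the report format are assumptions for the sketch, not ConQAT's or SonarQube's actual implementation.

```python
import html
import pathlib

MAX_LINES = 40  # assumed threshold for the illustrative "long function" check

def find_long_functions(source: str, path: str):
    """Very naive per-file check: count lines between 'def' headers."""
    findings = []
    lines = source.splitlines()
    start, name = None, None
    for i, line in enumerate(lines):
        if line.lstrip().startswith("def "):
            if start is not None and i - start > MAX_LINES:
                findings.append((path, start + 1, f"function '{name}' is {i - start} lines long"))
            start = i
            name = line.split("def ")[1].split("(")[0]
    if start is not None and len(lines) - start > MAX_LINES:
        findings.append((path, start + 1, f"function '{name}' is {len(lines) - start} lines long"))
    return findings

def nightly_report(root: str) -> str:
    """Batch-analyze every file under `root` from scratch and emit one HTML table."""
    findings = []
    for path in pathlib.Path(root).rglob("*.py"):
        findings.extend(find_long_functions(path.read_text(), str(path)))
    rows = "".join(
        f"<tr><td>{html.escape(p)}</td><td>{line}</td><td>{html.escape(msg)}</td></tr>"
        for p, line, msg in findings
    )
    return f"<table><tr><th>File</th><th>Line</th><th>Message</th></tr>{rows}</table>"
```

Note the defining property (and weakness) of this generation: the whole codebase is re-analyzed from scratch every night, and the results only reach a developer if he opens the report himself.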
Effects: Low Impact on Quality.
At that time, it was surprising that although in principle all relevant information was available at all times, almost no impact on quality could be achieved. Asking developers why they did not use the analysis results in their everyday work revealed the major problems:
- Developers were flooded with thousands of findings in the dashboard,
- There was no personal feedback on changes, and
- There was no strategy for which findings should be addressed.
3rd Generation: Individual Commit-driven Feedback
The main idea behind this approach is reversing the information flow to the developers. Instead of requiring developers to pull findings from dashboards (or, even worse, to execute analyses locally), we push the information to the developers' IDE. Without additional effort, developers get the information relevant to their working context and can remove findings as they work on the code anyway. After every commit, they receive feedback on whether they introduced new findings or changed code that already contained findings. This close and fast feedback loop allows addressing findings immediately, while the developer is working on the affected code. At that point in time, he is still familiar with the modifications, which simplifies fixing the findings. No additional effort for program comprehension and testing is required.
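The per-commit feedback step boils down to a delta between two analysis states. Here is a minimal sketch under assumptions of mine: the `Finding` fingerprint (file, check, hash of the offending snippet) is chosen so that pure line shifts do not count as new findings; it is not Teamscale's actual mechanism.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Finding:
    file: str          # path of the affected file
    check: str         # which analysis produced the finding
    snippet_hash: int  # hash of the offending code, so pure line shifts still match

def commit_feedback(before: set, after: set):
    """Return (introduced, fixed) findings relative to the previous analysis state."""
    return after - before, before - after

# Usage: the developer only sees what their own commit changed.
old = {Finding("billing.py", "long-method", 11), Finding("util.py", "clone", 42)}
new = {Finding("billing.py", "long-method", 11), Finding("api.py", "unused-import", 7)}
introduced, fixed = commit_feedback(old, new)
```

The set difference is what makes the feedback personal: pre-existing findings stay out of the developer's way, while anything newly introduced by the commit surfaces immediately.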
To achieve this, the feedback must be fast; the long-running analyses of the nightly build are therefore not sufficient. The analyses need to be performed on the change sets and driven by commits to the version control system, which requires a truly incremental analysis. Especially for findings that cannot be detected locally in a single file (clone detection, architecture analyses, etc.), different algorithms are needed compared to the batch-oriented tools of the 2nd generation. Teamscale is currently the only tool available that is based on such a truly incremental analysis engine. Due to its radically new architecture, we regard it as the next generation of software quality analysis.
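To make the cross-file case concrete, here is a toy sketch of an incremental clone index, assuming a simple chunk-hashing scheme: on each commit, only the changed file's chunks are removed and re-inserted, so the cost scales with the size of the change set rather than the codebase. This illustrates the idea of incrementality only; it is not Teamscale's actual algorithm.

```python
import hashlib
from collections import defaultdict

CHUNK = 5  # assumed chunk size: clones are chunks of 5 lines appearing more than once

def chunks(lines):
    """Yield (start line, hash) for every CHUNK-line window of a file."""
    for i in range(len(lines) - CHUNK + 1):
        digest = hashlib.sha1("\n".join(lines[i:i + CHUNK]).encode()).hexdigest()
        yield i, digest

class CloneIndex:
    """Incremental clone index: untouched files keep their entries across commits."""
    def __init__(self):
        self.by_hash = defaultdict(set)  # chunk hash -> {(file, start line)}
        self.by_file = defaultdict(set)  # file -> {chunk hashes it contributes}

    def update_file(self, path, source):
        # Drop the stale entries of just this file ...
        for h in self.by_file.pop(path, set()):
            self.by_hash[h] = {loc for loc in self.by_hash[h] if loc[0] != path}
        # ... then re-insert its fresh chunks. No other file is touched.
        for line, h in chunks(source.splitlines()):
            self.by_hash[h].add((path, line))
            self.by_file[path].add(h)

    def clones(self):
        """A chunk hash occurring in more than one location indicates a clone."""
        return [locs for locs in self.by_hash.values() if len(locs) > 1]
```

A batch tool would rebuild this index from scratch on every run; keeping it alive and patching only the changed file is what turns a nightly analysis into commit-driven feedback.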
Effects: So far, the experience we observe at our customers confirms the impact on quality: in most projects, developers react to issues that they introduced with their changes, and the code quality gradually improves.
While the immediate feedback enables developers to improve code quality while producing new code or modifying old code, we usually recommend that our customers (re-)establish the role of a Quality Engineer. The activities of this role are, of course, different from those in the quality gate era. The Quality Engineer performs regular inspections of the findings, but always with a strong focus on findings in code that was modified during the last development iteration. He furthermore supports developers in fixing these findings and coaches them if needed. He continuously updates coding guidelines and the corresponding analysis configurations. Using commit-driven analyses, the Quality Engineer gains a much clearer understanding of when a finding was introduced and even by whom. This transparency makes the role much more effective than in the past.
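The iteration-focused triage described above can be sketched as a simple filter over all findings. This is a hypothetical helper with an assumed finding shape (`file` and `check` fields), shown only to illustrate the idea of restricting attention to recently modified code:

```python
def iteration_triage(findings, changed_files):
    """Keep only findings in files touched during the last iteration,
    grouped by file so the Quality Engineer can review them per module."""
    by_file = {}
    for f in findings:
        if f["file"] in changed_files:
            by_file.setdefault(f["file"], []).append(f)
    return by_file

# Usage: findings in untouched legacy code are deliberately left out of the review.
findings = [
    {"file": "core.py", "check": "clone"},
    {"file": "legacy.py", "check": "long-method"},
    {"file": "core.py", "check": "naming"},
]
triaged = iteration_triage(findings, {"core.py"})
```

The point of the filter is pragmatic: findings in code nobody is currently touching carry a high fixing risk and low payoff, so the review effort concentrates where the team is already working.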
Published at DZone with permission of Nils Göde, DZone MVB.