Why Continuous Integration Isn't Improving Software (Yet)
Research shows that developers aren't doing continuous integration right, focusing on code quality at the wrong stages of development.
The University of Zurich’s study, Continuous Code Quality: Are We (Really) Doing That?, quantifies developer practices to understand how continuous code quality is applied to projects using continuous integration. Their research reveals a strong dichotomy between theory and practice: developers do not perform continuous inspection but instead only control software quality at the end of a sprint, and even then, only on the release branch.
Continuous code quality, also known as continuous inspection, is a core principle of continuous integration: automated testing, automated code inspection, and static/dynamic code analysis are run at every build as a way to continuously improve code quality.
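To make the "at every build" part concrete, here is a minimal sketch of a per-build quality gate. The metric names and thresholds are hypothetical; a real pipeline would get these numbers from a tool such as SonarCloud and fail the build when the gate is violated.

```python
# Hypothetical per-build quality gate: compare metrics reported by a
# static-analysis step against thresholds and fail the build on violations.

def run_quality_gate(metrics, thresholds):
    """Return the names of all metrics that exceed their threshold."""
    return [
        name for name, limit in thresholds.items()
        if metrics.get(name, 0) > limit
    ]

# Metrics a static-analysis step might report for one build (illustrative).
build_metrics = {"new_bugs": 2, "duplicated_lines_pct": 1.5, "code_smells": 7}
thresholds = {"new_bugs": 0, "duplicated_lines_pct": 3.0, "code_smells": 10}

violations = run_quality_gate(build_metrics, thresholds)
if violations:
    print("Quality gate failed:", violations)  # CI would block the merge here
else:
    print("Quality gate passed")
```

The point of running this gate on every build, not every 18th, is that a violation is caught while the offending change is still small and fresh in the author's mind.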
The University of Zurich’s findings show that development projects using continuous integration do not continuously inspect the source code: only 11% of builds are checked. Moreover, large systems with many contributors are even less likely to perform continuous code quality inspection. The researchers note that developers do perform code quality inspections, but only after several builds (on average, every 18 builds) and, most likely, at the end of a sprint. Additionally, inspection on branches is exceptionally low (only 36% of branches are checked), meaning that most branches are developed without formal quality control.
The complete study, which analyzed 148,734 builds across 5 years of change history in 119 projects using SonarCloud and Travis CI, is available here.
Continuous integration is extremely effective at reducing the time between discovering and fixing coding issues. Better adoption of continuous inspection is clearly crucial to realizing the main advantages of CI: improving software quality and reducing software risk. But continuous integration efforts often fail for three primary reasons:
Inconsistency in following DevOps and Agile principles.
CI implementations often ignore technical debt.
CI implementations often ignore system-level analysis.
Continuous Integration: Practicing What We Preach
The Zurich research team is not the only group taking a closer look at how teams are using sound software engineering principles. A team at North Carolina State University and IBM examined 250 projects to better understand how Agile practices affect the software produced using this development method.
In the study, “Doing” Agile versus “Being” Agile, the North Carolina State University research team mined 150 open-source software (OSS) and 123 proprietary projects to investigate how continuous integration (CI) influences software quality, specifically bug and issue resolution.
Key findings of the study demonstrate that CI adoption generates very different outcomes for OSS projects versus proprietary projects. The data showed that OSS projects resolve more issues and fix bugs five times faster; the adoption of CI in proprietary projects, however, has no influence on issue resolution. Additionally, commit frequency increases significantly for OSS projects, but not for proprietary projects. This suggests that developers working on proprietary projects are not leveraging CI for rapid feedback; as a result, neither commit frequency nor the number of closed bugs and issues increases significantly.
The authors provide several explanations for the different outcomes between OSS projects and proprietary projects, most of which focus on cultural issues. In sum, their position is that delivery teams should not adopt tools from another community without also adopting the practices associated with those tools.
The Zurich team provides an alternative perspective in Context Is King: The Developer Perspective on the Usage of Static Analysis Tools. They find that while developers use code quality tools in fully automated environments, there are obstacles that prevent them from using these tools more consistently and more effectively.
One observation is that developers handle warnings from automated software quality tools differently in different contexts. They treat warnings raised during local programming, code review, and continuous integration differently because they have a clear idea of which warnings matter most in each context. When programming in the IDE, they attend to warnings about code structure and logic; when performing code reviews, they mainly look at style conventions and redundancies; during CI, they focus on error handling, code logic, and concurrency.
The Zurich team found that there are better ways to assist developers by improving existing warning selection and prioritization strategies. They propose that context and severity matter to delivery teams when deciding what to fix; however, current tools are unaware of how the development team is trying to use them and do not provide holistic, application-level insight to effectively discern severity.
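A context-aware tool could rank the same warning list differently depending on where the developer is in the workflow. The sketch below illustrates the idea; the categories, weights, and warning records are invented for illustration and are not taken from any real analysis tool.

```python
# Illustrative context-aware warning prioritization. The weight table
# loosely mirrors the contexts described above (IDE, code review, CI);
# all numbers are hypothetical.

CONTEXT_WEIGHTS = {
    "ide":         {"structure": 3, "logic": 3, "style": 1},
    "code_review": {"style": 3, "redundancy": 3, "structure": 1},
    "ci":          {"error_handling": 3, "concurrency": 3, "style": 1},
}

def prioritize(warnings, context):
    """Order warnings by (context weight x severity), highest first."""
    weights = CONTEXT_WEIGHTS.get(context, {})
    return sorted(
        warnings,
        key=lambda w: weights.get(w["category"], 1) * w["severity"],
        reverse=True,
    )

warnings = [
    {"id": "W1", "category": "style", "severity": 2},
    {"id": "W2", "category": "concurrency", "severity": 2},
]
# In the CI context, the concurrency warning outranks the style warning,
# even though both carry the same raw severity.
ranked = prioritize(warnings, "ci")
```

The design choice here is that severity alone never determines the ordering; the same warning list would surface W1 first during code review, which is the kind of context sensitivity the Zurich team argues current tools lack.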
Does Automation Promote Sound Engineering Practices in Continuous Integration?
Based on the insight from these studies, automation can promote sound software engineering principles in certain situations. Delivery team outcomes are improved by Agile and DevOps practices in communities and cultures that already value ‘good software.’ The data indicate that in proprietary environments, speed and the perception of doing Agile and DevOps supersede a commitment to delivering better software.
But what about the organization? Does the investment in Agile and DevOps improve business outcomes? Based on the behavior of delivery teams in CI, the impact on the business is still limited. Currently, the outcome of Agile and DevOps practices is focused on developer efficiency. The use of continuous code quality and refactoring is concentrated on improving readability, code hygiene, and passing tests. There are cultural, emotional, and technical obstacles that prevent the business from fully benefiting from these practices.
The book Accelerate, published by DORA, emphasizes this point:
“…medium performers do worse than low performers on Change Fail Rates. Medium performers spend more time on unplanned rework than low performers – because they report spending a greater proportion of time on new work…occurring at the expense of ignoring critical rework, thus racking up technical debt which in turn leads to more fragile systems and, therefore, a higher change fail rate.”
Essentially, teams adopting DevOps practices are more focused on automation and process changes than on ensuring they deliver better software. It is not until teams accumulate significant technical debt that they attempt to pay it back.
I agree with the Zurich team’s suggestion that context matters. Providing code quality feedback and guidance to developers on the specific files they are working on is essential; it’s like spell check in Word. But what’s missing is context on how that code affects, and is affected by, other code.
Continuous Integration Context Matters: Contextual Software Analysis
Understanding how components interact across programming languages, application layers, and databases greatly increases a developer's understanding of the impact of local changes. Studies have demonstrated that system-level vulnerabilities are significant, as Cutter Consortium explains in Mitigate Business Risk and Unlock Software Potential with Contextual Software Analysis:
“Typically, in deployment, there are numerically fewer system-level defects than single-component defects. However, the complexity involved in finding, understanding, and then fixing system-level problems is much higher. For instance, results from a study by Zude Li et al. suggest that defects involving several components “account for approximately 30% of all defects but require 60% of corrective changes and over 80% of the correction effort.” Moreover, severe system problems in deployed systems — security breaches or system outages, for instance — are generally caused by system-level flaws, not component-level problems. As you might have guessed, it turns out that more business risk is involved in system-level problems than unit-level problems.”
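One concrete form of contextual, system-level analysis is impact analysis over a component dependency graph: given a change to one component, find everything that transitively depends on it. The sketch below uses a hand-built, hypothetical three-layer system; real tools would extract this graph from build metadata or code scanning.

```python
# Illustrative system-level impact analysis. Edges in `dependencies`
# point from a component to the components it depends on; the graph
# and component names are hypothetical.
from collections import deque

def impacted_components(dependencies, changed):
    """Return all components that transitively depend on `changed`."""
    # Invert the graph: for each component, who depends on it.
    dependents = {}
    for comp, deps in dependencies.items():
        for dep in deps:
            dependents.setdefault(dep, set()).add(comp)
    # Breadth-first walk upward from the changed component.
    impacted, queue = set(), deque([changed])
    while queue:
        node = queue.popleft()
        for parent in dependents.get(node, ()):
            if parent not in impacted:
                impacted.add(parent)
                queue.append(parent)
    return impacted

# Hypothetical system: a UI calls a service, which reads a table that
# a reporting job also reads.
graph = {
    "ui": {"order_service"},
    "order_service": {"orders_table"},
    "reporting": {"orders_table"},
}
# A schema change to orders_table ripples into every layer above it.
affected = impacted_components(graph, "orders_table")
# -> {"order_service", "reporting", "ui"}
```

This is exactly the information a file-local "spell check" view cannot provide: the changed file may be clean in isolation while still breaking two components the developer never opened.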
One obstacle preventing organizations from addressing this issue is that current DevOps doctrine sidesteps detailed guidance on how architecture, systems integration, and testing fit into the process. Perhaps it’s related to Martin Fowler’s observation that DevOps is focused on IT delivery and speed.
As DevOps practices evolve and scale to include larger, older systems, teams will need to consider how to incorporate system-level analysis into an automated practice. This is not to say that no organizations are doing this today; some are. But they are also making conscious decisions to disrupt release cycles, to treat system-level issues as important, and to measure characteristics of the software itself with the goal of understanding and managing software-related risk.