Continuous Integration, Delivery and Deployment: Key Differences
You've heard of continuous integration, delivery, and deployment. Make sure you understand the core of these three essential practices in modern software development.
Continuous Integration, Continuous Delivery, and Continuous Deployment are key modern practices in software development. These techniques help us reduce integration problems, automate quality assessment and make deployments much more predictable and less error-prone, allowing us to deploy easily and frequently. But… Do you really know the differences between the three?
This article reviews these three techniques and explains their differences. For further details and explanatory diagrams, please refer to the original article: Continuous Integration, Delivery, and Deployment: Key Differences.
The concept of Continuous Integration (CI) was originally proposed by Grady Booch in 1991 and later integrated into Extreme Programming. From then on, especially thanks to the Agile Software Development movement (as well as DevOps culture), the technique has been widely adopted in the industry.
The core proposition of Continuous Integration is closely tied to Version Control Systems (VCS). It was originally described as a simple technique: integrating developers’ work (their working copies or branches) into the mainline (trunk in Subversion, or the master branch in Git) at least once a day.
The idea behind these daily integrations to the mainline is to reduce integration problems, which are usually caused by the complexity of merging the work of developers who have been working in isolation for a while. By integrating daily, or after each commit, the complexity of the merge process is drastically reduced.
Extended Continuous Integration
Although the core proposition of CI, described above, has value in itself, the definition of Continuous Integration was soon broadened. Nowadays, it often implies the existence of a CI server (such as Jenkins) that executes a Continuous Integration pipeline whenever a new change reaches the mainline of the VCS. These pipelines are composed of several stages, executed sequentially on every integration. Their goal, as Martin Fowler states, is to verify every integration with an automated build, to detect integration errors as quickly as possible. These builds include at least the compilation of the source code and the execution of unit tests, although in many cases other stages, such as packaging, integration or end-to-end tests, and static code analysis, are also included.
In this extended definition, the pipeline can either finish successfully or fail at any of its stages (tests failing, static analysis not passing a defined threshold, and so on). Usually, in the event of a failure, a notification email is sent to the person who created the last commit (probably the one causing the failure). In such cases, in order to really get the benefits of Continuous Integration, it should be a priority for the team to keep the build status green (passing) instead of red (failing), fixing any problem as soon as it is detected (ideally in less than 10 minutes).
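The sequential, fail-fast behavior described above can be sketched in a few lines. This is a minimal illustration, not tied to any particular CI server; the stage names and the notification step are hypothetical:

```python
# Minimal sketch of a CI pipeline runner: stages run in order, and the build
# stops at the first failure so the team can be notified immediately.

def run_pipeline(stages):
    """Run each (name, check) stage sequentially.

    Returns (True, None) for a green build, or (False, stage_name)
    identifying the first failing stage of a red build.
    """
    for name, check in stages:
        if not check():
            return False, name  # red build: stop and report the failing stage
    return True, None  # green build


if __name__ == "__main__":
    # Illustrative stages; in a real pipeline each would invoke build tooling.
    stages = [
        ("compile", lambda: True),
        ("unit-tests", lambda: True),
        ("static-analysis", lambda: False),  # e.g. quality threshold not met
        ("package", lambda: True),           # never reached after a failure
    ]
    ok, failed_stage = run_pipeline(stages)
    if not ok:
        print(f"Build failed at stage: {failed_stage} -- notifying last committer")
```

The key design point is that later stages never run after a failure, which keeps feedback fast and makes the failing stage unambiguous.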
Following this approach, automated builds with several checks are run against our code every time we integrate changes, allowing us to detect any issues at an early stage. This quick detection of problems makes fixing them much cheaper than in traditional approaches. When these automated builds are not in place, problems tend to be detected much later, often during QA or deployment phases, and many times in production, after the new version has been deployed. By then, days, weeks or even months have passed and developers have switched context, making it difficult for them to remember what they were working on when the problem was introduced. Moreover, the higher pressure of deployment deadlines at those later stages usually leads to poorer solutions that not only hinder code quality (reducing its readability and maintainability) but also tend to introduce new bugs.
Continuous Delivery vs Continuous Deployment
Once Continuous Integration is in place, if we decide to continue improving our processes, the next step is automating deployments to production, making them faster and safer. Here is where the two remaining techniques, Continuous Delivery and Continuous Deployment, subtly differ.
Continuous Delivery was described by Jez Humble as “the ability to get changes of all types […] into production, or into the hands of users, safely and quickly in a sustainable way”. Continuous Delivery does not necessarily involve deploying to production on every change. We just need to ensure that our code is always in a deployable state, so we can deploy it easily whenever we want. Continuous Deployment, on the other hand, requires every change to be deployed automatically, without human intervention.
Thus, if we enhance the pipeline so that, once the Continuous Integration stages are completed, the new artifact is automatically deployed to production, we are talking about Continuous Deployment. If, instead, we automate everything but require a human approval to proceed with the deployment of the new version, we are talking about Continuous Delivery. The difference is subtle, but it has huge implications, making each technique appropriate for different situations.
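The distinction can be made concrete with a small sketch: the only difference between the two techniques is whether a human approval gate sits between a green pipeline and the production deploy. The function and mode names below are illustrative, not part of any standard API:

```python
# Sketch of the release step after a CI pipeline finishes.
# mode="deployment": every green build is deployed automatically.
# mode="delivery":   every green build is deployable, but a human must approve.

def release(change, ci_passed, mode, approved=False):
    """Decide what happens to a change once the CI pipeline has run."""
    if not ci_passed:
        return "blocked: fix the build first"
    if mode == "deployment":
        # Continuous Deployment: no human intervention.
        return f"deployed {change} automatically"
    if mode == "delivery":
        # Continuous Delivery: deployable at any time, released on approval.
        if approved:
            return f"deployed {change} after approval"
        return f"{change} is deployable, awaiting approval"
    raise ValueError(f"unknown mode: {mode}")
```

For example, `release("v1.2", True, "deployment")` ships immediately, while `release("v1.2", True, "delivery")` holds the deployable artifact until someone passes `approved=True`.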
When Is Continuous Deployment Recommended? When Should We Opt for Continuous Delivery?
In general, Continuous Deployment is great for B2C products, since as consumers we are used to the constant change of software products and usually absorb frequent updates without major problems. In fact, consumer companies such as Facebook or Netflix follow this approach, deploying small changes to production several times a day.
However, in B2B products, as well as in government projects, it is often necessary to keep human control over deployments to production. In these cases, our changes may affect people and processes in other companies or departments, so it is important to announce release dates with enough notice that everybody can update their processes, learn to use the new features we are about to release, or even adapt their software to our API changes. In this context, applying Continuous Deployment (automatically deploying every change to production) could make other software crash, prevent people from doing their job, or even lead to economic and legal issues. That is why, in these cases where we have to set a fixed deploy date, Continuous Delivery is the technique of choice. Following this approach, we can still automate the whole process, but a human controls when deployments to production are executed, and thus when the new version is released.
As mentioned above, these processes are key elements in modern-day software development and provide a significant competitive advantage to the software companies applying them. Depending on the software being developed and how it is used, we may not be able to opt for Continuous Deployment, with Continuous Delivery being the alternative of choice. In any case, Continuous Integration is the essential practice that serves as a basis for the other two, making it the natural place to start.
For further details and explanatory diagrams, please refer to the article Continuous Integration, Delivery, and Deployment: Key Differences.
Published at DZone with permission of Romen Rodriguez-Gil . See the original article here.