Introducing Continuous Delivery
[Editor's note: This article is featured in DZone's 2014 Guide to Continuous Delivery.]
“Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.” - The Agile Manifesto, First Principle
Since the publication of the seminal book Continuous Delivery by Dave Farley and Jez Humble in 2010, Continuous Delivery has become a widely discussed topic within the IT industry and an essential competitive advantage for technology companies such as Etsy, Facebook, and Netflix. But where did Continuous Delivery come from, what does it offer, and how does it work?
Beyond Continuous Integration
When Kent Beck published the inaugural eXtreme Programming paper in 1998, his proposal that developers should “integrate and test several times a day” was revolutionary. Enshrined in XP as the Integrate Often rule, the frequent integration of mainline code allows developers to rapidly discover integration problems and reduce development costs. Frequent integration has proven so successful over time that it is now a mainstream development practice known as Continuous Integration. However, since Continuous Integration is focused on development, it can only benefit a fraction of the end-to-end release process, which in the majority of IT organizations remains a high-risk, labor-intensive affair.
Such a release process will likely involve extensive documentation, overnight scheduling, unversioned configuration management, ad hoc server management, and large numbers of participants. In this situation, software releases inevitably become high-cost, high-risk events susceptible to human error, and given prominent failures such as the Knight Capital $440 million glitch [1], there can be an understandable reluctance to release software frequently. However, there is always an opportunity cost associated with not delivering software, as highlighted by recent reports of Microsoft’s decade of e-book/smartphone opportunity costs [2]. This poses a seemingly intractable business problem for many organizations – how can the risk of delivering software be reduced, while new features are simultaneously delivered to customers faster?
Continuous Delivery
Inspired by the Agile Manifesto stating “our highest priority is to satisfy the customer through early and continuous delivery of valuable software,” Continuous Delivery is a method that advocates the creation of an automated deployment pipeline to release software rapidly and reliably into production. The goal of Continuous Delivery is to adopt a holistic end-to-end delivery perspective and optimize cycle time – the average time between production releases – so that development costs are lowered, the risk of release failure is minimized, and customer feedback loops are shortened. The result is an automated release workflow.
In order to guide Continuous Delivery adoption within an organization, Dave Farley and Jez Humble defined the following principles:
- Repeatable, Reliable Process: use the same deterministic release mechanism in all environments.
- Automate Almost Everything: automate acceptance testing, deployment tasks, configuration management, etc.
- Keep Everything In Version Control: store all code, configuration, schemas, etc. in source control.
- Bring Pain Forward: shrink feedback loops for time-consuming, error-prone operations.
- Build Quality In: fix defects in development as soon as they occur.
- Done Means Released: do not consider features complete until released to production.
- Everybody Is Responsible: align teams and individuals with the release process.
- Continuous Improvement: continuously improve the people and technology involved.
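To make the Repeatable, Reliable Process and Keep Everything In Version Control principles concrete, here is a minimal Python sketch of a single deterministic deploy routine driven entirely by configuration held in source control. Everything in it – the config/<env>.json layout and the deploy_app function – is an illustrative assumption, not something prescribed by Farley and Humble.

```python
# Sketch: one deterministic release mechanism for every environment,
# parameterized only by version-controlled configuration. All names here
# (config/<env>.json, deploy_app) are illustrative assumptions.
import json
import sys

def load_config(environment: str) -> dict:
    """Read environment settings from a file kept in source control,
    never from hand-edited state on the target servers."""
    with open(f"config/{environment}.json") as f:
        return json.load(f)

def deploy_app(version: str, config: dict) -> None:
    """The same release steps run in every environment; only the
    version-controlled configuration differs."""
    print(f"Deploying version {version} to {config['host']}:{config['port']}")
    # ...fetch the immutable binary for `version`, push it to the target,
    # apply the configuration, restart the service, smoke test...

if __name__ == "__main__":
    env, version = sys.argv[1], sys.argv[2]  # e.g. staging 1.4.2
    deploy_app(version, load_config(env))
```

Because the same script runs against development, test, and production environments, every deployment becomes a rehearsal of the production release.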
These principles are promoted by the deployment pipeline pattern, which Dave Farley and Jez Humble have described as “Continuous Integration taken to its logical conclusion” [3] and which lies at the heart of Continuous Delivery. A deployment pipeline is an automated implementation of the build/deploy/test/release cycle that enables self-service releases of any application version into any environment. A diagram of a typical deployment pipeline can be found in a free online chapter of Continuous Delivery [4]. A real-world pipeline, however, will be specialized to an organization’s particular software delivery requirements.
In the basic deployment pipeline pattern, the commit stage is triggered by a source code or configuration change. This stage includes code compilation, unit tests, static analysis checks, and assembly of the application binaries for the binary repository. A successful run automatically triggers the acceptance stage, which runs the automated acceptance tests against that application version. If the acceptance tests pass, that application version can progress to manual exploratory testing, automated capacity testing, and production usage. This closely follows the deployment pipeline best practices (a brief code sketch follows the list):
- Build Your Binaries Only Once: create immutable binaries to eliminate recompilation errors.
- Deploy The Same Way: use the same automated release mechanism in each stage.
- Smoke Test Deployments: assert deployment success prior to usage.
- Deploy Into Production Copy: create a production-like test environment for testing.
- Instant Propagation: make an application version automatically available to the next stage upon success in the previous stage.
- Stop The Line: when an application version fails at any stage, automatically stop its progress through the pipeline.
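As a rough illustration of the pattern and practices above, the Python sketch below chains a commit stage and an acceptance stage, propagates a successful version forward automatically (Instant Propagation), and halts on the first failure (Stop The Line). The stage functions are stand-ins for real build and test tooling, not an implementation from the book.

```python
# Sketch of the basic deployment pipeline pattern: stages run in order,
# success propagates a version forward automatically, failure stops the
# line. Stage internals are illustrative placeholders.
from typing import Callable, List, Tuple

Stage = Tuple[str, Callable[[str], bool]]  # (stage name, stage runner)

def commit_stage(version: str) -> bool:
    """Compile, run unit tests and static analysis, build binaries once."""
    print(f"[commit] compiling and unit-testing {version}")
    return True  # stand-in for the real build result

def acceptance_stage(version: str) -> bool:
    """Deploy the binaries the same way as production, smoke test the
    deployment, then run the automated acceptance tests."""
    print(f"[acceptance] smoke testing and acceptance testing {version}")
    return True  # stand-in for the real test result

def run_pipeline(version: str, stages: List[Stage]) -> bool:
    for name, run in stages:
        if not run(version):
            # Stop The Line: a failed stage halts this version's progress.
            print(f"{version} failed the {name} stage; stopping the line")
            return False
        # Instant Propagation: success automatically triggers the next stage.
    print(f"{version} is ready for exploratory testing, capacity testing, and release")
    return True

if __name__ == "__main__":
    run_pipeline("1.4.2", [("commit", commit_stage),
                           ("acceptance", acceptance_stage)])
```

In a real pipeline the stage runners would invoke build servers and deployment tooling; the essential property is that an application version only advances when the previous stage succeeds.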
The creation of a deployment pipeline establishes a pull-based release mechanism that reduces development costs, minimizes the risk of release failure, and allows the production release process to be rehearsed thousands of times. It provides visibility into the production readiness of different application versions at any point in time, and drives continuous improvement of the release process by identifying bottlenecks in the system.
Organizational Change and DevOps
While a deployment pipeline reduces costs and risk, the reality is that Continuous Delivery is vulnerable to Conway’s Law [5] and dependent upon organizational change to achieve a significant reduction in cycle time. In the majority of organizations, a deployment pipeline will be used by non-aligned siloed teams, meaning that lead times will be substantially inflated by handover delays between teams regardless of pipeline execution time.
Dave Farley and Jez Humble have repeatedly warned that “where the delivery process is divided between different groups… the cost of coordination between these silos can be enormous,” [3] and this is reflected in the Everybody Is Responsible and Done Means Released principles listed above. Testing and operational tasks must become intrinsic development activities rather than discrete work phases, and the people involved must be integrated into the product development team in what is often a slow and painstaking process of change. It is for this reason that the parallel growth of the DevOps philosophy has been welcomed by the Continuous Delivery community.
DevOps has been defined by Damon Edwards as “aligning development and operations roles and processes in the context of shared business objectives,” [6] and aims to increase communication and collaboration between IT divisions such as development and operations. Continuous Delivery and DevOps have evolved independently but are complementary – a deployment pipeline can act as a focal point for DevOps collaboration, and the DevOps integration of Agile principles with operations practices can eliminate the handover delays between development and operations teams.
Conclusion
While Continuous Integration has become a mainstream development practice, Continuous Delivery goes much further and is poised to become a mainstream IT practice. By creating an automated deployment pipeline, an organization gains a repeatable and reliable delivery mechanism that enables it to increase product revenues by releasing new features to customers more frequently without fear of failure. However, to truly succeed, Continuous Delivery relies far more on organizational change than on technological change.
[1] http://money.cnn.com/2012/08/09/technology/knight-expensive-computer-bug
[2] http://www.vanityfair.com/business/2012/08/microsoft-lost-mojo-steve-ballmer
[3] http://www.amazon.com/dp/0321601912
[4] http://ptgmedia.pearsoncmg.com/images/chap5_9780321601919/elementLinks/fig5_4.jpg
[5] http://www.melconway.com/Home/Conways_Law.html
[6] http://dev2ops.org/2010/02/what-is-devops/
Bio:
Steve Smith is an Agile consultant and Continuous Delivery specialist at Always Agile Consulting Ltd. Steve is a regular speaker at Skills Matter for the London Continuous Delivery group, and has spoken at conferences such as Agile Horizons and QCon New York about Continuous Delivery.