Continuous Delivery - TW Live 2011
In this issue, we look at a number of these techniques. Jez Humble, author of "Continuous Delivery", writes about the impediment to rapid deployment that can result from over-complex change management processes. Martin Fowler and Mike Mason discuss Feature Branching, a common development discipline that dramatically reduces the ability to create always-ready-to-release software. Steve Morgan reports on TW Live 2011, the first ThoughtWorks customer conference focused on Continuous Delivery. Lastly, we present a case study illustrating how we implemented an automated deployment pipeline for a global retailer.
The Client: Global retailer
A global retailer engaged ThoughtWorks to overhaul its software deployment pipeline. Within weeks, significant reliability improvements began, and within just a matter of months, releases were predictable and frequent, enabling the business to achieve a key strategic goal.
Changes
The transformation began with the introduction of Continuous Integration and with leveraging automation for both testing and environment configuration. We modularized multiple large, monolithic builds that sometimes took days to complete, or to uncover problems, into five smaller builds that follow a logical, pre-determined order. These smaller stages (Commit, Assemble, Package, Deployment, and Regression) have meant faster, less risky builds. The technical staff is better able to divide responsibilities for maintaining successful builds, increasing ownership across the IT organization, and the root cause of a failure is easier to pinpoint, correct, and prevent.
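The flow below is a minimal sketch of such a staged pipeline runner, written in Python and assuming each stage is wrapped in its own shell script; the stage names come from the case study, while the script names and commands are illustrative only.

import subprocess
import sys

# Stage names from the case study; each is assumed to be wrapped in a
# shell script of the same name (an illustrative convention, not the
# client's actual tooling).
STAGES = ["commit", "assemble", "package", "deployment", "regression"]

def run_stage(name):
    """Run one stage and report whether it succeeded."""
    result = subprocess.run(["./{}.sh".format(name)])
    return result.returncode == 0

def run_pipeline():
    for stage in STAGES:
        print("Running stage: {}".format(stage))
        if not run_stage(stage):
            # Stopping at the first failed stage keeps the root cause easy
            # to pinpoint: only this stage needs investigating.
            print("Stage '{}' failed; later stages skipped.".format(stage))
            sys.exit(1)
    print("All stages passed; the build is ready to release.")

if __name__ == "__main__":
    run_pipeline()

Because each stage owns a narrow slice of the work, a red build points directly at the team responsible for that slice.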
Scripts now automate environment and configuration management, speeding these complicated and lengthy processes, removing errors, and ensuring consistency. Virtualization is a key component of ongoing improvements, allowing any application to be dynamically provisioned, configured, and made available for testing based on the specific code modules being changed.
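As an illustration, the sketch below shows one way scripted configuration management can work, generating every environment's configuration from a single template; the environment names, file names, and keys are hypothetical and not the client's actual setup.

from string import Template

# Hypothetical per-environment values; in practice these would live in
# version-controlled configuration data.
ENVIRONMENTS = {
    "qa":   {"db_host": "qa-db.internal",   "threads": "4"},
    "prod": {"db_host": "prod-db.internal", "threads": "16"},
}

# One template for all environments, so every environment is configured
# the same way and drift or hand-edited errors are eliminated.
CONFIG_TEMPLATE = Template(
    "db.host=$db_host\n"
    "worker.threads=$threads\n"
)

def write_config(env):
    """Render the configuration file for one environment."""
    with open("app-{}.properties".format(env), "w") as f:
        f.write(CONFIG_TEMPLATE.substitute(ENVIRONMENTS[env]))

for env in ENVIRONMENTS:
    write_config(env)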
In addition to infrastructure and code integration, testing is now automated to a much higher degree than before, and the infrastructure, not just the code, is part of what gets tested. Virtualization allows the building and testing of components to be parallelized to a great extent. The pipeline has also become more intelligent: when a stage completes successfully, it communicates the result to downstream stages, which may trigger further testing, or cut short needless testing when a problem has already been raised.
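The snippet below sketches change-aware test selection of this kind, assuming the pipeline knows which modules a commit touched; the module and suite names are invented for the example.

# Hypothetical mapping from code modules to the regression suites that
# exercise them.
TEST_SUITES_BY_MODULE = {
    "checkout":  ["checkout_regression", "payment_regression"],
    "catalogue": ["catalogue_regression"],
    "pricing":   ["pricing_regression", "checkout_regression"],
}

def suites_to_run(changed_modules):
    """Return only the suites affected by the change, cutting short
    needless testing for modules that were not touched."""
    selected = set()
    for module in changed_modules:
        selected.update(TEST_SUITES_BY_MODULE.get(module, []))
    return sorted(selected)

print(suites_to_run(["pricing"]))
# -> ['checkout_regression', 'pricing_regression']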
Every night, an automatic deploy process kicks off for all applications. Regression test suites are triggered and the results are distributed and displayed on dashboards available not only to IT but to business representatives. And because the infrastructure itself is part of the test suites, the client is assured that a successful deployment test means the code can be delivered into production with no errors, no conflicts – no surprises. Everyone knows what is deployable, and what is not.
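A nightly job along these lines might look like the following sketch, assuming one deploy script and one regression suite per application; the application names, script names, and dashboard file are illustrative assumptions, not the client's actual system.

import json
import subprocess

# Hypothetical list of applications covered by the nightly run.
APPLICATIONS = ["storefront", "inventory", "pricing"]

def deploy_and_test(app):
    """Deploy the application to the test environment, then run its
    regression suite against the freshly deployed infrastructure."""
    deployed = subprocess.run(["./deploy.sh", app]).returncode == 0
    tested = deployed and subprocess.run(["./regression.sh", app]).returncode == 0
    return {"application": app, "deployable": tested}

# Write results to a file that a dashboard can display to IT and to
# business representatives, so everyone knows what is deployable.
results = [deploy_and_test(app) for app in APPLICATIONS]
with open("dashboard.json", "w") as f:
    json.dump(results, f, indent=2)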
Outcomes
Within a few months of the start of the engagement, the client began to realize benefits. It was able to launch a new brand online in a timeframe that would have been impossible to achieve before. In a few more months, the release cycle was cut from yearly to monthly with increased quality, and production rollbacks, once typical, have become a rare exception.
The business now has confidence that changes can be released with a drastically shorter lead time. Overall value from IT has increased as timeframes have decreased. Going forward, advanced dynamic virtualization techniques will cut the testing cycle time by a further 50 percent or more, bringing the client close to a true continuous delivery capability.
From http://www.thoughtworks.com/perspectives/30-06-2011-continuous-delivery