The Continuous Delivery Toolchain
Unless your software is very simple, no single tool, automation product, or deployment pipeline implementation will provide you with continuous delivery. Effective continuous delivery requires an organizational understanding of the intent and purposes of the activities you undertake, not merely the automation of those activities. However, continuous delivery is impossible without some core capabilities that tools provide. Each section below will examine a link in the core continuous delivery toolchain, with examples from each tool category in the boxed sections.
Orchestration and Deployment Pipeline Visualization
Orchestration tools are the backbone of any CD system. They allow teams to build an effective sequence of deployment pipeline steps by integrating with their entire toolchain. These tools can also provide visualization utilities, which are important for enabling the full involvement of stakeholders from all departments. For pipeline orchestration and visualization, you can use a dedicated deployment pipeline tool or an application release automation (ARA) solution. Whichever direction you take for your orchestration tool, be sure that it helps your team detect and expose delays at each stage of the pipeline, including wait times between stages. Visualization and orchestration working in tandem allow teams to quickly identify the places they should optimize first.
Dedicated deployment pipeline tools: Jenkins (with plugins or through CloudBees), ThoughtWorks Go, Atlassian Bamboo
ARA: ElectricCommander, CA LISA, IBM UrbanCode, XebiaLabs XL
Orchestration engines: MaestroDev, CollabNet
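As a thumbnail of what an orchestrator does, the sketch below runs pipeline stages in sequence and records both the duration of each stage and the wait time before it started. The stage names and callables are hypothetical; a real orchestrator adds parallelism, retries, and visualization on top of this core loop.

```python
import time

def run_pipeline(stages):
    """Run (name, action) stages in order, recording per-stage duration
    and the wait time between stages -- the delays a good orchestrator
    should expose. Names and actions here are illustrative."""
    timings = []
    previous_end = time.monotonic()
    for name, action in stages:
        start = time.monotonic()
        wait = start - previous_end      # idle time between stages
        action()
        end = time.monotonic()
        timings.append({"stage": name,
                        "wait_s": round(wait, 3),
                        "duration_s": round(end - start, 3)})
        previous_end = end
    return timings

# Three trivial stand-in stages
report = run_pipeline([
    ("build",  lambda: time.sleep(0.01)),
    ("test",   lambda: time.sleep(0.01)),
    ("deploy", lambda: None),
])
for row in report:
    print(row["stage"], row["duration_s"], row["wait_s"])
```

Feeding the resulting timings into a dashboard is what lets stakeholders spot which stage (or which gap between stages) to optimize first.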
Version Control, CI, and Code Metrics
Most software development teams use a version control system for the files they consider source code. However, many organizations forget to include configuration files, such as the configuration that defines the build and release system. All text-based assets should be stored in a version control system that everyone can easily access. Code changes should be very easy to review, line by line (ideally in a web browser), with a pull or merge request.
Version control: Git, Mercurial, Perforce, Subversion, TFS
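One habit that pairs with keeping all text assets in version control is auditing which files actually belong there. The filter below is a minimal sketch; the file extensions are assumptions about what counts as a configuration asset, and a real team would enforce the rule in a pre-commit hook or a CI check rather than a standalone script.

```python
# Suffixes that suggest a text-based configuration asset (illustrative list)
TEXT_ASSET_SUFFIXES = {".yml", ".yaml", ".json", ".conf", ".ini", ".sh", ".tf"}

def versionable_assets(paths):
    """Return the paths that look like configuration files and therefore
    belong in version control alongside the source code."""
    return [p for p in paths
            if any(p.endswith(suffix) for suffix in TEXT_ASSET_SUFFIXES)]

print(versionable_assets(
    ["app.py", "deploy/pipeline.yml", "build.conf", "logo.png"]))
```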
CI tools can support orchestration and visualization, but their core functionality is to integrate new code with the stable main release line and alert stakeholders if any new code would cause issues with the final product. This makes it easy for teams to combine work on different features while keeping a master code branch ready for release. It should feel natural to integrate many times a day. Teams should also connect a code metrics and profiling utility that can stop integrations if certain metrics reach an undesirable threshold.
CI: Jenkins, Travis CI, ThoughtWorks Go, CircleCI, JetBrains TeamCity, Atlassian Bamboo
Code metrics: SonarQube, SLOC (and variants), SciTools Understand
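The metrics gate that stops an integration can be as simple as comparing reported values against thresholds and blocking on any violation. The metric names and thresholds below are illustrative, not SonarQube's actual keys:

```python
def metrics_gate(metrics, thresholds):
    """Return the metrics that exceed their threshold; a non-empty result
    means the integration should be blocked. Plain dicts in, plain dict
    out -- metric names here are made up for the example."""
    return {name: value
            for name, value in metrics.items()
            if name in thresholds and value > thresholds[name]}

violations = metrics_gate(
    {"duplicated_lines_pct": 7.5, "cyclomatic_complexity": 9},
    {"duplicated_lines_pct": 5.0, "cyclomatic_complexity": 15},
)
if violations:
    print("blocking integration:", violations)
```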
Artifact Management
Packaged artifacts, rather than the application’s raw source code, are the focus of deployment pipelines. Artifacts are assembled pieces of an application that include packaged application code, application assets, infrastructure code, virtual machine images, and (typically) configuration data. Artifacts are identifiable (unique name), versioned (preferably with semantic versioning), and immutable (we never edit them). Together, these artifacts allow developers to build a bill of materials (BOM) package describing the exact versions of all the artifacts in a particular version of their software system. Package metadata identifies when and how the package was tested or deployed into a particular environment.
Artifact management is most effective with an artifact repository manager. Artifact repositories contain a complete artifact usage history, similar to the way version control systems track source code changes. They resolve dependencies between package versions, allowing the system to build a dependency graph from the hardware all the way up to the user interface of the application. The ability to verify dependencies through an entire system is powerful for tracking exactly what was (or will be) tested or deployed.
Version control systems: Git, Subversion, Mercurial
Artifact repository managers: Archiva, Artifactory, Nexus, or roll-your-own with ZIP files, metadata, shared storage, and access controls
Language-specific package managers: Composer (PHP), RubyGems, npm (Node.js), Python pip
OS-level package managers: APT, Chocolatey, RPM
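A BOM can be as lightweight as a JSON document that names each artifact, checks version strings against semantic versioning, and rejects duplicates. The schema below is an illustration, not a standard BOM format:

```python
import json
import re

# Minimal semantic-version check: MAJOR.MINOR.PATCH
SEMVER = re.compile(r"^\d+\.\d+\.\d+$")

def build_bom(artifacts):
    """Assemble a bill of materials from artifact descriptors, enforcing
    unique names and semantic versions. Field names are illustrative."""
    seen = set()
    for artifact in artifacts:
        if artifact["name"] in seen:
            raise ValueError(f"duplicate artifact: {artifact['name']}")
        if not SEMVER.match(artifact["version"]):
            raise ValueError(f"non-semantic version: {artifact['version']}")
        seen.add(artifact["name"])
    return json.dumps({"artifacts": artifacts}, sort_keys=True)

bom = build_bom([
    {"name": "web-app",       "version": "2.3.1"},
    {"name": "db-migrations", "version": "1.0.4"},
])
print(bom)
```

Because the BOM pins exact versions, the same document can later be annotated with metadata about where and when those artifacts were tested or deployed.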
Test and Environment Automation
The only manual testing in a deployment pipeline should be for tests that are tough for a computer to handle, such as exploratory testing, inspection of user interface designs, and user acceptance tests. The rest should be automated. Tools for automated testing should operate in a completely headless (non-interactive) manner and be lightweight enough to run across many test servers simultaneously. Teams also need to create testing environments on demand by using environment automation tools that can provision a VM and configure an environment template.
Test automation: JMeter, Selenium/WebDriver, Cucumber (BDD), RSpec (BDD), SpecFlow (BDD), LoadUI (performance), PageSpeed (performance), netem (network emulation), SoapUI (web services), Test Kitchen (infrastructure)
Environment automation: Vagrant, Docker, Packer
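Because headless tests have no interactive dependencies, they fan out easily. The sketch below uses a local thread pool as a stand-in for distributing suites across many test servers; `fake_suite` is a placeholder for invoking a real headless runner (a WebDriver script, a JMeter plan) on a remote agent.

```python
from concurrent.futures import ThreadPoolExecutor

def run_headless(suite_names, workers=4):
    """Run test suites concurrently and collect their results.
    `fake_suite` is a placeholder; a real version would dispatch to a
    headless runner on a test server and return its exit status."""
    def fake_suite(name):
        return (name, "passed")
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(fake_suite, suite_names))

results = run_headless(["smoke", "api", "performance"])
print(results)
```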
Server Configuration and Deployment
Current deployment tools support three models:
Push model: manages the distribution and installation of packages to multiple remote machines. It’s a good choice for smaller systems because it’s usually simple and quick.
Pull model: requires an infrastructure configuration tool such as Chef or Puppet. Supporters say it scales better than push and like that it treats the deployment of application code as just another step in configuring the infrastructure.
Hybrid model: uses a push tool to trigger a pull client on target servers.
For any of the three models, your team must ensure that the process is fully automated, provides detailed information with standard output and error messages, and allows easy and rapid rollback to a stable state.
Push deployment: Capistrano, Fabric, ThoughtWorks Go, MSDeploy, Octopus, Rundeck, various CI and build tools, various ARA tools
Pull deployment: Ansible, Chef, CFEngine, Puppet, Salt
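A push deployment boils down to a per-host command sequence: copy the package, install it, restart the service. The sketch below only builds that plan rather than executing it; the paths, commands, and service name are assumptions, and a real tool such as Capistrano or Fabric adds error handling, output capture, and rollback.

```python
def push_deploy_plan(package, hosts, dest="/opt/app"):
    """Build the per-host command sequence a push deployment would run.
    Commands, paths, and the service name are illustrative only."""
    plan = []
    for host in hosts:
        plan.append(["scp", package, f"{host}:{dest}/"])          # distribute
        plan.append(["ssh", host, f"tar -xzf {dest}/{package} -C {dest}"])  # install
        plan.append(["ssh", host, "systemctl restart app"])       # activate
    return plan

plan = push_deploy_plan("app-2.3.1.tar.gz", ["web1", "web2"])
for cmd in plan:
    print(" ".join(cmd))
```

Keeping the plan as data before execution is also what makes dry runs, logging, and rollback ("replay the plan for the previous package version") straightforward.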
Monitoring and Reporting
Monitoring your system logs is essential for spotting problems and halting the deployment pipeline. Rather than manually collecting logs from each machine in an environment, ship logs to a central store that indexes them and makes them available for searching via a web browser. This is a crucial capability for a continuous delivery environment. The log store should be connected to all environments (including the developer's system) to speed up problem diagnosis and resolution. Monitoring tools should work in a dynamic infrastructure and integrate with yours through scripted configuration.
Log aggregation & search: Fluentd, Graylog2, Logstash, NXLog, Splunk
Metrics, monitoring, audit: collectd, Ganglia, Graphite, Icinga, Sensu, ScriptRock
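The central store's essential behavior, accepting shipped log lines and indexing them for search, can be sketched in a few lines. This toy stands in for a Logstash- or Graylog-style pipeline; the record shape and the word-level tokenization are simplifications.

```python
from collections import defaultdict

class LogStore:
    """A toy central log store: machines ship log lines, and the store
    indexes each word so records can be searched later."""

    def __init__(self):
        self.index = defaultdict(list)

    def ship(self, host, line):
        """Accept a log line from a host and index it by keyword."""
        record = {"host": host, "message": line}
        for word in line.lower().split():
            self.index[word].append(record)

    def search(self, word):
        """Return every shipped record containing the given word."""
        return self.index.get(word.lower(), [])

store = LogStore()
store.ship("web1", "ERROR payment timeout")
store.ship("web2", "request completed")
print(store.search("error"))
```

A real store adds timestamps, retention, and a web UI, but the ship-then-search shape is the same, which is why connecting every environment (including a developer's machine) to one store pays off during diagnosis.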
A Final Look at Some Guiding Principles for Tools
It’s important to understand all the capabilities your team needs before selecting the tools to build your continuous delivery system. The following list details the key areas you should always keep in mind when selecting tools:
Visibility: look for tools that have clear, comprehensive visualizations for everything that your organization needs to track.
Traceability: select tools that will allow you to easily track important metadata from your source code, infrastructure code, binary artifacts, application and infrastructure configuration, VM images, and the deployment pipeline configuration.
Full coverage: tools must cover all of the applicable environments in order for delivery to be consistent. If a tool is too expensive to run in every environment, you shouldn’t choose it.
In addition to these tool selection suggestions, consider using separate tools for CI and visualization/orchestration, because these capabilities have different requirements. Also ensure that versioned, traceable artifacts are the key unit of currency for the deployment pipeline. Avoid simplistic linear sequential pipeline stages where parallel flows would better meet business needs. Finally, insist on a system that tracks, measures, and visualizes the flow of artifacts toward production so that all stakeholders can effectively engage with your software production process.