
Steps to Creating an Effective DevOps-Focused Deployment Pipeline


Learn about the core stages of continuous delivery, how you can effectively use a Kanban board within your team, and more.


Continuous Delivery is a set of practices and principles aimed at building, testing, and releasing software faster and more frequently. If you're lucky enough to start out in a "greenfield" organization without an established coding culture, it's a good idea to create and automate your software delivery pipeline up front. If you succeed out of the gate in creating a Continuous Delivery pipeline, your business will be far more competitive: you'll get higher-quality software into the hands of your users and customers faster than your competitors, and you'll be able to react to business demand and change much more rapidly.

If you are adding a Continuous Delivery (CD) pipeline to an existing organization, where you start depends on your current development and testing practices and the bottlenecks in your software delivery process. These bottlenecks can include slow, error-prone manual processes as well as poor-quality, big-bang rollouts that fail in production, leading to unhappy users. 

There are several ways to get a handle on the current state of your deployment processes, including using workflow visualization tools like flowcharts and business process maps to break down and understand your current delivery processes. One of the simplest visual process management tools you can use to help make these kinds of decisions is a Kanban board.

Kanban boards, like the one pictured below, are typically just sticky notes on a whiteboard that are used to communicate project status, progress, and other issues.

Source: Jeff.lasovski
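
If you want to experiment with work-in-progress (WIP) limits before committing them to a physical board, a Kanban board is simple to model in a few lines of code. The sketch below is purely illustrative; the column names, limits, and cards are made up.

```python
# Illustrative Kanban board: columns with work-in-progress (WIP) limits.
# All names and limits here are hypothetical examples.
BOARD = {
    "To Do":       {"wip_limit": None, "cards": ["automate builds", "pick CI server"]},
    "In Progress": {"wip_limit": 3,    "cards": ["write unit tests"]},
    "Done":        {"wip_limit": None, "cards": []},
}

def pull(card: str, src: str, dst: str) -> bool:
    """Move a card only if the destination column still has WIP capacity."""
    limit = BOARD[dst]["wip_limit"]
    if limit is not None and len(BOARD[dst]["cards"]) >= limit:
        return False  # respect the WIP limit: finish work before starting more
    BOARD[src]["cards"].remove(card)
    BOARD[dst]["cards"].append(card)
    return True

if __name__ == "__main__":
    print(pull("automate builds", "To Do", "In Progress"))
```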

Many organizations today are also experimenting with Value Stream Maps (VSMs) to better understand the infrastructure changes needed to automate their software delivery process. Borrowed from the lean manufacturing camp, a VSM is a technique used to analyze value chains, which are the series of events required to bring a product or service to a consumer.

A Value Stream Map, like the one pictured below, shows both material and information flow. Beyond the process flow itself, it includes data associated with each process, such as the inventory held between processes. It describes how material moves from one process to the next, and how information flows between Production Control (a central production scheduling or control department, person, or operation) and the various processes, suppliers, and customers, including customer demand.

Current State Value Stream Map with Environmental, Health and Safety (EHS) Data

Source: US EPA Lean and Environment Toolkit 

Mary and Tom Poppendieck, who adapted the concepts of lean manufacturing and Value Stream Mapping to software development in their highly regarded book Implementing Lean Software Development, stress the importance of starting and ending with real customer demand. This means that an organization delivering multiple software products, such as a game development company, may want to use a VSM to optimize the delivery process for a popular product that brings in the most revenue, before adapting the new process to less-popular products.

Building a successful CD pipeline means creating a DevOps culture of collaboration among the various teams involved in software delivery (developers, operations, quality assurance teams, management, etc.), as well as reducing the cost, time, and risk of delivering software changes by allowing for more incremental updates to applications in production. In practice, this means teams produce software in short cycles, ensuring that the software can be reliably released at any time.

How short can you make your cycles? It depends on the degree of collaboration and trust you can build among the teams involved, as well as the resources and time you devote to automating your delivery process. You can also use Value Stream Mapping to measure your progress in creating a CD pipeline, which can be done in two steps:

  • The first step measures the efficiency of the different build, deploy, and test stages of the current state of your software delivery. When taking time measurements (in whatever unit you choose: minutes, hours, or days), you initially try to determine the execution time and the wait time in each step. Wait time, in this case, is any non-value-added activity such as handoffs, signoffs, manual processes, or delays caused by hardware and software issues (see the sketch that follows this list).
  • The second step measures the efficiency of the different build, deploy, and test stages of your software delivery target state. As you remove non-value-added activity by implementing the core stages of DevOps Continuous Delivery (Continuous Integration, Test Automation, Continuous Deployment, etc.), you'll be able to measure your progress in implementing your CD pipeline.
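
To make the arithmetic behind these two measurement steps concrete, here is a small illustrative sketch (the stage names and timings are hypothetical) that computes process cycle efficiency, meaning value-added execution time divided by total lead time (execution time plus wait time), per stage and for the pipeline as a whole.

```python
# Hypothetical stage timings (in hours) for a current-state value stream:
# each stage has value-added execution time and non-value-added wait time
# (handoffs, signoffs, manual steps, environment delays).
STAGES = {
    "build":  {"execution": 0.5, "wait": 4.0},
    "deploy": {"execution": 1.0, "wait": 24.0},
    "test":   {"execution": 6.0, "wait": 48.0},
}

def cycle_efficiency(stages: dict) -> float:
    """Process cycle efficiency = total execution time / total lead time."""
    execution = sum(s["execution"] for s in stages.values())
    lead_time = execution + sum(s["wait"] for s in stages.values())
    return execution / lead_time

if __name__ == "__main__":
    for name, s in STAGES.items():
        print(f"{name:>6}: {s['execution'] / (s['execution'] + s['wait']):.1%} efficient")
    print(f"overall: {cycle_efficiency(STAGES):.1%} efficient")
```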

It may seem as if you're trying to hit a moving target if the VSM target state is not fully continuous or automated, but that's acceptable: this approach provides a clear and measurable improvement path toward a CD pipeline, and it should highlight many or most of the bottlenecks in your current software delivery approach.

(Check out Asheesh Mehdiratta's use of Value Stream Maps to build a dashboard that highlights the efficiency gains possible in using Continuous Delivery to remove waste from the software production process.)

Core Stages of Continuous Delivery

Because large and slow software releases make for buggy and unreliable code, Continuous Delivery pipelines rely on frequent releases of smaller amounts of functionality.  A typical CD pipeline can be broken down into the following sequence of stages:

Stage One: Build Automation

Build automation is the first stage in moving toward a culture of Continuous Delivery and DevOps. If your developers are practicing test-driven development (TDD), they'll write unit tests for each piece of code, even before the code itself is written. An important part of the agile methodology, TDD helps developers think through the desired behavior of each unit of software they're building, including inputs, outputs, and error conditions. New features implemented by developers are then checked into a central code base prior to the software build, which compiles the source code into binary code. With build automation, the software build happens automatically, using tools such as Makefiles or Ant, rather than when a developer manually invokes the compiler.
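
As a small illustration of the TDD rhythm described above, here is a hypothetical unit test written with Python's built-in unittest module. In practice the test is committed first and the discount() function (an invented example) is implemented to make it pass; the automated build then runs tests like this on every build.

```python
import unittest

# Hypothetical unit under test: in TDD the test below is written first,
# and discount() is implemented afterwards to make it pass.
def discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountTest(unittest.TestCase):
    def test_applies_percentage(self):
        self.assertEqual(discount(100.0, 25), 75.0)

    def test_rejects_invalid_percentage(self):
        with self.assertRaises(ValueError):
            discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```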

Stage Two: Continuous Integration

In Continuous Integration, developers check code into a shared repository several times a day. Each check-in is then verified by an automated build, allowing teams to detect errors and conflicts as soon as possible. Originally one of the fundamental practices of the Extreme Programming (XP) methodology and popularized by developers like Martin Fowler, Continuous Integration (CI) has become an essential ingredient for teams doing iterative and incremental software delivery.
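
Conceptually, the verification a CI server performs on each check-in can be sketched as a short script that runs the build and test steps in order and fails fast. The specific commands below are assumptions; substitute your project's real build and test commands, and in practice let a CI tool rather than a hand-rolled script orchestrate them.

```python
import subprocess
import sys

# Commands are illustrative placeholders; swap in your project's
# actual build and test steps.
STEPS = [
    ["python", "-m", "compileall", "src"],   # cheap "build" / syntax check
    ["python", "-m", "pytest", "tests"],     # unit and component tests
]

def verify_checkin() -> int:
    """Run each step in order; stop at the first failure so the team is notified quickly."""
    for step in STEPS:
        print("running:", " ".join(step))
        result = subprocess.run(step)
        if result.returncode != 0:
            print("FAILED:", " ".join(step))
            return result.returncode
    print("check-in verified")
    return 0

if __name__ == "__main__":
    sys.exit(verify_checkin())
```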

Stage Three: Test Automation

The next stage in implementing your deployment pipeline is test automation. Manual testing is a time-consuming and labor-intensive process and, in most cases, also a non-value-added activity, since you're only verifying that a piece of software does what it's supposed to do.

If developers are integrating code into a shared repository several times a day, testing needs to be done continuously as well. This means running unit tests, component tests (unit tests that touch the filesystem or database), and a variety of acceptance and integration tests on every check-in. Use the following Agile Testing Quadrants matrix (developed by Brian Marick and refined by Lisa Crispin) to prioritize the many different types of tests that you intend to automate in your CD pipeline. There are no hard and fast rules about what tests (performance, functionality, security, user acceptance, etc.) to automate, when to automate them, or even whether certain manual tests really need automation. Crispin and other agile testing experts favor automating unit tests and component tests before other tests since these represent the highest return on investment. 

Source: TechTarget 
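
As an example of a component test in the sense used above (a unit-level test that touches the filesystem), the hypothetical test below writes a small settings file to a temporary directory and reads it back, so it can run on every check-in without depending on shared infrastructure.

```python
import json
import tempfile
import unittest
from pathlib import Path

# Hypothetical unit under test: persists and reloads a settings dict.
def save_settings(path: Path, settings: dict) -> None:
    path.write_text(json.dumps(settings))

def load_settings(path: Path) -> dict:
    return json.loads(path.read_text())

class SettingsRoundTripTest(unittest.TestCase):
    def test_settings_survive_a_round_trip(self):
        # A component test: it touches the real filesystem, but only a temp dir.
        with tempfile.TemporaryDirectory() as tmp:
            path = Path(tmp) / "settings.json"
            save_settings(path, {"retries": 3, "debug": False})
            self.assertEqual(load_settings(path), {"retries": 3, "debug": False})

if __name__ == "__main__":
    unittest.main()
```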

Stage Four: Deployment Automation

In the last stage of the pipeline, once an application passes all the required tests, it's then released into production. For all intents and purposes, this means releasing every good build to users. The upside of deployment automation is that it allows delivery of new functionality to users within minutes whenever it's needed, as well as instant feedback to the DevOps team that, in turn, allows them to respond rapidly to customer demand.

Because deployment is carried out automatically as software moves through the delivery pipeline, and DevOps teams are trying to respond quickly to a variety of customer requests, the downside of automated deployment is the risk of downtime caused by software that isn't production-ready. To achieve zero downtime, DevOps teams commonly rely on a couple of different strategies in their deployment pipelines: “canary releasing” and “blue-green deployment.”

Canary releasing is an allusion to the caged birds that early-day miners carried down into mine tunnels to check whether the air was safe to breathe. This strategy involves releasing the next version of your software into production, but exposing it to only a small percentage of your user base. After it passes a number of environmental tests, you release it to more servers in your infrastructure and route more users to it.
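
The routing decision at the heart of a canary release can be sketched as follows; the version names and the 5% exposure are illustrative, and in real deployments this logic usually lives in a load balancer or service mesh rather than in application code. Keying on a stable user ID means each user consistently sees the same version.

```python
import hashlib

CANARY_PERCENT = 5  # expose the new version to roughly 5% of users

def choose_version(user_id: str) -> str:
    """Deterministically route a small slice of users to the canary build."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "v2-canary" if bucket < CANARY_PERCENT else "v1-stable"

if __name__ == "__main__":
    routed = [choose_version(f"user-{i}") for i in range(10_000)]
    print("canary share:", routed.count("v2-canary") / len(routed))
```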

The blue-green deployment strategy involves setting up the latest version of your application on an identical clone of your production application stack, then switching traffic from the current production stack to the new one as soon as the application passes all the manual and automated tests in your deployment pipeline. This ties into the concept of using cloud resources and virtual infrastructure to set up and manage your deployment pipeline.
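
A blue-green cutover can be sketched just as simply: two identical stacks run side by side, and switching traffic is a matter of repointing the router at the idle stack once the new version passes its tests (rolling back means repointing it again). The stack URLs below are placeholders.

```python
# Illustrative blue-green switch: the "router" is just a mutable pointer.
# Stack URLs are placeholders for two identical production environments.
STACKS = {
    "blue":  "https://blue.example.internal",   # currently serving production
    "green": "https://green.example.internal",  # running the new release
}
live = "blue"

def cut_over() -> str:
    """Switch production traffic to the idle stack; the old one stays warm for rollback."""
    global live
    live = "green" if live == "blue" else "blue"
    return STACKS[live]

if __name__ == "__main__":
    print("before:", STACKS[live])
    print("after :", cut_over())
```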

A fully automated deployment pipeline requires the ability to deploy and release any version of a software application to any environment. Doing this effectively requires infrastructure automation, where environments (machines, network devices, operating systems, middleware, etc.) can be configured and specified in a fully automatable format.

Infrastructure as Code (IaC) has become increasingly widespread with the adoption of cloud computing and Infrastructure as a Service (IaaS), which relies on virtual machines and other resources that cloud providers like Amazon Web Services and Microsoft Azure supply on-demand from their large pools of equipment installed in data centers. IaC lets DevOps teams automatically manage and provision IaaS resources using high-level programming languages. In practice, this means developers have the ability to control most or all IaaS resources via API calls, in order to do things like start a server, load balance, or start a Hadoop cluster. Many DevOps shops also let developers write or modify an IaC template to provision and deploy new applications instead of relying on operators or system administrators to manage the operational aspect of their DevOps environment.
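
As a minimal Infrastructure as Code sketch, the snippet below uses boto3 (the AWS SDK for Python) to launch a single server via an API call, assuming AWS credentials are already configured; the AMI ID, instance type, and tag are placeholders. Dedicated IaC tools add the declarative templates, state tracking, and drift detection that a raw API call lacks.

```python
# Minimal IaC-style sketch using boto3; assumes AWS credentials are configured.
# The AMI ID, instance type, and tag values are placeholders.
import boto3

def provision_app_server(ami_id: str, instance_type: str = "t3.micro") -> str:
    """Launch a single EC2 instance and return its instance ID."""
    ec2 = boto3.client("ec2")
    response = ec2.run_instances(
        ImageId=ami_id,
        InstanceType=instance_type,
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "Role", "Value": "cd-pipeline-demo"}],
        }],
    )
    return response["Instances"][0]["InstanceId"]

if __name__ == "__main__":
    print(provision_app_server("ami-0123456789abcdef0"))  # placeholder AMI ID
```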

Infrastructure as code is a powerful tool for building an effective Continuous Delivery pipeline. But it can also lead to deployment chaos if your organization doesn't have an established DevOps culture in place, with the right tools (such as DevOps test management) and workgroup qualities like teamwork and trust. Use the Value Stream Mapping approach outlined above to build the collaboration and trust you'll need to achieve the goal of a software deployment pipeline that delivers new features to users as quickly and efficiently as possible.


Topics:
devops, kanban, continuous delivery, automation

Published at DZone with permission of Sanjay Zalavadia, DZone MVB. See the original article here.

Opinions expressed by DZone contributors are their own.
