Deployment Pipelines in Amazon Web Services – An Introduction

This is the first article in a series describing the typical stages and steps we define when implementing deployment pipelines for our customers. It introduces what will be covered in more detail in the rest of the series.

All we do at Stelligent is implement Continuous Delivery in Amazon Web Services (AWS). This also means we embrace DevOps principles. While many of the practices are relevant to any provider (in the cloud or within data centers), I’m using some specific AWS references in this article.

The goal of continuous delivery is to have always-releasable software – based on recent good changes and at any point in time. Unlike continuous deployment, it doesn’t mean you’re actually releasing the software with every good change; it just means that you can release the software with every good change.

To do this, you automate every step of the release process into a deployment pipeline – the key architectural construct of Continuous Delivery. In a deployment pipeline, you group these steps into stages. You might still include manual activities such as exploratory testing, system governance or development testing in the pipeline, but the execution of all of these steps is orchestrated by the pipeline.

At a coarse-grained level, there are four components that make up a software system: application/service code, configuration, infrastructure and data. Often, there are many other resources used to create this software system (e.g. builds, tests, static analysis, etc.), but these four components are the ones that ultimately get delivered to users.

While the number of deployment-pipeline stages may vary per implementation, we often begin with a set of predefined stages and then customize them based on our customers’ requirements. The stage and step names are predefined, not the implementation. Some of our stage names are based on the Continuous Delivery book, while others are based on stages we include in many of our implementations. To be clear, you can call these stages whatever you want (on one project we just numbered them). We use these names because they’re contextual and easier to explain to others. What’s more, bootstrap and self-service aren’t “stages” in the sequential sense – this will become more obvious throughout the article series. The stage names we typically start out with are listed below (a small sketch of these stages and their triggers follows the list):

  • 0-bootstrap
  • 1-image
  • 2-commit
  • 3-acceptance
  • 4-self-service
  • 5-exploratory
  • 6-capacity
  • 7-pre-production
  • 8-production
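For orientation, here is a minimal Python sketch that records these stages alongside the triggers described in the rest of this article. The stage names are ours; the data structure itself is purely an illustration, not part of our tooling.

```python
# Illustrative only: the pipeline stages and their triggers, as described
# later in this article. The ordering mirrors the stage-number prefixes.
PIPELINE_STAGES = {
    "0-bootstrap":      "run on demand (single-command launch of system resources)",
    "1-image":          "infrastructure or configuration commit",
    "2-commit":         "non-infrastructure code commit (polled from version control)",
    "3-acceptance":     "successful commit stage",
    "4-self-service":   "pull-based; run on demand by any authorized team member",
    "5-exploratory":    "successful acceptance stage",
    "6-capacity":       "successful acceptance stage",
    "7-pre-production": "successful capacity and exploratory stages",
    "8-production":     "successful pre-production stage (button push)",
}

for stage, trigger in PIPELINE_STAGES.items():
    print(f"{stage}: triggered by {trigger}")
```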

The stages and steps are illustrated in the diagram below.

Continuous Delivery in AWS at Stelligent

The high-level purpose of each of the deployment-pipeline stages is described below.

The purpose of the bootstrap “stage” is to be able to execute single-command operations to launch system resources. For example: launch a Virtual Private Cloud (VPC) network, launch a local development environment (e.g. Vagrant), launch a Continuous Integration (CI) environment, run a self-service deployment and launch the support infrastructure.
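As a concrete illustration, a single-command VPC launch might look like the following boto3 sketch. The stack name, template URL, and parameters are hypothetical placeholders, not our actual tooling.

```python
import boto3

# Hypothetical single-command bootstrap: launch a VPC from a versioned
# CloudFormation template. Stack name, template URL, and parameters are
# placeholders for illustration.
cfn = boto3.client("cloudformation")

cfn.create_stack(
    StackName="example-vpc",  # hypothetical stack name
    TemplateURL="https://s3.amazonaws.com/example-bucket/vpc.template",  # placeholder
    Parameters=[
        {"ParameterKey": "CidrBlock", "ParameterValue": "10.0.0.0/16"},
    ],
)

# Block until the network is fully provisioned.
cfn.get_waiter("stack_create_complete").wait(StackName="example-vpc")
print("VPC stack launched")
```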

The image stage in the deployment pipeline is triggered by an infrastructure or configuration commit. The output is usually an Amazon Machine Image (AMI).
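One way such a stage can bake its AMI output, sketched with boto3; the instance ID and naming scheme are assumptions for illustration, not a description of our implementation.

```python
import time
import boto3

ec2 = boto3.client("ec2")

# Assumption: "i-0123456789abcdef0" is an instance that has already been
# configured by the infrastructure/configuration commit being baked.
response = ec2.create_image(
    InstanceId="i-0123456789abcdef0",       # hypothetical instance
    Name=f"base-image-{int(time.time())}",  # hypothetical naming scheme
    Description="AMI produced by the image stage",
)
ami_id = response["ImageId"]

# Wait until the AMI is available for downstream stages to launch from.
ec2.get_waiter("image_available").wait(ImageIds=[ami_id])
print(f"image stage output: {ami_id}")
```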

The commit stage in the deployment pipeline is triggered by polling a version-control repository for non-infrastructure code commits (application code, configuration, data, etc.). The commit stage builds the software, runs unit tests and static analysis, and stores the software distribution in a repository. If successful, it immediately triggers the acceptance stage of the pipeline.
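A stripped-down commit stage could look like the following sketch; the build, test, and analysis commands and the S3 bucket are placeholders, not our actual toolchain.

```python
import subprocess
import boto3

# Placeholder commands; substitute your own build, unit-test, and
# static-analysis tools. Any non-zero exit aborts the stage (check=True).
for step in (
    ["./build.sh"],               # build the software
    ["./run-unit-tests.sh"],      # run unit tests
    ["./run-static-analysis.sh"], # run static analysis
):
    subprocess.run(step, check=True)

# Store the distribution in a repository -- here, a hypothetical S3 bucket.
boto3.client("s3").upload_file(
    "dist/app.tar.gz", "example-artifact-bucket", "app/app.tar.gz"
)
print("commit stage passed; triggering acceptance stage")
```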

The acceptance stage in the deployment pipeline is triggered by a successful commit stage. It launches an environment (from the image stage) and runs longer-running tests against this environment. The output is a new AMI with the application/service deployed on top of the image-stage AMI. If successful, it immediately triggers both the exploratory and capacity stages of the pipeline.
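In outline, an acceptance stage along these lines could launch the image-stage AMI, deploy and test against it, then bake the result into a new AMI. The AMI ID, instance type, and naming scheme below are assumptions, and the deploy/test tooling is elided.

```python
import time
import boto3

ec2 = boto3.client("ec2")

# Launch an environment from the image-stage AMI (hypothetical ID).
instance = ec2.run_instances(
    ImageId="ami-0abc1234def567890", InstanceType="m3.medium",
    MinCount=1, MaxCount=1,
)["Instances"][0]
instance_id = instance["InstanceId"]
ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])

# ... deploy the application/service and run the longer-running
# acceptance tests against this environment here (tooling omitted) ...

# Bake the output: a new AMI with the application/service on top of
# the image-stage AMI.
ami_id = ec2.create_image(
    InstanceId=instance_id,
    Name=f"acceptance-{int(time.time())}",  # hypothetical naming scheme
)["ImageId"]
ec2.get_waiter("image_available").wait(ImageIds=[ami_id])
print(f"acceptance stage output: {ami_id}")
```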

The purpose of a self-service deployment is to provide a pull-based system in which any authorized team member can access their own scaled-down environment running the software system. The general assumption is that a self-service deployment uses the environment generated as part of the acceptance stage.
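A minimal sketch of that idea: launch a scaled-down instance from the acceptance-stage AMI, tagged with its owner so environments are easy to find and tear down. The instance type, tag names, and AMI ID are assumptions for illustration.

```python
import boto3

def launch_self_service_env(owner: str, acceptance_ami: str) -> str:
    """Launch a scaled-down environment from the acceptance-stage AMI."""
    ec2 = boto3.client("ec2")
    instance = ec2.run_instances(
        ImageId=acceptance_ami,
        InstanceType="t2.micro",  # assumption: scaled down vs. production
        MinCount=1, MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "Owner", "Value": owner}],  # hypothetical tag
        }],
    )["Instances"][0]
    return instance["InstanceId"]

# Hypothetical usage by an authorized team member:
# launch_self_service_env("jane", "ami-0abc1234def567890")
```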

The exploratory stage in the deployment pipeline is triggered by a successful acceptance stage. In the exploratory stage, any authorized person can click a button to run a self-service deployment. Once exploratory tests have run, the user can approve or reject this deployment-pipeline stage.

The capacity stage in the deployment pipeline is triggered by a successful acceptance stage. It verifies that the application/service can run under expected and peak loads. If successful, it notifies the pre-production stage of the pipeline to begin once the exploratory stage is also complete (and vice versa).
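A toy load check in that spirit, using only the Python standard library. The URL, request counts, and thresholds are invented for illustration; a real capacity stage would typically use a dedicated load-testing tool.

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://capacity-env.example.com/health"  # hypothetical endpoint

def hit(_):
    """Issue one request; return (succeeded, latency_seconds)."""
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(URL, timeout=5) as resp:
            ok = resp.status == 200
    except OSError:
        ok = False
    return ok, time.perf_counter() - start

# Simulate peak load with concurrent requests (numbers are illustrative).
with ThreadPoolExecutor(max_workers=50) as pool:
    results = list(pool.map(hit, range(1000)))

error_rate = 1 - sum(ok for ok, _ in results) / len(results)
worst = max(latency for _, latency in results)
assert error_rate < 0.01 and worst < 2.0, "capacity stage failed"
print(f"capacity ok: error rate {error_rate:.2%}, worst latency {worst:.2f}s")
```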

The pre-production stage in the deployment pipeline is triggered by successful capacity and exploratory stages. In this stage, the environment is scaled to the size of production. If successful, it’s made available to the production stage of the deployment pipeline.
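Scaling an environment to production size could be as simple as resizing an Auto Scaling group, as in this boto3 sketch; the group name and capacities are placeholders.

```python
import boto3

# Resize the pre-production Auto Scaling group to match production
# (group name and capacities are hypothetical).
boto3.client("autoscaling").update_auto_scaling_group(
    AutoScalingGroupName="preprod-asg",
    MinSize=4,
    MaxSize=12,
    DesiredCapacity=8,  # assumption: production runs eight instances
)
print("pre-production scaled to production size")
```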

The production stage in the deployment pipeline is triggered by a successful pre-production stage. In this stage, an authorized team member can click a button to deploy to production – if there’s a business need to do so. Clicking the button immediately deploys the software system to production.
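One common button-push pattern (not necessarily the one behind our production stage) is a DNS cutover: repoint the production record at the environment that passed pre-production. The hosted zone ID, record name, and load balancer DNS name below are placeholders.

```python
import boto3

# Hypothetical blue/green-style cutover: point production DNS at the
# load balancer of the environment that passed pre-production.
boto3.client("route53").change_resource_record_sets(
    HostedZoneId="Z0000000000000",  # placeholder zone ID
    ChangeBatch={
        "Comment": "promote pre-production environment to production",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com.",
                "Type": "CNAME",
                "TTL": 60,
                "ResourceRecords": [
                    {"Value": "preprod-elb.example.amazonaws.com"}  # placeholder
                ],
            },
        }],
    },
)
print("production cutover complete")
```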

That’s a high-level view on the various stages we tend to include in our deployment pipelines in AWS. In the next article, I’ll go over the bootstrap “stage” in greater detail. Stay tuned!

Published at DZone with permission of Paul Duvall, DZone MVB.