
Automating ECS: Orchestrating in CodePipeline and CloudFormation (Part 2)

The second part of this series gets into the gritty detail of setting up an automated ECS pipeline.



In my first post on automating the EC2 Container Service (ECS), I described how I automated the provisioning of ECS in AWS CloudFormation using its JSON-based DSL.

In this second and final part of the series, I will demonstrate how to create a deployment pipeline in AWS CodePipeline to deploy changes to ECS Docker images in the EC2 Container Registry (ECR).

In doing this, you’ll not only see how to automate the creation of the infrastructure but also automate the deployment of the application and its infrastructure via Docker containers. This way, you can commit infrastructure, application, and deployment changes as code to your version-control repository and have these changes automatically deployed to production or production-like environments.

The benefit is the customer responsiveness this embodies: You can deploy new features or fixes to users in minutes, not days or weeks.

Pipeline Architecture

In the figure below, you see the high-level architecture for the deployment pipeline.

Deployment Pipeline Architecture
With the exception of the CodeCommit repository creation, most of the architecture is implemented in a CloudFormation template. This is partly because no traditional configuration management tool is required to configure the compute instances.

CodePipeline is a continuous delivery service that enables you to orchestrate every step of your software delivery process in a workflow that consists of a series of stages and actions. These actions perform the steps of your software delivery process.

In CodePipeline, I've defined two stages: Source and Build. The Source stage retrieves code artifacts from a CodeCommit repository whenever someone commits a new change; this initiates the pipeline. The Build stage, which CodePipeline hands off to a Jenkins Continuous Integration server, updates the ECS Docker image (which runs a small PHP web application) within ECR and makes the new application available through an ELB endpoint.

Jenkins is installed and configured on an Amazon EC2 instance within an Amazon Virtual Private Cloud (VPC). The CloudFormation template runs commands to install and configure the Jenkins server, install and configure Docker, install and configure the CodePipeline plugin and configure the job that’s run as part of the CodePipeline build action. The Jenkins job is configured to run a bash script that’s committed to the CodeCommit repository. This bash script updates the ECS service and task definition by running a Docker build, tag and push to the ECR repository. I describe the implementation of this architecture in more detail in this post.


In this example, CodePipeline manages the orchestration of the software delivery workflow. Since CodePipeline doesn’t actually execute the actions, you need to integrate it with an execution platform. To perform the execution of the actions, I'm using the Jenkins Continuous Integration server. I'll configure a CodePipeline plugin for Jenkins so that Jenkins executes certain CodePipeline actions.

In particular, I have an action to update an ECS service. I do this by running a CloudFormation update on the stack. CloudFormation looks for any differences in the templates and applies those changes to the existing stack.

To orchestrate and execute this CloudFormation update, I configure a CodePipeline custom action that calls a Jenkins job. In this Jenkins job, I call a shell script passing several arguments.

Provision Jenkins in CloudFormation

In the CloudFormation template, I create an EC2 instance on which I will install and configure the Jenkins server. This CloudFormation script is based on the CodePipeline starter kit.

To launch a Jenkins server in CloudFormation, you will use the AWS::EC2::Instance resource. Before doing this, you'll create an IAM role and an EC2 security group in the already provisioned VPC (the VPC provisioning is part of the CloudFormation script).
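A skeletal version of this resource might look like the following. Note that this is a sketch: the resource names (JenkinsServer, JenkinsInstanceProfile, JenkinsSecurityGroup, PublicSubnet) and the mapping keys are illustrative stand-ins, not necessarily the exact names used in the template; InstanceType and KeyName are the stack parameters described later.

"JenkinsServer": {
  "Type": "AWS::EC2::Instance",
  "Properties": {
    "ImageId": { "Fn::FindInMap": ["AWSRegionArch2AMI", { "Ref": "AWS::Region" }, "HVM64"] },
    "InstanceType": { "Ref": "InstanceType" },
    "KeyName": { "Ref": "KeyName" },
    "IamInstanceProfile": { "Ref": "JenkinsInstanceProfile" },
    "NetworkInterfaces": [{
      "AssociatePublicIpAddress": true,
      "DeviceIndex": "0",
      "GroupSet": [{ "Ref": "JenkinsSecurityGroup" }],
      "SubnetId": { "Ref": "PublicSubnet" }
    }]
  }
}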

Within the Metadata attribute of the resource (i.e., the EC2 instance on which Jenkins will run), you use the AWS::CloudFormation::Init key to define the configuration to apply. Then, from the instance's user data, you call cfn-init to run those commands on the EC2 instance, passing the stack name and the logical resource to configure (the resource name in this completed snippet is illustrative):

"/opt/aws/bin/cfn-init -v -s ",

Then, you can install and configure Docker:

"# Install Docker\n",
"cd /tmp/\n",
"yum install -y docker\n",

On this same instance, you will install and configure the Jenkins server:

"# Install Jenkins\n",
"yum install -y jenkins-1.658-1.1\n",
"service jenkins start\n",

Then, apply the dynamic Jenkins configuration for the job so that it updates the CloudFormation stack based on arguments passed to the shell script. Here, sed replaces the MY_STACK token with the stack name:

"/bin/sed -i \"s/MY_STACK/",
"/g\" /tmp/config-template.xml\n",

In the config-template.xml, I added tokens that get replaced as part of the commands run from the CloudFormation template. You can see a snippet of this below, in which the command for the Jenkins job calls the configure-ecs.sh bash script with some tokenized parameters.

<command>bash ./configure-ecs.sh MY_STACK MY_ACCTID MY_ECR</command>

All of the commands for installing and configuring the Jenkins Server, Docker, the CodePipeline plugin and Jenkins jobs are described in the CloudFormation template that is hosted in the version-control repository.

Jenkins Job Configuration Template

In the previous code snippets from CloudFormation, you see that I'm using sed to update a file called config-template.xml. This is a Jenkins job configuration file in which I'm updating some token variables with dynamic information that gets passed to it from CloudFormation. This information is used to run a bash script to update the CloudFormation stack, which is described in the next section.

ECS Service Script to Update CloudFormation Stack

The code snippet below shows how the bash script captures the arguments passed by the Jenkins job into bash variables (the variable assignments shown are inferred from the token order in config-template.xml). Later in the script, it uses these bash variables to make a call to the update-stack command in the CloudFormation API to apply a new ECS Docker image to the endpoint.


# Capture the Jenkins job arguments: stack name, AWS account ID, ECR repo name
ecs_stack_name=$1 awsacctid=$2 ecr_repo=$3
uuid=$(date +%s)

In the code snippet below from the configure-ecs.sh script, I'm building, tagging, and pushing the Docker image to my EC2 Container Registry (ECR) repository using the dynamic values passed to this script from Jenkins (which were initially passed from the parameters and resources of my CloudFormation script).

In doing this, it creates a new Docker image for each commit and tags it with a unique id based on date and time. Finally, it uses the AWS CLI to call the update-stack command of the CloudFormation API using the variable information.

eval $(aws --region us-east-1 ecr get-login)

# Build, Tag and Deploy Docker
docker build -t $ecr_repo:$uuid .
docker tag $ecr_repo:$uuid $awsacctid.dkr.ecr.us-east-1.amazonaws.com/$ecr_repo:$uuid
docker push $awsacctid.dkr.ecr.us-east-1.amazonaws.com/$ecr_repo:$uuid

aws cloudformation update-stack --stack-name $ecs_stack_name \
--template-url $ecs_template_url --region us-east-1 \
--capabilities="CAPABILITY_IAM" --parameters \
ParameterKey=AppName,UsePreviousValue=true \
ParameterKey=ECSRepoName,UsePreviousValue=true \
ParameterKey=DesiredCapacity,UsePreviousValue=true \
ParameterKey=KeyName,UsePreviousValue=true \
ParameterKey=RepositoryBranch,UsePreviousValue=true \
ParameterKey=RepositoryName,UsePreviousValue=true \
ParameterKey=InstanceType,UsePreviousValue=true \
ParameterKey=MaxSize,UsePreviousValue=true \
ParameterKey=S3ArtifactBucket,UsePreviousValue=true \
ParameterKey=S3ArtifactObject,UsePreviousValue=true \
ParameterKey=SSHLocation,UsePreviousValue=true \
ParameterKey=YourIP,UsePreviousValue=true \
ParameterKey=ImageTag,ParameterValue=$uuid

Now that you've seen the basics of installing and configuring Jenkins in CloudFormation and what happens when the Jenkins job is run through the CodePipeline orchestration, let's look at the steps for configuring the CodePipeline part of the CodePipeline/Jenkins configuration.

Create a Pipeline using AWS CodePipeline

Before I create a working pipeline, I prefer to model the stages and actions in CodePipeline using Lambda so that I can think through the workflow. To do this, I refer to my blog post on Mocking AWS CodePipeline pipelines with Lambda. I'm going to create a two-stage pipeline consisting of a Source and a Build stage. These stages, and the actions within them, are described in more detail below.

There are five types of action categories in CodePipeline: Source, Build, Deploy, Invoke, and Test. Each action has four attributes: category, owner, provider, and version. There are three types of action owners: AWS, ThirdParty, and Custom. AWS refers to built-in actions provided by AWS. Currently, there are four built-in action providers from AWS: S3, CodeCommit, CodeDeploy, and ElasticBeanstalk. Examples of ThirdParty action providers include RunScope and GitHub.

If none of the action providers suit your needs, you can define custom actions in CodePipeline. In my case, I wanted to run a script from a Jenkins job, so I used the CloudFormation sample configuration from the CodePipeline starter kit for the configuration of the custom build action that I use to integrate Jenkins with CodePipeline. See the snippet below.
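This sketch shows the shape of that custom action type resource; the provider name, artifact limits, and URL templates here are illustrative, so consult the starter kit for the exact values:

"CustomJenkinsActionType": {
  "Type": "AWS::CodePipeline::CustomActionType",
  "Properties": {
    "Category": "Build",
    "Provider": "EcsJenkinsBuild",
    "Version": "1",
    "ConfigurationProperties": [{
      "Key": true,
      "Name": "ProjectName",
      "Queryable": true,
      "Required": true,
      "Secret": false,
      "Type": "String"
    }],
    "InputArtifactDetails": { "MaximumCount": 5, "MinimumCount": 0 },
    "OutputArtifactDetails": { "MaximumCount": 5, "MinimumCount": 0 },
    "Settings": {
      "EntityUrlTemplate": { "Fn::Join": ["", ["http://", { "Fn::GetAtt": ["JenkinsServer", "PublicIp"] }, "/job/{Config:ProjectName}"]] },
      "ExecutionUrlTemplate": { "Fn::Join": ["", ["http://", { "Fn::GetAtt": ["JenkinsServer", "PublicIp"] }, "/job/{Config:ProjectName}/{ExternalExecutionId}"]] }
    }
  }
}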


The example pipeline that I’ve defined in CodePipeline (and described as code in CloudFormation) uses the above custom action in the Build stage of the pipeline, which is described in more detail in the Build Stage section later.

Source Stage

The Source stage has a single action that looks for any changes to a CodeCommit repository. If it discovers any new commits, it retrieves the artifacts from the CodeCommit repository and stores them in encrypted form in an S3 bucket. If it's successful, it transitions to the next stage: Build.
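In the pipeline definition, the Source stage looks roughly like the following; the action and artifact names are illustrative, while RepositoryName and RepositoryBranch are the stack parameters described later:

{
  "Name": "Source",
  "Actions": [{
    "Name": "CodeCommitSource",
    "ActionTypeId": {
      "Category": "Source",
      "Owner": "AWS",
      "Provider": "CodeCommit",
      "Version": "1"
    },
    "Configuration": {
      "RepositoryName": { "Ref": "RepositoryName" },
      "BranchName": { "Ref": "RepositoryBranch" }
    },
    "OutputArtifacts": [{ "Name": "SourceOutput" }],
    "RunOrder": 1
  }]
}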

Build Stage

The Build stage invokes actions to create a new ECS repository if one doesn't exist, build and tag a Docker image, and call a CloudFormation template to launch the rest of the ECS environment — including an ECS cluster, task definition, ECS services, ELB, security groups, and IAM resources. It does this using the custom CodePipeline action for Jenkins that I described earlier.

The custom action for Jenkins (via the CodePipeline plugin) polls CodePipeline for work. When it finds work, it performs the task associated with the CodePipeline action. In this case, it runs the Jenkins job that calls the configure-ecs.sh script. This bash script makes an update-stack call against the original CloudFormation template, passing in the new image via the ImageTag parameter, which is the new tag generated for the Docker image created as part of this script.
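Wired into the pipeline, the Build stage's action references the custom action type by setting Owner to Custom and passes the Jenkins job name in its configuration. Again, a sketch: the provider, project, and artifact names here are illustrative.

{
  "Name": "Build",
  "Actions": [{
    "Name": "JenkinsBuild",
    "ActionTypeId": {
      "Category": "Build",
      "Owner": "Custom",
      "Provider": "EcsJenkinsBuild",
      "Version": "1"
    },
    "Configuration": { "ProjectName": "ecs-build" },
    "InputArtifacts": [{ "Name": "SourceOutput" }],
    "RunOrder": 1
  }]
}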

CloudFormation applies the minimum necessary changes to the infrastructure based on the stack update. In this case, I'm only providing a new image tag, but this results in creating a new ECS task definition for the service. In your CloudFormation events console, you'll see a message similar to the one below:

AWS::ECS::TaskDefinition Requested update requires the creation of a new physical resource; hence creating one.
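This happens because the container's Image property in the task definition is derived from the ImageTag parameter, so a new tag yields a new task definition revision. A simplified illustration follows; the container name, memory, and port values are made up for this sketch:

"taskdefinition": {
  "Type": "AWS::ECS::TaskDefinition",
  "Properties": {
    "ContainerDefinitions": [{
      "Name": "php-simple-app",
      "Image": { "Fn::Join": ["", [
        { "Ref": "AWS::AccountId" }, ".dkr.ecr.us-east-1.amazonaws.com/",
        { "Ref": "ECSRepoName" }, ":", { "Ref": "ImageTag" }
      ]] },
      "Memory": 128,
      "Essential": true,
      "PortMappings": [{ "ContainerPort": 80, "HostPort": 80 }]
    }]
  }
}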

As I mentioned in part 1 of this series, I defined a DeploymentConfiguration type with a MinimumHealthyPercent property of "0" since I'm only using one EC2 instance while running through the earlier stages of the pipeline. This means the application experiences a few seconds of downtime during the update. Like most applications/services these days, if I needed continuous uptime, I'd increase the number of instances in my Auto Scaling Group and increase the MinimumHealthyPercent property.
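In the service definition, that setting looks something like the following sketch (resource names are illustrative, and MaximumPercent is shown with its default value of 200):

"EcsService": {
  "Type": "AWS::ECS::Service",
  "Properties": {
    "Cluster": { "Ref": "EcsCluster" },
    "DesiredCount": 1,
    "TaskDefinition": { "Ref": "taskdefinition" },
    "DeploymentConfiguration": {
      "MinimumHealthyPercent": 0,
      "MaximumPercent": 200
    }
  }
}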

Other Stages

In the example I provided, I stop at the Build stage. If you were to take this to production, you might include other stages as well. Perhaps you'd add a "Staging" stage with actions that deploy the application to ECS containers using a production-like configuration, which might include more instances in the Auto Scaling Group.

Once Staging is complete, the pipeline would automatically transition to the Production stage where it might make Lambda calls to test the application running in ECS containers. If everything looks ok, it switches the Route 53 hosted zone endpoint to the new container.
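That last step could be as simple as a stack update that points an alias record at the new environment's load balancer. A hypothetical sketch — the hosted zone, record name, and ELB resource name are all made up for this illustration:

"AppDnsRecord": {
  "Type": "AWS::Route53::RecordSet",
  "Properties": {
    "HostedZoneName": "example.com.",
    "Name": "app.example.com.",
    "Type": "A",
    "AliasTarget": {
      "DNSName": { "Fn::GetAtt": ["EcsElb", "DNSName"] },
      "HostedZoneId": { "Fn::GetAtt": ["EcsElb", "CanonicalHostedZoneNameID"] }
    }
  }
}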

Launch the ECS Stack and Pipeline

In this section, you'll launch the CloudFormation stack that creates the ECS and Pipeline resources.


You need to have already created an ECR repository and a CodeCommit repository to successfully launch this stack. For instructions on creating an ECR repository, see part 1 of this series. For creating a CodeCommit repository, you can either see part 1 or use the instructions described in Create and Connect to an AWS CodeCommit Repository.
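If you'd rather define the ECR repository in CloudFormation yourself, the repository is a small resource; this sketch assumes the repository name is passed in as an ECSRepoName parameter:

"EcrRepository": {
  "Type": "AWS::ECR::Repository",
  "Properties": {
    "RepositoryName": { "Ref": "ECSRepoName" }
  }
}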

Launch the Stack

Launch a CloudFormation stack that provisions the ECS environment, including all the resources previously described, such as CodePipeline, the ECS cluster, ECS task definition, ECS service, ELB, VPC resources, IAM roles, etc.

You'll enter values for the following parameters: RepositoryName, YourIP, KeyName, and ECSRepoName.

To launch the stack from your AWS CLI, type the following (modifying the parameter values described above):

aws cloudformation create-stack --stack-name ecs-stack-1648 \
--template-url https://s3.amazonaws.com/stelligent-training-public/public/codepipeline/ecs-pipeline.json \
--region us-east-1 --disable-rollback --capabilities="CAPABILITY_IAM" \
--parameters ParameterKey=RepositoryName,ParameterValue=YOURCCREPO \
ParameterKey=RepositoryBranch,ParameterValue=master \
ParameterKey=KeyName,ParameterValue=YOUREC2KEYPAIR \
ParameterKey=YourIP,ParameterValue=YOURIP/32 \
ParameterKey=ECSRepoName,ParameterValue=YOURECRREPO \
ParameterKey=ECSCFNURL,ParameterValue=NOURL \
ParameterKey=AppName,ParameterValue=app-name-1648


Once the CloudFormation stack successfully launches, there are several outputs. The two most relevant are AppURL and CodePipelineURL. You can click the AppURL value to launch the PHP application running on ECS from the ELB endpoint. The CodePipelineURL output value opens the generated pipeline in the CodePipeline console.


Access the Application

Once the stack successfully completes, go to the Outputs tab for the CloudFormation stack and click on the AppURL value to launch the application.


Commit Changes to CodeCommit

Make some visual changes to the code and commit these changes to your CodeCommit repository to see them deployed through your pipeline. You perform these actions from the directory where you cloned your CodeCommit repo (the directory created by your git clone command). Some example command-line operations are shown below.

git commit -am "change color to pink"
git push

Once these changes have been committed, CodePipeline will discover the changes made to your CodeCommit repo and initiate a new pipeline execution. After the pipeline successfully completes, follow the same instructions for launching the application from your browser.



In this series, you learned how to use CloudFormation to fully automate the provisioning of the EC2 Container Service, along with a CodePipeline pipeline that uses CodeCommit as its version-control repository, so that whenever a change is made to the Git repo, it is automatically applied to a PHP application hosted in Docker containers on ECS.

By modeling your pipeline in CodePipeline, you can add even more stages and actions to your Continuous Delivery process so that it runs through all the tests and other checks, enabling you to deliver changes to production whenever there's a business need to do so.

Sample Code

The code for the examples demonstrated in this post is located here. Let us know if you have any comments or questions @stelligent or @paulduvall.



Published at DZone with permission of Paul Duvall, DZone MVB. See the original article here.

