
Refactoring CD Pipelines – Part 1: Chef-Solo in AWS AutoScaling Groups

Going through a refactor of a CloudFormation template that's using Chef and AWS.



We often overlook similarities between new CD Pipeline code we’re writing today and code we’ve already written. In addition, we might sometimes rely on the ‘copy, paste, then modify’ approach to make quick work of CD Pipelines supporting similar application architectures. Despite the short-term gains, what often results is code sprawl and maintenance headaches. This blog post series is aimed at helping to reduce that sprawl and improve code re-use across CD Pipelines.

This mini-series references two real but simple applications, hosted in GitHub, representing the kind of organic growth phenomenon we often encounter and sometimes even cause, despite our best intentions. We’ll refactor them each a couple of times over the course of this series, with this post covering CloudFormation template reuse.

Chef’s a Great Tool… Let’s Misuse It!

During the deployment of an AWS AutoScaling Group, we typically rely on the user data section of the Launch Configuration to configure new instances in an automated, repeatable way. A common automation practice is to use chef-solo to accomplish this. We chose chef-solo deliberately: it's a great fit for immutable infrastructure approaches. Both applications have CD Pipelines that leverage it as a scale-time configuration tool by reading a JSON document describing the actions and attributes to be applied.

It’s All Roses

It’s a great approach: we sprinkle in a handful or two of CloudFormation parameters to support our Launch Configuration, embed the chef-solo JSON in the user data, and decorate it with references to the CloudFormation parameters. Voila, we’re done! The implementation hardly took any time (probably less than an hour per application if you could find good examples on the internet), and each time we need a new CD Pipeline, we can just stamp out a new CloudFormation template.

Figure 1: Launch Configuration User Data (As Plain Text)

apt-get update
apt-get install git -y
git clone git://github.com/stelligent/blog_refactor_php /opt/greeter
if [ ":" == ":$(dpkg -l | grep chefdk)" ]; then
  pushd /tmp
  wget --quiet https://packages.chef.io/stable/ubuntu/12.04/chefdk_0.16.28-1_amd64.deb
  dpkg -i chefdk_0.16.28-1_amd64.deb
  popd
fi
mkdir /tmp/cookbooks
pushd /opt/greeter/pipelines/cookbooks/greeter
berks vendor /tmp/cookbooks
popd
cat > /tmp/chef.json <<CHEFJSON
{
  "run_list": [ "recipe[greeter]" ],
  "greeter": {
    "db_url": "%{DbUrl}",
    "db_name": "%{DbName}",
    "username": "%{DbUsername}",
    "password": "%{DbPassword}",
    "docroot": "%{DocRoot}",
    "server_name": "%{ServerName}"
  }
}
CHEFJSON
cat > /tmp/solo.rb <<SOLO
cookbook_path ['/tmp/cookbooks', '/opt/greeter/pipelines/cookbooks']
SOLO
chef-solo -c /tmp/solo.rb -j /tmp/chef.json

Figure 2: CloudFormation Parameters (Corresponding to Figure 1)

    "DbPassword": {
      "Type": "String",
      "Description": "Password for the database",
      "NoEcho": true
    },
    "DbUsername": {
      "Type": "String",
      "Description": "Username for the database"
    },
    "DbUrl": {
      "Type": "String",
      "Description": "URL to reach the database"
    },
    "DbName": {
      "Type": "String",
      "Description": "Database name the application should use"
    },
    "DocRoot": {
      "Type": "String",
      "Description": "Path used for the root of the web application"
    },
    "ServerName": {
      "Type": "String",
      "Description": "Application server name used in the Apache config",
      "Default": "localhost"
    }

Well, it’s Mostly Roses…

Why is it, then, that a few months and a dozen or so CD Pipelines later, we’re spending all our time debugging and doing maintenance on what should be minor tweaks to our application configurations? New configuration parameters take hours of trial and error, and while a new application pipeline can be copied and pasted into place, it still takes hours to scrape out the previous application’s specifics from its CloudFormation template and replace them.

Fine, It’s Got Thorns, and They’re Slowing Us Down

Maybe our great solution could have been better? Let’s start with the major pitfall to our original approach: each application we support has its own highly-customized CloudFormation template.

  • Lots of application-specific CFN parameters exist solely to shuttle values to the chef-solo JSON
  • Fairly convoluted user data, containing an embedded JSON structure and parameter references, is a bear to maintain
  • Tracing parameter values from the CD Pipeline, traversing the CFN parameters into the user data… that’ll take some time to debug when it goes awry

One Path to Code Reuse

Since we’re referencing two real GitHub application repositories that demonstrate our current predicament, we’ll continue using those repositories to present our solution via a code branch named Phase1 in each repository. At this point, we know our applications share enough of a common infrastructure approach that they should be sharing that part of the CloudFormation template.

The first part of our solution will be to extract the ‘differences’ from the CloudFormation templates between these two application pipelines. That should leave us with a common skeleton to work with, minus all the Chef-specific items and user data, which will allow us to push the CFN template into an S3 bucket shared by both application CD Pipelines.

The second part will be to add back the required application specificity, but in a way that migrates those differences from the CloudFormation templates to external artifacts stored in S3.

Taking it Apart

Our first tangible goal is to make the user data generic enough to support both applications. We start by moving the inline chef-solo JSON to its own plain JSON document in each application’s pipeline folder (/pipelines/config/app-config.json). Later, we’ll modify our CD pipelines so they can make application and deployment-specific versions of that file and upload it to an S3 bucket.
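Extracted to its own file, each application’s Chef attributes might look something like the following sketch (based on the parameters shown earlier; see each repository’s /pipelines/config/app-config.json for the actual contents):

```json
{
  "run_list": [ "recipe[greeter]" ],
  "greeter": {
    "db_url": "%{DbUrl}",
    "db_name": "%{DbName}",
    "username": "%{DbUsername}",
    "password": "%{DbPassword}",
    "docroot": "%{DocRoot}",
    "server_name": "%{ServerName}"
  }
}
```

The `%{...}` placeholders stay in the checked-in file; the pipeline fills them in per deployment.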

Figure 3: Before/after comparison (diff) of our Launch Configuration User Data

Left: original user data; Right: updated user data

The second goal is to make a single, vanilla CloudFormation template. Since removing the parts of the user data that referenced the Chef-only CloudFormation parameters orphaned those parameters, we can now remove them as well. The resulting template can focus on meeting the infrastructure concerns of our applications.

Figure 4: Before/after comparison (diff) of the CloudFormation parameters required

Left: original CFN parameters; Right: pared-down parameters

At this point, we have eliminated all the differences between the CloudFormation templates, but now they can’t configure our application! Let’s fix that.

Reassembling it for Reuse

Our objective now is to make our Launch Configuration user data truly generic so that we can actually reuse our CloudFormation template across both applications. We do that by scripting it to download the JSON that Chef needs from a specified S3 bucket. At the same time, we enhance the CD Pipelines by scripting them to create application and deploy-specific JSON, and to push that JSON to our S3 bucket.
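As a sketch, the pipeline’s deploy-time step might substitute real values into the checked-in config before pushing it to S3 (the file names, values, and sed-based substitution here are illustrative, not the repositories’ actual scripts):

```shell
# Hypothetical checked-in config template with %{...} placeholders
cat > app-config.json <<'EOF'
{ "greeter": { "db_url": "%{DbUrl}", "db_name": "%{DbName}" } }
EOF

# The CD pipeline fills in deploy-specific values...
sed -e 's/%{DbUrl}/db.example.com/' \
    -e 's/%{DbName}/greeter/' \
    app-config.json > chef.json

# ...then pushes the result to the config bucket (bucket/key are placeholders):
# aws s3 cp chef.json s3://my-config-bucket/greeter/deploy-42/chef.json
```

The instance-side half is the mirror image: the user data downloads that same object to /tmp/chef.json and hands it to chef-solo, instead of carrying the JSON inline.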

Figure 5: Chef JSON stored as a deploy-specific object in S3

The S3 key is unique to the deployment.

To stitch these things together, we add back one CloudFormation parameter, ChefJsonKey, required by both CD Pipelines; at execution time, its value is the S3 key from which the Chef JSON will be downloaded. (Since our CD Pipeline created that file, it’s primed to provide that parameter value when it executes the CloudFormation stack.)
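In the shared template, that one parameter might be declared in the same style as the others (a sketch; the parameter name matches the article, the rest of the template is elided):

```json
"ChefJsonKey": {
  "Type": "String",
  "Description": "S3 key of the deploy-specific Chef JSON document"
}
```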

Two small details remain. First, we give our AutoScaling Group instances permission to download from that S3 bucket. Second, now that we’re convinced our CloudFormation template is as generic as it needs to be, we upload it to S3 and have our CD Pipelines reference it by its S3 URL.
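Granting the instances read access is typically done through an instance profile; a minimal sketch of the relevant IAM policy statement (the bucket name is a placeholder, not the one used in the repositories):

```json
{
  "Effect": "Allow",
  "Action": [ "s3:GetObject" ],
  "Resource": "arn:aws:s3:::my-config-bucket/*"
}
```

With the template itself in S3, the pipelines can then launch stacks by passing its S3 URL (for example via the AWS CLI’s `--template-url` option) rather than shipping a template file per application.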

Figure 6: Our S3 bucket structure ‘replaces’ the /pipeline/config folder 

The templates can be maintained in GitHub.

That’s a Wrap

We now have a vanilla CloudFormation template that supports both applications. When an AutoScaling Group scales up, the new servers download a Chef JSON document from S3 in order to execute chef-solo. We were able to eliminate that template from both application pipelines and still get all the benefits of Chef-based server configuration.

See these GitHub repositories referenced throughout the article:

In Part 2 of this series, we’ll continue our refactoring effort with a focus on the CD Pipeline code itself.



Published at DZone with permission of Jeff Dugas, DZone MVB. See the original article here.

