
Refactoring CD Pipelines – Part 2: Metadata-Driven Pipeline

In this post, Jeff Dugas continues refactoring the deploy scripts from Part 1, getting them set up to use a common pipeline.


In the last post, we created two basic applications, each with a basic shell script to automate deploying them into AWS. In this post, we will continue refactoring those deploy scripts, getting them set up to use a common pipeline. We are aiming to have the pipeline's executable code configured through metadata, allowing us to customize the pipeline through configuration. Although we are not using a build server, one could easily be used to orchestrate the pipeline with the framework that we create here.

Our previous deploy script focused on deploying the application to AWS. If we look at the scripts from each repository's pipeline folder side by side, we notice that they are almost identical. This seems like a good place to practice some code reuse. Let's build out a pipeline that lets us share common code across the two applications; once complete, that common code will be flexible enough to be used across multiple applications.

We start by defining the steps of a pipeline from the existing deploy scripts. By reading the scripts, we can identify that we get the code and gather variables, run some tests, create an AWS CloudFormation (CFN) stack, and run a simple test against each deployed application.

| Pipeline Step | App: blog_refactor_php | App: blog_refactor_nodejs |
| --- | --- | --- |
| SCM Polling | Variables, checkout code… | Variables, checkout code… |
| Static Analysis | foodcritic() | foodcritic(), jslint() |
| App Prerequisites | AWS Relational Database Service (RDS) creation, Chef runlist/attributes upload | Chef runlist/attributes upload |
| App Deployment | AWS Auto Scaling Group (ASG) creation/app deployment | ASG creation/app deployment |
| Acceptance Testing | curl endpoint – expect 200 | curl endpoint – expect 200 |

Logical grouping; note the practical differences between the pipelines

Now that we have the steps laid out, we need to decide on a technology to implement this pipeline.

(Rake + Ruby) > Bash

We could continue to use bash for our pipeline code by adding some structure rather than having a flat script. Even though extracting the steps we have identified into functions would gain us some code reuse, we would still lack features. By switching to a more advanced language, we gain library support that we can leverage to avoid reinventing the wheel. Ruby and Rake seem like a good combination for building the pipeline, since together they fulfill all of these requirements.

Rake is a well-established build tool that leverages the power of Ruby as a dynamic language. Besides defining tasks with prerequisites and parallel task execution, it offers us the ability to define tasks dynamically. Rake is task-oriented, which mirrors our pipeline “steps” idea pretty well. We can also get some flexibility out of Rake with the ability to run tasks directly from the command line or integrate the Rake tasks into a CI/CD system. Since Rake is just Ruby anyway, integrating any classes we create into the tasks should be pretty simple as well.
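To make the "define tasks dynamically" point concrete, here is a minimal, self-contained illustration (separate from the article's gem; the step names are just examples) of generating Rake tasks from plain Ruby data:

```ruby
require 'rake'

# Define one Rake task per entry in an ordinary Ruby array -- the same
# mechanism the pipeline gem uses to turn metadata into tasks.
steps = %w[static_analysis app_deployment acceptance_testing]

steps.each do |step|
  Rake::Task.define_task(step.to_sym) do
    puts "Running #{step}"
  end
end

# Tasks can now be invoked programmatically or from the command line
Rake::Task['static_analysis'].invoke  # prints "Running static_analysis"
```

Because the tasks are created at load time from data, adding a step to the array (or, in the gem's case, to the metadata file) adds a task without touching any task-definition code.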

To maximize code reuse in an easy, repeatable way, you can create a Ruby gem to house common code. This is the approach we took: using metadata to dynamically define Rake tasks and wiring those tasks to reusable classes in a Ruby gem.

Not Your Parent’s Rakefile

Our approach uses Rake primarily as the connective tissue between a hypothetical CI/CD server and the underlying Ruby code that executes the pipeline step logic.

Normally, Rake tasks are defined alongside the code they execute, either in a Rakefile housing multiple tasks or split into separate .rake files. For our sample applications to leverage the pipeline gem, we also use a Rakefile, but its job is mostly to read the application’s pipeline metadata and convey it to the gem’s Rakefile.

require 'yaml'
require 'blog_refactor_gem'

# Read in the application's pipeline metadata
meta_path = File.join(File.expand_path(File.dirname(__FILE__)), 'pipeline-meta.yml')
@meta = YAML.load(IO.read(meta_path))

# Define where our pipeline will keep its parameter store,
#   responsible for passing parameters from task to task during the execution of the pipeline.
@store_path = File.join(File.expand_path(File.dirname(__FILE__)), 'store.json')

# Invoke the gem's Rakefile. It will dynamically derive tasks from the metadata
require 'blog_refactor_rake'

The gem’s Rakefile iterates over the steps array in the pipeline metadata, defining one Rake task per pipeline step. Each Rake task’s pipeline functionality is delegated to a dynamically instantiated Ruby class: the ‘worker’ class assigned to that step in the metadata. In outline (class and key names simplified for illustration), the gem’s Rakefile looks like this:

require 'rake'

# Create the parameter store at the path the application's Rakefile chose
@store = BlogRefactorGem::ParameterStore.new(path: @store_path)

# Define one Rake task per step in the application's pipeline metadata
@meta['steps'].each do |step|
  namespace :build do
    namespace step['phase'].to_sym do
      task step['name'].to_sym do
        # Dynamically instantiate the worker class assigned to this step;
        # the injected store gives it access to upstream task outputs
        Object.const_get(step['worker']).new(store: @store)
      end
    end
  end
end

The @store variable is an instance of a parameter-store class; substitute with any parameter or credentials store in your implementations. Injected into each worker class, the store instance gives the worker access to any outputs from previous Rake tasks, as well as the ability to create outputs for downstream Rake tasks.
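A minimal file-backed store along these lines might look like the following sketch. The class name and JSON layout are our own illustration, not the gem's actual implementation; as noted above, any parameter or credentials store can be substituted.

```ruby
require 'json'

# Illustrative parameter store: persists task outputs to a JSON file so
# that downstream Rake tasks (possibly separate processes) can read them.
class ParameterStore
  def initialize(path:)
    @path = path
  end

  # Return the value a previous task stored under attrib_name (or nil)
  def get(attrib_name:)
    read_data[attrib_name.to_sym]
  end

  # Store a value for downstream tasks to consume
  def set(attrib_name:, value:)
    data = read_data
    data[attrib_name.to_sym] = value
    File.write(@path, JSON.generate(data))
  end

  private

  def read_data
    return {} unless File.exist?(@path)
    JSON.parse(File.read(@path), symbolize_names: true)
  end
end
```

Because state lives in a file rather than in memory, each Rake task can run as its own process (handy when a CI server invokes tasks individually) and still see its predecessors' outputs.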


Figure 1: an application’s pipeline metadata becomes Rake tasks
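A pipeline metadata file along these lines might look like the following. The key names here are illustrative, not the gem's actual schema:

```yaml
# pipeline-meta.yml -- illustrative schema
application: blog_refactor_nodejs
steps:
  - name: static_analysis
    phase: commit
    worker: Build::Commit::StaticAnalysis
  - name: app_deployment
    phase: acceptance
    worker: Build::Acceptance::AppDeployment
  - name: acceptance_testing
    phase: acceptance
    worker: Build::Acceptance::AcceptanceTesting
```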

The steps are just Ruby classes; your team codes them to match what your pipelines need to do. Similarly, your team should code the store class to match your team’s needs. Because of this, what we’re showing here is more in line with a framework to help your team maximize code reuse.

That seems like a sweet piece of tech, but what do we do when one of our applications has pipeline needs that don’t align perfectly with our gem’s worker class capabilities?

Down With Conformity!

For our post, we have refactored both our PHP and NodeJS applications to leverage the pipeline gem. While most of the pipeline gem’s worker classes are sufficient to support each application’s CD pipeline, our framework needed to be flexible enough to extend a worker class as well as to support fully-custom steps.


Figure 2: worker classes can come from your application pipeline (left) or the pipeline gem (center) in order to support the corresponding Rake task (right).

Extending Standard Steps via Worker-Class Inheritance

The gem defines a class that performs some static code analysis as part of the CD pipeline, namely running foodcritic against the application’s pipeline cookbook. To also support linting for the NodeJS application, we can follow these steps so that it provides its own pipeline customization.

First, we create a Ruby class (we called it `ExtendedStaticAnalysis`) in the NodeJS application’s pipeline/lib folder that inherits from the gem’s StaticAnalysis class. This gives us access to execute the foodcritic tests provided by the base worker class.

module Build
  module Commit
    class ExtendedStaticAnalysis < StaticAnalysis
      def initialize(store:)
        # execute base class logic first
        super(store: store)
        # execute custom logic
        execute_jslint(working_directory: store.get(attrib_name: "params")[:working_directory])
      end
    end
  end
end

Next, we add a method to ExtendedStaticAnalysis that performs the jslint analysis.

def execute_jslint(working_directory:)
  Dir.chdir(working_directory) do
    puts "Running jslint on #{working_directory}..."
    results = `find . -name "*.js" -print0 | xargs -0 jslint`
    puts results
  end
end

Finally, we change our application’s pipeline metadata so that it will instantiate this new class instead of the gem’s standard worker to perform that step. If we then run `rake --tasks` to show the steps our pipeline now supports, we’ll see a `build:commit:extended_static_analysis` task in the list! (See Figure 1.)
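With a hypothetical metadata schema (the key names below are illustrative), the swap amounts to pointing the step at the application's subclass instead of the gem's standard worker:

```yaml
# Illustrative metadata change for the NodeJS application
steps:
  - name: extended_static_analysis
    phase: commit
    worker: Build::Commit::ExtendedStaticAnalysis
```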

Adding Custom Steps

As with extending step classes, your application’s pipeline can implement its own steps instead of using one of the built-in step implementations. If there’s a serious mismatch between what a pipeline needs to do and what the gem provides, we can also create new worker classes in the gem to support a whole new category of pipeline steps.
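A fully-custom step follows the same constructor contract as the gem's workers: take the store, do the work, raise on failure. As an illustration (the class and parameter names are hypothetical), here is a step that replays the acceptance test from the table above by requesting the deployed endpoint and expecting HTTP 200:

```ruby
require 'net/http'
require 'uri'

module Build
  module Acceptance
    # Hypothetical custom step: fails the pipeline unless the deployed
    # endpoint answers with HTTP 200 ("curl endpoint -- expect 200").
    class SmokeTest
      def initialize(store:)
        # Pull the endpoint URL from upstream task outputs via the store
        url = store.get(attrib_name: "params")[:endpoint_url]
        response = Net::HTTP.get_response(URI(url))
        unless response.code == "200"
          raise "Smoke test failed: #{url} returned #{response.code}"
        end
        puts "Smoke test passed: #{url}"
      end
    end
  end
end
```

Wire a class like this into a step in the metadata and it becomes a Rake task like any built-in worker, with no changes to the gem.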

That’s a Wrap

We now have a gem that can be reused and extended for our application pipelines and more. By encapsulating the common logic into Ruby classes in the gem, we’ve eliminated the repetitious code (and the temptation to copy and paste!) without taking away the flexibility to have custom logic in specific pipeline steps. When we expand our suite of applications, we rely on metadata and a small amount of custom code where necessary. Instead of needing to execute Rake tasks from a wrapper script or by hand (great for step development), you can integrate this with your CI or CD server.

See these GitHub repositories referenced by the article:


Topics:
devops ,pipelines ,cd ,refactoring

Published at DZone with permission of Jeff Dugas, DZone MVB. See the original article here.

