Automated Canary Releasing With Vamp and Jenkins on DC/OS: Part 1


Learn about canary releases, a method that reduces the risk of introducing new versions of an application into an existing environment, in a Jenkins pipeline.

Canary releasing is a powerful deployment method that reduces the risk of introducing a new version of an application or service into an existing environment by slowly exposing it to a limited group of users.

In essence, a canary release follows three steps:

  • Deploy a new version.
  • Syphon off a specific portion of traffic and direct it to the new version.
  • Establish correctness of the new version and redirect all traffic.

In this series of articles, we will dive into all the steps necessary to get started with canary releasing in a typical Jenkins CI/CD pipeline:

  1. Set up a Jenkins instance on DC/OS to talk to Vamp.
  2. Whip up a simple test application and push it to GitHub.
  3. Wrap your app in a Docker container.
  4. Create a Vamp "base" blueprint.
  5. Build a script to call the correct Vamp REST endpoints.
  6. Run build jobs to iterate on our app and release them "à la canary."
  7. Perform complex, multi-version deployments.

What Is Vamp?

Vamp is an open source, self-hosted platform for managing (micro)service-oriented architectures that rely on container technology. Vamp provides a DSL to describe services, their dependencies and required runtime environments in blueprints.

Vamp takes care of route updates, metrics collection and service discovery, so you can easily orchestrate complex deployment patterns, such as A/B testing and canary releases.
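To make that concrete, here is a rough, illustrative sketch of the kind of API call used to shift a slice of traffic to a new version by updating route weights on a Vamp gateway. The gateway name, port, and route paths below are assumptions (they depend on how your blueprint and deployment are named); we set up the actual blueprint later in this article and dig into weight-based routing in part 2.

# Illustrative sketch only: move 10% of traffic to a new version by updating
# route weights on a Vamp gateway. Replace host, port, gateway, and route
# names with the ones your own deployment exposes in the Vamp UI or API.
curl -X PUT http://<vamp-host>:<vamp-port>/api/v1/gateways/simpleservice/simpleservice/web \
  -H 'Content-Type: application/x-yaml' \
  --data-binary @- <<'EOF'
name: simpleservice/simpleservice/web
port: 9050/http
routes:
  simpleservice/simpleservice/simpleservice:1.0.0/web:
    weight: 90%
  simpleservice/simpleservice/simpleservice:1.1.0/web:
    weight: 10%
EOF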

Prerequisites

To get started, you need to have a DC/OS cluster. There are many ways to go about this; any setup that gives you a working cluster will do.

Next up, install Vamp on DC/OS using the available Universe package or follow our installation guides for a manual setup. At the end, you should have Vamp running as a DC/OS service and you can access the Vamp UI by clicking on the link in the service overview.
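If you use the DC/OS CLI, installing from the Universe is a one-liner. A minimal sketch, assuming the CLI is installed and authenticated against your cluster and that the package is published as vamp in your catalog:

# Confirm the package name first, then install Vamp from the Universe.
dcos package search vamp
dcos package install vamp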


Finally, install Jenkins, also available as a Universe package, and pin it to a specific host so we don't have to worry about shared storage.
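The CLI route works here too. A sketch, assuming an options file that pins Jenkins to a single agent; the "pinned-hostname" key is an assumption, so verify the exact option names against the Jenkins package's configuration before using it:

# Sketch only: pin Jenkins to one agent so its workspace stays on one host.
# The option key below may differ between Jenkins package versions.
cat > jenkins-options.json <<'EOF'
{
  "jenkins": {
    "pinned-hostname": "10.0.0.50"
  }
}
EOF
dcos package install jenkins --options=jenkins-options.json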


Building an Application With Jenkins

For demo purposes, we will use the (extremely) simple https://github.com/magneticio/simpleservice, a one-page Node.js "app" that allows us to explore multiple deployment scenarios.

  • Fork the repo https://github.com/magneticio/simpleservice into your own account and then clone a local copy to your machine.
  • Install the Jenkins NodeJS and docker-build-step plugins from the Manage Jenkins tab. After installing, your plugins tab should have the corresponding entries.
  • Configure both plugins so the latest Node.js and Docker versions are installed.
  • As we will be pushing Docker containers, add your credentials for your Docker Hub repo on the Credentials tab, accessible from the dashboard. Give the credential set a descriptive ID, like “docker-hub-login.”

The next step is to create a Jenkins Pipeline that will execute all our build, test, and deploy steps. Jenkins pipelines allow you to script the discrete steps of your CI/CD pipeline, commit them to a Jenkinsfile, and store that file in your Git repo. Let’s start:

Create a new Pipeline project from the dashboard and call it "simple service pipeline."

Set the Pipeline Definition to "Pipeline script" and paste in the following Groovy script. Be careful to replace the variables gitRepo, dockerHub, dockerHubCreds, dockerRepo, and dockerImageName with your own settings.

#!groovy
node {
    def nodeHome = tool name: '8.3.0', type: 'jenkins.plugins.nodejs.tools.NodeJSInstallation'
    env.PATH = "${nodeHome}/bin:${env.PATH}"

    // !! Replace these with your own settings !!
    def gitRepo = 'https://github.com/magneticio/simpleservice/'
    def dockerHub = 'https://registry.hub.docker.com'
    def dockerHubCreds = 'docker-hub-login'
    def dockerRepo = 'magneticio'
    def dockerImageName = 'simpleservice'

    def appVersion
    def dockerImage

    currentBuild.result = "SUCCESS"

    try {
        stage('Checkout'){
            git url: gitRepo, branch: 'master'
            appVersion = sh (
                script: 'node -p -e "require(\'./package.json\').version"', 
                returnStdout: true
            ).trim()
        }
        stage('Install') {
            sh 'npm install'
        }
        stage('Test') {
            sh 'npm test'
        }
        stage('Build Docker image') {
            dockerImage = docker.build("${dockerRepo}/${dockerImageName}")
        }
        stage('Push Docker image') {
            docker.withRegistry(dockerHub, dockerHubCreds) {
                dockerImage.push(appVersion)
                dockerImage.push("latest")
            }
        }
        stage('Cleanup') {
            sh 'rm node_modules -rf'
        }
    }
    catch (err) {
        currentBuild.result = "FAILURE"
        throw err
    }
}

https://gist.github.com/tnolet/3f4a3cfe82d883980e787746ba51045d#file-jenkins_pipeline1-groovy

The above script defines all the stages in our pipeline, from checkout, via install and test, to building and pushing the resulting Docker image and performing some cleanup.

Important to notice here is that we tag our Docker image with the value of version in package.json. We store that value in appVersion and use it to push our Docker image. This will become important later, when we need to devise a versioning strategy for canary releasing new versions onto existing versions.
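In practice, this means bumping the version in package.json before each release you want to canary. A small sketch of that flow, assuming you let npm manage the version field:

# Bump the version; in a clean git repo, npm version also creates a commit and tag.
npm version patch                      # e.g. 1.0.0 -> 1.0.1
git push origin master --follow-tags

# The pipeline reads exactly this value to tag the Docker image:
node -p -e "require('./package.json').version"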

You should now be all set to trigger a first build by clicking the Build Now button. If everything goes well, your pipeline's dashboard will light up green and the console output of the build job should be similar to the shortened version listed below.


[Pipeline] {
[Pipeline] tool
Unpacking https://nodejs.org/dist/v8.3.0/node-v8.3.0-linux-x64.tar.gz
[Pipeline] { (Checkout)
[Pipeline] git
Cloning the remote Git repository
...
[Pipeline] { (Install)
+ npm install
added 256 packages in 9.27s
...
[Pipeline] { (Test)
+ npm test
1 tests complete
Test duration: 290 ms
...
[Pipeline] { (Build Docker image)
+ docker build -t magneticio/simpleservice .
Successfully built d602da61bbe7
...
[Pipeline] { (Push Docker image)
+ docker push registry.hub.docker.com/magneticio/simpleservice:e899d4
+ docker push registry.hub.docker.com/magneticio/simpleservice:latest
[Pipeline] { (Cleanup)
+ rm node_modules -rf
[Pipeline] End of Pipeline
Finished: SUCCESS

Deploying to Vamp

Great! We are now building containers based on our source code and pushing them to Docker Hub. Let's start deploying our freshly baked Docker container to our Vamp instance.

To do that, we need to get Jenkins talking to Vamp, which means we need the Vamp service endpoint on the DC/OS network. Grab it from the service configuration tab. In my case, this is 10.20.0.100:8080.
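Before wiring this into the pipeline, it is worth checking that the endpoint actually answers. A quick sanity check from inside the cluster (for example, from the node running Jenkins); the /api/v1/info path is an assumption based on recent Vamp versions:

# Should return a JSON blob describing the running Vamp instance.
curl http://10.20.0.100:8080/api/v1/info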


Using this endpoint, we can talk to Vamp from inside the DC/OS cluster without having to provide any credentials. That is exactly what we do in the additional stage below, added to our pipeline script (substitute your own endpoint for the vampHost value).

def vampDeploymentName = 'simpleservice'
def vampHost = 'http://10.0.1.134:3232'

stage('Deploy') {
    sh "curl -X POST --data-binary @vamp_blueprint.yml ${vampHost}/api/v1/blueprints -H 'Content-Type: application/x-yaml'"
    sh "curl -X PUT --data-binary @vamp_blueprint.yml ${vampHost}/api/v1/deployments/${vampDeploymentName} -H 'Content-Type: application/x-yaml'"
}   

https://gist.github.com/tnolet/8795f5e3ce86c1a3fb4275259650a96b#file-jenkins_pipeline_delta1-2-groovy

This addition does four things:

  1. Sets the vampHost and vampDeploymentName variables.
  2. Adds the Deploy stage to the set of stages.
  3. POSTs the vamp_blueprint.yml blueprint file from our repo to Vamp. This blueprint is the initial starting point for our application. Read more about what blueprints are and how they work.
  4. Creates a deployment based on this blueprint using a PUT request.

Update the pipeline script with the new stage (here is a link to the full version of the new pipeline script) and run the build again. You should find a newly added blueprint and deployment in your Vamp UI.
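If you prefer the command line over the UI, you can confirm the same thing with the endpoints the pipeline just called, now using GET (substitute your own Vamp endpoint):

# List all blueprints known to Vamp, then fetch the deployment the PUT created.
curl http://10.20.0.100:8080/api/v1/blueprints
curl http://10.20.0.100:8080/api/v1/deployments/simpleservice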

Also, you should find an additional stage added to the Stage view dashboard, giving you a nice, semi-real-time overview of all stages in the full CI/CD pipeline.


In the Vamp UI, go to the simpleservice gateway marked as "external" (Gateways → simpleservice/simpleservice/web) and click on the host/port link. This should pull out a sidebox that shows the output of port 3000 of our app: a single, blue HTML page with a short message and the current version number.
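You can also check this from a terminal: listing the gateways through the REST API should show the same external gateway, and curling its host/port should return the same page (the gateway host and port are whatever Vamp reports for your deployment):

# List gateways; the external one for our service exposes a host:port you can curl.
curl http://10.20.0.100:8080/api/v1/gateways
curl http://<gateway-host>:<gateway-port>/     # returns the blue page with the version number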


Wrap Up

Congratulations! You've worked through all of the tedious installation and setup stuff and are now ready to really start fleshing out this CI/CD pipeline and explore some of the rather cool functions Vamp offers in conjunction with DC/OS and Jenkins in part 2 of this series, coming soon. Proceed to part 2.


Published at DZone with permission of Tim Nolet. See the original article here.
