
Creating Jenkins Configuration as Code and Applying Changes Without Downtime


This tutorial shows how anything in Jenkins can be configured as code so changes can be applied right in a Jenkins job with no downtime.


This blog post demonstrates how anything in Jenkins can be configured as code through its Java API using Groovy, and how changes can be applied right inside a Jenkins job. In particular, I will demonstrate how to configure the Kubernetes plugin and credentials, but the same approach can be used to configure any Jenkins plugin you are interested in. We will also look at how to create a custom config that can be applied either to all Jenkins instances or only to specific ones, so you can set up different instances differently based on security policies or any other criteria.

The Why

Recently, I have been working on a task to improve deployment of our master Jenkins instances on Kubernetes.

One of the requirements was to improve speed: we have more than 40 Jenkins masters running in different environments (test, dev, pre-prod, perf, and prod), deployed on a Kubernetes cluster in AWS. The deployment job took around an hour, involved downtime, and required multiple steps.

The process of delivering a new version of Jenkins was as follows:

  1. Raise a PR with the plugin config changes baked into the Docker image: adding a new slave to the Kubernetes cluster, updating a slave image version, increasing the container cap, adding a new environment variable or Vault secret, adding new credentials, updating the script approval with a new method, etc.
  2. After a thorough code review by fellow DevOps engineers, the PR would be merged and a nightly deployment scheduled.
  3. Finally, the deployment job would deploy the latest image to all masters included in the config file.

Issues With This Approach

One of the issues was that active jobs could still be running on an instance at the time of deployment. Since an update meant deleting the master's Kubernetes deployment and rolling out a new image, all jobs on that instance were simply lost. This was particularly disruptive for the performance environment and for some long-running end-to-end test jobs, which would normally run overnight. Daily deployment was simply impossible, given hundreds of slaves running builds all day long.

The deployment job also ran sequentially, so if a deployment failed, you would only find out at the end of the build from the generated report, and would then have to rerun the deployment for that particular failed instance.

Even after the deployment job had been improved - deploying instances in parallel, adding a check to wait until no active jobs were running on an instance, running a health check, retrying, etc. - the main issue was still there: every subtle change required a new image and a long process, downtime was inevitable, and you couldn't update Jenkins promptly.
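For illustration, those improvements could be sketched as a scripted pipeline along the following lines; `deployMaster`, `jenkinsIsIdle`, and `healthCheck` are hypothetical helpers standing in for our internal steps, not real code from the job:

```groovy
// Hypothetical sketch of the improved deployment job.
// deployMaster, jenkinsIsIdle, and healthCheck are placeholder helpers.
def masters = ['john-snow-jenkins', 'arya-jenkins', 'sansa-jenkins']

def branches = [:]
masters.each { master ->
    branches[master] = {
        node {
            // wait until no builds are running on the target instance
            waitUntil { jenkinsIsIdle(master) }
            retry(3) {
                deployMaster(master)   // delete the deployment, roll out the new image
                healthCheck(master)    // fail fast so the retry kicks in
            }
        }
    }
}
parallel branches   // deploy all masters concurrently instead of sequentially
```

Even with all of this in place, the underlying image-per-change workflow remained the bottleneck.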

This, in turn, meant urgent changes were applied manually through the UI, so the config in the Docker image and the state of the running instance could easily diverge or conflict.

The What

It seemed a different approach was required, one providing the following features:

  1. Being able to update any Jenkins master or slave immediately - no new image, no redeploy, no downtime.
  2. No manual changes through the UI - everything is kept as code.
  3. Jenkins's current state and the state of the image and config are kept in sync.
  4. Any change could be tested immediately without the vicious cycle: create a new image, deploy, test, and if it fails - repeat!
  5. Creating a configuration that could be applied to specific environments only (prod vs test/dev Jenkins), with the inheritance of common and custom config per Jenkins config.

If you have never Dockerized or run Jenkins as a container before, or you have never administered Jenkins in general, it is worth reading my blog posts dedicated to Jenkins:

Part 1 shows how to Dockerize Jenkins and covers its administration and configuration in detail, so you will be better prepared for this post, which requires some familiarity with running Jenkins inside a container and administering it.

The How: Enter Config as Code

Jenkins and its plugins are written in Java, meaning any public API is accessible for modification. But because Jenkins is a decade-old product, its architecture dates from a time when applications were configured mainly through the UI, with every modification reflected in internal XML config files for persistence. So when you Dockerize Jenkins, you normally create the config through reverse engineering: make the change through the UI, then bake the generated XML into your Docker image. Even though this may seem easy, for the reasons given above it is not the best practice - it requires many steps, and downtime.

So let's see what steps we need to achieve our goal of fully automated, programmatic, no-downtime updates:

  1. Familiarize ourselves with the Java API of the plugin we are interested in.
  2. Write Groovy code to interact with the Java API of the ScriptApproval, Kubernetes, and Credentials plugins.
  3. Create a flexible common config which can be applied to all or specific instances with minimal duplication.
  4. Create a way to check out externalized source code and config and apply it during container instantiation.
  5. Apply source code or config changes to Jenkins when required, with a Jenkins job.

I will start with a very simple plugin called ScriptApproval, just so you see how the concept of configuring plugins works, then we will look at a more advanced Kubernetes config, and finally, to see how to segregate config per Jenkins instance, we will look at how to configure the Credentials plugin.

Configuring the ScriptApproval Plugin

So, given I have the following config:

scriptApproval {
    approvedSignatures = [
        'method groovy.util.ConfigSlurper parse java.lang.String',
        'staticMethod java.lang.System getenv',
        'method org.jenkinsci.plugins.scriptsecurity.scripts.ScriptApproval approveSignature java.lang.String',
        'staticMethod org.jenkinsci.plugins.scriptsecurity.scripts.ScriptApproval get'
    ]
}

I can run the following code in the Script Console:

import org.jenkinsci.plugins.scriptsecurity.scripts.*

ScriptApproval script = ScriptApproval.get()

ConfigObject conf = new ConfigSlurper().parse(new File(System.getenv("JENKINS_HOME") + '/jenkins_config/scriptApproval.txt').text)

conf.scriptApproval.approvedSignatures.each{ approvedSignature ->
  println("checking for new signature ${approvedSignature}")

  def found = script.approvedSignatures.find { it == approvedSignature }

  if (!found){
    println("Approving signature ${approvedSignature}")
    script.approveSignature(approvedSignature)
  }

}

After running it, you should see console output confirming that each new signature has been checked and approved.

We use Groovy's ConfigSlurper, which reads the config file from the given path and returns a ConfigObject (a map) we can navigate to retrieve what we need.

Then we make API calls to set up the ScriptApproval plugin; its API can be found in the plugin's source. This way, you can configure any plugin you want. Now let's move on.
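As a minimal, self-contained illustration of that flow (with the config inlined rather than read from a file, purely for demonstration):

```groovy
// ConfigSlurper turns the config text into a navigable ConfigObject (a Map).
def text = '''
scriptApproval {
    approvedSignatures = [
        'staticMethod java.lang.System getenv'
    ]
}
'''
ConfigObject conf = new ConfigSlurper().parse(text)

// Navigate the map exactly as the real script does.
conf.scriptApproval.approvedSignatures.each { signature ->
    println "would approve: ${signature}"
}
```

The real script simply swaps the inlined string for `new File(...).text` and replaces the `println` with the `ScriptApproval.approveSignature` call.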

1. Familiarize Ourselves With the Java API of the Plugin We Are Interested In

The first thing is to make sure we check out the right branch, as different versions may not be backward compatible.

Let's say I am running some instance of Jenkins and want to find the version of the plugin:

➜  ~ curl -s  -u kayan:kayan "http://localhost:8080/pluginManager/api/json?depth=1" |\
 jq -r '.plugins[] | select (.shortName == "kubernetes") | .version'

0.11

Now we need to clone the code from GitHub to study its API and how it works:

git clone https://github.com/jenkinsci/kubernetes-plugin

As the currently installed version of the plugin is 0.11, we need to check out that very version so our interaction is in line with the installed plugin. But first I need to find the tag:

git tag -l | grep 11

kubernetes-0.11

And then:

git checkout kubernetes-0.11

Note: checking out 'kubernetes-0.11'.

If I do some stats on the code:

➜  kubernetes-plugin git:(c8e3642) stats src java
calculating...
total lines: 5707

It may at first seem crazy to go through almost six thousand lines of Java code, but in fact all we need is to find out how the main components of the plugin are constructed, so we can mimic that in our Groovy script.

Now, if you look at the plugin config in the Jenkins UI, you can see that all we need is to find the constructors and getters/setters of three components - KubernetesCloud, PodTemplate, and ContainerTemplate - plus some of their properties.

Once we have checked the API, we can set any object and its properties, then verify after running our Groovy script that the result looks the same as the original config set up from the UI.

2. Write Some Groovy Code to Interact With That Java API

So let's see what our kubernetes.groovy looks like:

import hudson.model.*
import jenkins.model.*
import org.csanchez.jenkins.plugins.kubernetes.*
import org.csanchez.jenkins.plugins.kubernetes.volumes.workspace.EmptyDirWorkspaceVolume
import org.csanchez.jenkins.plugins.kubernetes.volumes.HostPathVolume

//since kubernetes-1.0
//import org.csanchez.jenkins.plugins.kubernetes.model.KeyValueEnvVar
import org.csanchez.jenkins.plugins.kubernetes.PodEnvVar

//change after testing
ConfigObject conf = new ConfigSlurper().parse(new File(System.getenv("JENKINS_HOME") + '/jenkins_config/kubernetes.txt').text)

def kc
try {
    println("Configuring k8s")


    if (Jenkins.instance.clouds) {
        kc = Jenkins.instance.clouds.get(0)
        println "cloud found: ${Jenkins.instance.clouds}"
    } else {
        kc = new KubernetesCloud(conf.kubernetes.name)
        Jenkins.instance.clouds.add(kc)
        println "cloud added: ${Jenkins.instance.clouds}"
    }

    kc.setContainerCapStr(conf.kubernetes.containerCapStr)
    kc.setServerUrl(conf.kubernetes.serverUrl)
    kc.setSkipTlsVerify(conf.kubernetes.skipTlsVerify)
    kc.setNamespace(conf.kubernetes.namespace)
    kc.setJenkinsUrl(conf.kubernetes.jenkinsUrl)
    kc.setCredentialsId(conf.kubernetes.credentialsId)
    kc.setRetentionTimeout(conf.kubernetes.retentionTimeout)
    //since kubernetes-1.0
//    kc.setConnectTimeout(conf.kubernetes.connectTimeout)
    kc.setReadTimeout(conf.kubernetes.readTimeout)
    //since kubernetes-1.0
//    kc.setMaxRequestsPerHostStr(conf.kubernetes.maxRequestsPerHostStr)

    println "set templates"
    kc.templates.clear()

    conf.kubernetes.podTemplates.each { podTemplateConfig ->

        def podTemplate = new PodTemplate()
        podTemplate.setLabel(podTemplateConfig.label)
        podTemplate.setName(podTemplateConfig.name)

        if (podTemplateConfig.inheritFrom) podTemplate.setInheritFrom(podTemplateConfig.inheritFrom)
        if (podTemplateConfig.slaveConnectTimeout) podTemplate.setSlaveConnectTimeout(podTemplateConfig.slaveConnectTimeout)
        if (podTemplateConfig.idleMinutes) podTemplate.setIdleMinutes(podTemplateConfig.idleMinutes)
        if (podTemplateConfig.nodeSelector) podTemplate.setNodeSelector(podTemplateConfig.nodeSelector)
        //
        //since kubernetes-1.0
//        if (podTemplateConfig.nodeUsageMode) podTemplate.setNodeUsageMode(podTemplateConfig.nodeUsageMode)
        if (podTemplateConfig.customWorkspaceVolumeEnabled) podTemplate.setCustomWorkspaceVolumeEnabled(podTemplateConfig.customWorkspaceVolumeEnabled)

        if (podTemplateConfig.workspaceVolume) {
            if (podTemplateConfig.workspaceVolume.type == 'EmptyDirWorkspaceVolume') {
                podTemplate.setWorkspaceVolume(new EmptyDirWorkspaceVolume(podTemplateConfig.workspaceVolume.memory))
            }
        }

        if (podTemplateConfig.volumes) {
            def volumes = []
            podTemplateConfig.volumes.each { volume ->
                if (volume.type == 'HostPathVolume') {
                    volumes << new HostPathVolume(volume.hostPath, volume.mountPath)
                }
            }
            podTemplate.setVolumes(volumes)
        }

        if (podTemplateConfig.keyValueEnvVar) {
            def envVars = []
            podTemplateConfig.keyValueEnvVar.each { keyValueEnvVar ->
                //since kubernetes-1.0
//                envVars << new KeyValueEnvVar(keyValueEnvVar.key, keyValueEnvVar.value)
                envVars << new PodEnvVar(keyValueEnvVar.key, keyValueEnvVar.value)
            }
            podTemplate.setEnvVars(envVars)
        }


        if (podTemplateConfig.containerTemplate) {
            println "containerTemplate: ${podTemplateConfig.containerTemplate}"

            ContainerTemplate ct = new ContainerTemplate(
                    podTemplateConfig.containerTemplate.name ?: conf.kubernetes.containerTemplateDefaults.name,
                    podTemplateConfig.containerTemplate.image)

            ct.setAlwaysPullImage(podTemplateConfig.containerTemplate.alwaysPullImage ?: conf.kubernetes.containerTemplateDefaults.alwaysPullImage)
            ct.setPrivileged(podTemplateConfig.containerTemplate.privileged ?: conf.kubernetes.containerTemplateDefaults.privileged)
            ct.setTtyEnabled(podTemplateConfig.containerTemplate.ttyEnabled ?: conf.kubernetes.containerTemplateDefaults.ttyEnabled)
            ct.setWorkingDir(podTemplateConfig.containerTemplate.workingDir ?: conf.kubernetes.containerTemplateDefaults.workingDir)
            ct.setArgs(podTemplateConfig.containerTemplate.args ?: conf.kubernetes.containerTemplateDefaults.args)
            ct.setResourceRequestCpu(podTemplateConfig.containerTemplate.resourceRequestCpu ?: conf.kubernetes.containerTemplateDefaults.resourceRequestCpu)
            ct.setResourceLimitCpu(podTemplateConfig.containerTemplate.resourceLimitCpu ?: conf.kubernetes.containerTemplateDefaults.resourceLimitCpu)
            ct.setResourceRequestMemory(podTemplateConfig.containerTemplate.resourceRequestMemory ?: conf.kubernetes.containerTemplateDefaults.resourceRequestMemory)
            ct.setResourceLimitMemory(podTemplateConfig.containerTemplate.resourceLimitMemory ?: conf.kubernetes.containerTemplateDefaults.resourceLimitMemory)
            ct.setCommand(podTemplateConfig.containerTemplate.command ?: conf.kubernetes.containerTemplateDefaults.command)
            podTemplate.setContainers([ct])
        }

        println "adding ${podTemplateConfig.name}"
        kc.templates << podTemplate

    }

    println("Configuring k8s completed")
}
finally {
    //if we don't null kc, jenkins will try to serialise k8s objects and that will fail, so we won't see actual error
    kc = null
}

And here is the Kubernetes config file for the script:


kubernetes {
    name = 'Kubernetes'
    serverUrl = 'https://kingslanding.westeros.co.uk'
    skipTlsVerify = true
    namespace = 'kingslanding'
    jenkinsUrl = 'http://kingslanding-dev-jenkins.kingslanding.svc.cluster.local'
    credentialsId = 'VALYRIAN_STEEL_SECRET'
    containerCapStr = '500'
    retentionTimeout = 5
    connectTimeout = 0
    readTimeout = 0
    podTemplatesDefaults {
        instanceCap = 2147483647
    }
    containerTemplateDefaults {
        name = 'jnlp'
        alwaysPullImage = false
        ttyEnabled = true
        privileged = true
        workingDir = '/var/jenkins_home'
        args = '${computer.jnlpmac} ${computer.name} -jar-cache /var/jenkins_home/jars'
        resourceRequestCpu = '1000m'
        resourceLimitCpu = '2000m'
        resourceRequestMemory = '1Gi'
        resourceLimitMemory = '2Gi'
        command = ''
    }
    podTemplates = [
        [
            name: 'PARENT',
            idleMinutes: 0,
            nodeSelector: 'role=jenkins',
            nodeUsageMode: 'NORMAL',
            customWorkspaceVolumeEnabled: false,
            workspaceVolume: [
                type: 'EmptyDirWorkspaceVolume',
                memory: false,
            ],
            volumes: [
                [
                    type: 'HostPathVolume',
                    mountPath: '/jenkins/.m2/repository',
                    hostPath: '/jenkins/m2'
                ]
            ],
            keyValueEnvVar: [
                [
                    key: 'VAULT_TOKEN_ARTIFACTORY',
                    value: '{{with $secret := secret "secret/jenkins/artifactory" }}{{ $secret.Data.value }}{{end}}'
                ],
                [   key: 'VAULT_ADDR',
                    value: '{{env "VAULT_ADDR"}}'
                ],
                [   key: 'CONSUL_ADDR',
                    value: '{{env "CONSUL_ADDR"}}'
                ]
            ],
            podImagePullSecret: 'my-secret'
        ],
        [
            name: 'Java',
            label: 'java_slave',
            inheritFrom: 'PARENT',
            containerTemplate : [
                image: 'registry.host.domain/jenkins-slave-java',
                alwaysPullImage: true,
                resourceRequestCpu: '1000m',
                resourceRequestMemory: '1Gi',
                resourceLimitCpu: '2000m',
                resourceLimitMemory: '2Gi'
            ]

        ],
        [
            name: 'Go',
            label: 'go_slave',
            inheritFrom: 'PARENT',
            containerTemplate : [
                image: 'registry.host.domain/jenkins-slave-go',
                alwaysPullImage: true,
                resourceRequestCpu: '1000m',
                resourceRequestMemory: '1Gi',
                resourceLimitCpu: '2000m',
                resourceLimitMemory: '8Gi'
            ],
            volumes: [
                [
                    type: 'HostPathVolume',
                    mountPath: '/var/go/go_stuff',
                    hostPath: '/go/go_stuff'
                ],
                [
                    type: 'HostPathVolume',
                    mountPath: '/var/go/some_other_go_stuff',
                    hostPath: '/go/some_other_go_stuff'
                ]
             ]

        ]

    ]
}

Kubernetes Plugin Analysis

Now, if you already have the Kubernetes plugin set up, you can create your config by reverse engineering: go through the plugin's XML file and copy its properties into the config.

One thing worth mentioning: the script falls back to default values where possible, which reduces the number of lines in the config. For example, if you always use privileged containers, you can set that property once under containerTemplateDefaults, and the script will fall back to conf.kubernetes.containerTemplateDefaults.privileged whenever it is not explicitly configured.
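The fallback relies on Groovy's Elvis operator (`?:`); here is a minimal illustration with hypothetical map-based stand-ins for the real config:

```groovy
// Minimal illustration of the Elvis-operator fallback used in kubernetes.groovy.
// 'defaults' and 'podTemplateConfig' are hypothetical stand-ins for the config.
def defaults = [privileged: true, workingDir: '/var/jenkins_home']
def podTemplateConfig = [workingDir: '/home/jenkins']   // 'privileged' not set

def privileged = podTemplateConfig.privileged ?: defaults.privileged
def workingDir = podTemplateConfig.workingDir ?: defaults.workingDir

assert privileged == true              // fell back to the default
assert workingDir == '/home/jenkins'   // explicit value wins
```

One caveat of this pattern: Groovy treats `false`, `null`, and the empty string as falsy, so an explicitly configured `false` would also fall through to the default.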

You may have noticed that I commented out some lines in kubernetes.groovy, as the currently installed version of the plugin is not compatible with version 1.0, which we presume will be the next version we update to (the current latest version is much higher - this is just for reference). For example, PodEnvVar in v0.11 was replaced by KeyValueEnvVar in version 1.0. From a config perspective, though, they are the same: the config refers to an abstracted 'keyValueEnvVar' property, so when you switch to the new version your config doesn't need to change - all you do is change a couple of lines in your code and then test your config.

Another important point: if there is a bug in the actual plugin code, you can "fix" it by changing some logic in your config Groovy, which saves you from downgrading or waiting until it is fixed upstream (which happens a lot with Jenkins plugins, so I personally always keep a couple of versions to fall back on).

For example, if you check the changelog, you can see that there was a bug in version 0.10, 'Fix workingDir inheritance error #136', which was fixed in 0.11 in this PR.

But the point is that with our flexible config, we could have fixed it ourselves by applying the following logic:

String workingDir = Strings.isNullOrEmpty(template.getWorkingDir()) ?
 (Strings.isNullOrEmpty(parent.getWorkingDir()) ? DEFAULT_WORKING_DIR : parent.getWorkingDir()) :
 template.getWorkingDir();

right inside our config, as we have full control over how the plugin is configured!

We can also stay ahead of actual plugin enhancements. For example, the biggest enhancement in 0.10 was allowing a podTemplate to inherit from a base/parent podTemplate:
'Allow nesting of templates for inheritance. #94.' With our config, we could have done this even before that enhancement, as we have our own containerTemplateDefaults to refer to when config is common to multiple podTemplates.

3. Create a Flexible Common Config Which Can Be Applied to All or Specific Instances With Minimum Duplication

For the Credentials plugin, I currently implemented config support only for the simple UsernamePasswordCredentialsImpl credential type:

import hudson.model.*
import jenkins.model.*
import com.cloudbees.plugins.credentials.CredentialsScope
import com.cloudbees.plugins.credentials.domains.Domain
import com.cloudbees.plugins.credentials.impl.UsernamePasswordCredentialsImpl

def domain = Domain.global()
def store = Jenkins.instance.getExtensionList('com.cloudbees.plugins.credentials.SystemCredentialsProvider')[0].getStore()

def instance = System.getenv("JENKINS_INSTANCE_NAME").replaceAll('-','_')

ConfigObject conf = new ConfigSlurper().parse(new File(System.getenv("JENKINS_HOME")+'/jenkins_config/credentials.txt').text)

conf.common_credentials.each { key, credentials ->
    println("Adding common credential ${key}")
    store.addCredentials(domain, new UsernamePasswordCredentialsImpl(CredentialsScope.GLOBAL, key, credentials.description, credentials.username, credentials.password))
}


conf."${instance}_credentials".each { key, credentials ->
    println("Adding ${instance} credential ${key}")
    store.addCredentials(domain, new UsernamePasswordCredentialsImpl(CredentialsScope.GLOBAL, key, credentials.description, credentials.username, credentials.password))
}

println("Successfully configured credentials")

The config looks like below:

common_credentials {
    jenkins_service_user = [
        username: 'jenkins_service_user',
        password: '{{with $secret := secret "secret/jenkins/jenkins_service_user" }}{{ $secret.Data.value }}{{end}}',
        description :'for automated jenkins jobs'
    ]

    slack = [
        username: '{{with $secret := secret "secret/slack/user" }}{{ $secret.Data.value }}{{end}}',
        password: '{{with $secret := secret "secret/slack/pass" }}{{ $secret.Data.value }}{{end}}',
        description: 'slack credentials'
    ]
}

kayan_jenkins_credentials  {
    artifactory = [
            username: 'arti',
            password: '{{with $secret := secret "secret/jenkins/artifactory" }}{{ $secret.Data.artifactory_password }}{{end}}',
            description: 'Artifactory credentials'
    ]
}

As you can see, common_credentials is used for all Jenkins instances, but "kayan_jenkins," on top of the common ones, additionally gets kayan_jenkins_credentials. This is achieved thanks to dynamically resolved property names in Groovy.

Let's say you have an environment variable in your Jenkins called JENKINS_INSTANCE_NAME. Now, if you do:

def instance = System.getenv("JENKINS_INSTANCE_NAME").replaceAll('-','_')
conf."${instance}_credentials".each { key, credentials ->

the conf."${instance}_credentials" expression will only resolve to conf.kayan_jenkins_credentials when running inside a 'kayan_jenkins' instance; on any other instance it resolves to an empty ConfigObject, so the loop simply adds nothing.
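A minimal, self-contained sketch of this dynamic resolution, with the config inlined and the instance name hard-coded purely for illustration:

```groovy
// Dynamic property resolution on a ConfigObject, as used by credentials.groovy.
def conf = new ConfigSlurper().parse('''
common_credentials { svc = [username: 'svc'] }
kayan_jenkins_credentials { artifactory = [username: 'arti'] }
''')

// normally System.getenv("JENKINS_INSTANCE_NAME"), hard-coded here
def instance = 'kayan-jenkins'.replaceAll('-', '_')   // 'kayan_jenkins'

// resolves to kayan_jenkins_credentials on this instance...
assert conf."${instance}_credentials".artifactory.username == 'arti'

// ...and to an empty ConfigObject on an instance with no custom block,
// so iterating it with .each is a harmless no-op there
assert conf."tyrion_jenkins_credentials".isEmpty()
```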

Improved Common Config With Exclusions and Inclusions

This could be improved further as shown in this example config:

common_credentials {
    exclude{
        tyrion-jenkins
    }
    data{
        jenkins_service_user = [
             username: 'jenkins_service_user',
             password: '{{with $secret := secret "secret/jenkins/jenkins_service_user" }}{{ $secret.Data.value }}{{end}}',
             description :'for automated jenkins jobs'
         ]
         slack = [
             username: '{{with $secret := secret "secret/slack/user" }}{{ $secret.Data.value }}{{end}}',
             password: '{{with $secret := secret "secret/slack/pass" }}{{ $secret.Data.value }}{{end}}',
             description: 'slack credentials'
         ]
    }
}

custom_credentials  {
    include{
        john-snow-jenkins
        arya-jenkins
        sansa-jenkins
    }
    data{
        artifactory = [
                username: 'arti',
                password: '{{with $secret := secret "secret/jenkins/artifactory" }}{{ $secret.Data.artifactory_password }}{{end}}',
                description: 'Artifactory credentials'
        ]
    }
}

tyrion-jenkins_credentials  {
    data{
       nexus=[
                'username':'deployment',
                'password':'{{with $secret := secret "secret/jenkins/nexus" }}{{ $secret.Data.nexus_password }}{{end}}',
                'description':'Nexus credentials'
        ]

    }
}

We have common_credentials excluding the instances that should not receive some config, and custom_credentials sharing the same config across only the instances we explicitly include. Note that this eliminates the duplication required in the current version of the config, where you would need to copy-paste the same block for every custom Jenkins instance.
Yet you can still have config specific to only one instance. The code for this is not implemented as of the time of writing, but it should be fairly simple.
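Since the include/exclude code is not implemented yet, here is one possible sketch. Note it assumes the instance names are stored as string lists (e.g. exclude = ['tyrion-jenkins']) rather than the bare names shown above, since ConfigSlurper needs valid Groovy expressions:

```groovy
// One possible sketch of the unimplemented include/exclude logic.
// Assumes instance names are string lists, e.g. exclude = ['tyrion-jenkins'].
def conf = new ConfigSlurper().parse('''
common_credentials {
    exclude = ['tyrion-jenkins']
    data { slack = [username: 'slack-bot'] }
}
custom_credentials {
    include = ['john-snow-jenkins', 'arya-jenkins']
    data { artifactory = [username: 'arti'] }
}
''')

def instance = 'john-snow-jenkins'   // normally System.getenv("JENKINS_INSTANCE_NAME")
def applicable = [:]

conf.each { name, block ->
    def excluded = block.exclude && instance in block.exclude
    def notIncluded = block.include && !(instance in block.include)
    if (!excluded && !notIncluded) {
        applicable.putAll(block.data)   // this block applies to the instance
    }
}

// john-snow-jenkins gets both the common and the custom credentials
assert applicable.keySet() == ['slack', 'artifactory'] as Set
```

The collected `applicable` map would then be fed to the same `store.addCredentials` loop as before.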

4. Create a Way to Checkout Externalized Source Code and Config and Apply It During Container Instantiation

So how is our config actually applied to Jenkins? Jenkins loads Groovy scripts from init.groovy.d/ at startup, meaning our scripts need to be there before Jenkins starts.
Somewhere in your entrypoint.sh, you should have these lines:

#!/usr/bin/env bash

git clone ssh://git@your_scm_here/jenkins_config_as_code.git ${JENKINS_HOME}/jenkins_config
mv ${JENKINS_HOME}/jenkins_config/*.groovy ${JENKINS_HOME}/init.groovy.d/

consul-template \
  -consul-addr "$CONSUL_ADDR" \
  -vault-addr "$VAULT_ADDR" \
  -config "jenkins_config.hcl" \
  -once

We check out the code and config and call consul-template to populate secrets, as we can't keep them in the repo; then when Jenkins starts, it loads every script, and the scripts load the config and make the necessary API calls. The checkout obviously requires that a .ssh/id_rsa file exists before the call, so you need to ensure it is either set up by another consul-template call preceding the checkout, or use Kubernetes secrets and mount a volume with that file.

5. Apply Source Code or Config Changes When Required With a Jenkins Job

Our final goal is to be able to update Jenkins at any time - on the fly, as we want no downtime.

node {
    stage('checkout') {

        sh '''
git clone ssh://git@your_scm_here/jenkins_config_as_code.git ${JENKINS_HOME}/jenkins_config
mv ${JENKINS_HOME}/jenkins_config/*.groovy ${JENKINS_HOME}/init.groovy.d/
        '''
    }

    stage('run consul template'){
        sh '''
consul-template \
  -consul-addr "$CONSUL_ADDR" \
  -vault-addr "$VAULT_ADDR" \
  -config "jenkins_config.hcl" \
  -once        
        '''
    }

    stage('update credentials') {
        load("/var/jenkins_home/init.groovy.d/credentials.groovy")
    }

    stage('update k8s') {
        load("/var/jenkins_home/init.groovy.d/kubernetes.groovy")
    }

}

Once again, we check out the updated code/config, run consul-template, and in the final step call load so the Groovy code is executed. With this model in place, we could easily run additional test steps before applying the config to Jenkins; for example, we could spin up a test Jenkins container right from the job, apply the config to it, run some tests to check it is configured as expected, and only then apply the changes to our actual Jenkins. But even if we don't, should something go wrong (in consul-template, for example), the job will fail and we will see immediately what needs to change - as opposed to building a new image, deploying, finding out that it is actually broken, fixing the issue, rebuilding the image, deploying again... the vicious cycle!

All we need to do is run this job every time the config changes, or even trigger it automatically when a PR is merged into the master branch of config_repo:

And Jenkins will get updated!

Hardening Security

One thing worth mentioning about this last step: because the job exposes some Jenkins internals, it is a bit risky from a security perspective, so I would recommend configuring it with project-based security:

That will ensure only admins can see and run this job.

I hope you enjoyed reading this and found something that you were looking for. The git repo with all examples is available on my GitHub.

Happy coding!


Topics:
devops ,tutorial ,jenkins ,deployment ,configuration as code
