Continuous Delivery With Jenkins - Part 3
This is Part 3 of a new series of blog posts about Continuous Delivery with Jenkins and the new Pipeline feature. Follow along in the coming weeks to learn more!
Continuously Deliver Using Pipeline
The Pipeline feature addresses requirements raised in the prerequisites section. The pipeline is defined through a new job type called Pipeline. The flow definition is captured in a Groovy script, thus adding control flow capabilities such as loops, forks and retries. Users can define pipelines that involve multiple stages and stages can throttle concurrency. Users have access to standard Jenkins project concepts such as source code, artifacts, publishers, etc. Users can explicitly allocate slave nodes and workspaces any place within the flow definition. The execution can be paused for human input, can survive restarts of Jenkins masters or slaves and can restart from a user-defined checkpoint.
Key Pipeline Concepts
Pipeline Job Type
There is just one job to capture the entire software delivery pipeline in an organization. Of course, you can still connect two Pipeline jobs together if you want. The Pipeline job type uses a Groovy-based DSL for job definitions, which makes it possible to define jobs programmatically:
Image 1: Sample script.
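The sample script in the image is not reproduced here, but a minimal scripted pipeline of this style might look like the following sketch. The repository URL, build command and archive path are illustrative placeholders, not taken from the original image:

```groovy
// Minimal scripted Pipeline sketch (hypothetical project details)
node('linux') {
    // Check out source from version control
    git url: 'https://github.com/example/app.git'
    // Compile and run unit tests
    sh 'mvn -B clean package'
    // Keep the build output so later stages can reuse it
    archive 'target/*.jar'
}
```

Because the definition is plain Groovy, loops, conditionals and helper functions can all be used around these steps.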
Intra-organizational (or conceptual) boundaries are captured through a primitive called “stages.” A deployment pipeline consists of various stages, where each subsequent stage builds on the previous one. The idea is to spend as few resources as possible early in the pipeline and find obvious issues, rather than spend a lot of computing resources for something that is ultimately discovered to be broken.
Image 2: Two-stage pipeline.
Consider a simple pipeline with three stages: Build, Selenium Test and Deploy. A naive implementation of this pipeline would sequentially trigger each stage on every commit, so the deployment step is triggered immediately after the Selenium test steps complete. However, this would mean that the deployment from commit two could override the deployment still in motion from commit one. The right approach is for commits two and three to wait for the deployment from commit one to complete, consolidate all the changes that have happened since commit one and then trigger a single deployment. If there is an issue, developers can easily determine whether it was introduced in commit two or commit three. Jenkins Pipeline provides this functionality by enhancing the stage primitive. For example, a stage can have a concurrency level of one to indicate that at any point only one build should be running through that stage. This achieves the desired state: deployments run as fast as they safely can, but no faster.
Image 3: Pipeline execution.
Image 4: Three stage pipeline with throttled staging.
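In the scripted syntax of this era, throttling a stage is a matter of passing a concurrency parameter to the stage step. A sketch of the Deploy stage described above (the deploy script name is hypothetical):

```groovy
// Earlier stages (Build, Selenium Test) run for every commit.
// At most one build may be inside the Deploy stage at a time;
// newer builds waiting to enter supersede older waiting ones.
stage name: 'Deploy', concurrency: 1
node {
    // Restore the artifacts archived by the Build stage
    unarchive mapping: ['target/': '.']
    sh './deploy.sh staging'   // hypothetical deployment script
}
```

With concurrency set to one, commits two and three queue behind commit one's deployment and are consolidated, exactly as described above.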
Gates and Approvals
Continuous delivery means keeping binaries in a release-ready state, whereas continuous deployment means automatically pushing those binaries all the way to production. Although continuous deployment is a sexy term and a desired state, in reality organizations still want a human to give the final approval before bits are pushed to production. This is captured through the "input" primitive in Pipeline: the input step can wait indefinitely for a human to intervene.
Image 5: Waiting on human approval
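A gate of this kind can be expressed in a couple of lines. The stage name and submitter group below are illustrative:

```groovy
stage 'Approve'
// Pause until someone in the release-managers group responds.
// The pipeline waits indefinitely and survives master restarts.
input message: 'Deploy to production?', submitter: 'release-managers'
```

Once a user approves, execution continues into the deployment steps; rejecting the input aborts the build.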
Deployment of Artifacts to Staging/Production
Deployment of binaries is the last mile in a pipeline. The sheer variety of servers employed within organizations and available in the market makes it difficult to provide a uniform deployment step. Today, this gap is filled by third-party deployer products whose job is to deploy a particular stack to a data center. Teams can also write their own extensions that hook into the Pipeline job type to make deployment easier.
Meanwhile, job creators can write a plain old Groovy function to define any custom steps that can deploy (or undeploy) artifacts from production.
Image 6: Functions to perform deploys and undeploys.
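The functions in the image are not reproduced here, but the idea can be sketched as plain Groovy helpers wrapping shell steps. Everything below (scp/ssh as the transport, the Tomcat paths, the application name) is a hypothetical example of the pattern, not the original code:

```groovy
// Hypothetical deploy/undeploy helpers built from ordinary shell steps
def deploy(app, version, target) {
    sh "scp ${app}-${version}.war ${target}:/opt/tomcat/webapps/${app}.war"
    sh "ssh ${target} service tomcat restart"
}

def undeploy(app, target) {
    sh "ssh ${target} rm /opt/tomcat/webapps/${app}.war"
    sh "ssh ${target} service tomcat restart"
}

node {
    deploy('petclinic', env.BUILD_NUMBER, 'prod-server-1')
}
```

Because these are ordinary Groovy functions, they can be reused across stages or parameterized per environment.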
CloudBees Jenkins Platform Pipeline Complements
The Pipeline engine is available in open source Jenkins, and organizations can create entire pipelines with it. CloudBees Jenkins Platform, a product offered by CloudBees, introduces enterprise features on top of open source Jenkins.
Enterprises that desire enhanced security can use the Role-based Access Control plugin from CloudBees. Typically, enterprise teams sandbox themselves with the Folders plugin and tie access control rights to a particular folder. For a typical deployment-to-production stage, organizations prefer to lock down the slave to a particular operations team, because the slave holds security credentials for production environments. This is achieved through the Folders Plus plugin from CloudBees.
Image 7: Configuring roles.
Visualizing the Pipeline
Build managers want a programmatic approach to pipeline definitions, which is exactly what Pipeline provides, but visualization matters just as much. Visualization helps answer questions ranging from the simple, "What changesets are in the pipeline?", to the more complex, "Which stage in my pipeline is dragging my delivery down?" Open source Jenkins comes with a simple table of executed steps, whereas the Pipeline Stage View included with CloudBees Jenkins Platform provides a more intuitive visualization.
The Pipeline Stage View helps developers see how far their changes went within the pipeline. Build managers/managers can see how each pipeline stage performs and even compare it to historical performance. The net result is companies can easily get to the heart of issues encountered within their delivery pipelines.
The Pipeline Stage View also makes it easier to view logs by stages, restart pipelines from checkpointed locations and view and restart pipelines paused for human input.
Image 8: Pipeline Stage View.
Surviving Restarts and Intra-Job Resumption with Checkpoints
Today, users can use the long-running build job type within the Long-Running Build plugin to survive master failures; with any other job type, jobs need to be restarted after a master failure. For regular jobs, the Restart Aborted Builds plugin, also included with CloudBees Jenkins Platform, provides a list of builds that were running when the master failed so that an administrator can quickly restart them. However, these options aren't enough to survive slave failures.
The Pipeline implementation persists the state of the flow. This allows a job to survive master and slave failures. With the checkpoint (Restartable Pipeline) step, available within CloudBees Jenkins Platform, users can tag checkpoints in a flow and teams can start the build from any previously checkpointed location.
Thus, enterprises can freely build multi-day flows without worrying about the impact on delivery schedules in the case of a master or slave outage.
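A sketch of how the checkpoint step fits into a multi-day flow follows. The test and deploy scripts are hypothetical; note that the checkpoint step is called outside any node block, between the work it protects and the work that can be replayed:

```groovy
node {
    // A long-running step, e.g. a multi-hour integration test suite
    sh './run-long-integration-tests.sh'
}
// Record a restartable point (CloudBees Jenkins Platform feature).
// If anything after this fails, the flow can be restarted from here
// without rerunning the tests above.
checkpoint 'After integration tests'
node {
    sh './deploy.sh production'
}
```

Restarting from the checkpoint reuses the state persisted at that point rather than re-executing the earlier stages.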
Enterprises typically split up masters across departments for manageability or resiliency [8]. CloudBees Jenkins Operations Center enables operating Jenkins at this scale: it makes it possible to reference and trigger jobs across masters, so enterprises can enable scenarios such as triggering pipelines that span different organizations.
Image 9: Enabling Operation of Jenkins at Scale - Managing Jenkins Cross-Departments.
[8] Building resilient Jenkins architectures: http://pages.cloudbees.com/rs/cloudbees/images/Jenkins-Reference-Architecture.pdf
[10] Feature available in Q1 2015
Published at DZone with permission of Hannah Inman , DZone MVB. See the original article here.