The recently released Jenkins 2.0 focuses on solving the problems of organizations that want to continuously deliver software. This blog gives a quick overview of the key benefits of Jenkins Pipelines.
Why Continuous Delivery (CD)?
There is a ton of documentation on the internet that outlines the benefits of CD, so I am going to skip regurgitating it. Suffice it to say that if you are looking to accelerate the pace of software delivery, keeping your code in an always-shippable state is the way to do so, and CD is the way to keep your code always shippable.
The Iron Triangle of software delivery made the case that you can only choose two among good, fast and cheap. CD makes the case that you don't have to choose: you can deliver on all three (good, fast, cheap) and, in fact, continuously improve along each of the three axes.
Software goes through various phases (build, test, deploy) and interacts with a multitude of tools (JUnit, Sonar, Nexus, etc.) on its way to production. Jenkins is the orchestrator that drives the entire pipeline and interacts with each of these tools. Jenkins' strength is its ecosystem of over 1,200 plugins that let you interact with virtually any tool in your organization.
Jenkins 2.0 has introduced a key area called Pipelines. With Pipelines, organizations can define their delivery pipeline through a DSL (Pipeline-as-code). Pipelines, thus, can be versioned, checked into source and easily shared within an organization.
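As a taste of Pipeline-as-code, here is a minimal sketch of a Jenkinsfile using the Scripted Pipeline syntax. The repository URL, Maven commands and deploy script are illustrative assumptions, not part of any particular project:

```groovy
// Jenkinsfile (Scripted Pipeline) - repo URL, commands and scripts are illustrative
node {
    stage('Build') {
        git 'https://github.com/example/app.git'   // hypothetical repository
        sh 'mvn -B clean package'
    }
    stage('Test') {
        sh 'mvn -B test'
        junit '**/target/surefire-reports/*.xml'   // publish test results
    }
    stage('Deploy') {
        sh './deploy.sh staging'                   // hypothetical deploy script
    }
}
```

Because this file lives alongside the application code, the pipeline definition is versioned and reviewed just like any other change.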
Pipelines model your delivery process (value stream) rather than straitjacketing you into an "opinionated" process
Delivery value streams within organizations have one commonality: atypicality. Most real-world delivery processes look nothing like the canonical "build, test, deploy" flow you see in examples. Here is an example of a delivery pipeline from an earlier blog by Viktor Farcic.
The Pipeline DSL helps you capture complex process requirements through code: you can try-catch on deployment failures, loop through deployments and run tests in parallel. It brings the power of a programming language (Groovy) to bear. At the same time, the DSL is simple enough to express straightforward cases without touching Groovy code. You can capture common patterns in functions and keep them in a global library, so new applications can build on these functions rather than reinventing them.
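The capabilities above can be sketched in a short Scripted Pipeline fragment. The test commands, deploy script and rollback script are hypothetical placeholders:

```groovy
// Illustrative Scripted Pipeline: parallel test branches and a guarded deployment
node {
    stage('Test') {
        // Run two test suites concurrently
        parallel(
            'unit':        { sh 'mvn -B test' },
            'integration': { sh 'mvn -B verify -Pintegration' }  // hypothetical profile
        )
    }
    stage('Deploy') {
        try {
            sh './deploy.sh production'            // hypothetical deploy script
        } catch (err) {
            // On failure, roll back and re-throw so the build is marked failed
            echo "Deployment failed: ${err}"
            sh './rollback.sh production'          // hypothetical rollback script
            throw err
        }
    }
}
```

A recurring pattern like the guarded deploy could be factored into a function in a shared global library and reused across applications.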
Pipelines continue on despite infrastructure failures
A number of applications have pipelines that run for multiple days. A pipeline typically runs on Jenkins agents (formerly known as slaves) and can continue running even across a master failure. Imagine the productivity benefit: if the master crashed (on its own or due to an infrastructure failure) on day 4 of a 7-day run and the pipeline didn't have to restart from day 1 - phenomenal.
If you use the CloudBees Jenkins Platform, you can checkpoint your pipelines, so that if something fails mid-run you can resume from the last checkpoint instead of re-running the entire pipeline.
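A sketch of how this might look with the CloudBees-only checkpoint step; the stage contents are illustrative, and note that a checkpoint must be recorded outside any node block:

```groovy
// Illustrative pipeline with a CloudBees checkpoint between test and deploy
node {
    stage('Test') {
        sh 'mvn -B test'
    }
}
// Record a safe point outside the node allocation; a restarted run can resume here
checkpoint 'After tests'
node {
    stage('Deploy') {
        sh './deploy.sh production'   // hypothetical deploy script
    }
}
```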
Analyse and optimise your value stream process
Optimising the value stream is the next logical step after modeling your delivery process. The Pipeline Stage View helps you analyse the delivery process across multiple runs. You can see which stages consume the most time, which stages are blocked on manual user input and so on. Thus, you can quickly home in on a problematic phase and optimise it.
Developers can quickly get insight into how far their code has progressed through the pipeline. Teams waiting on an artifact can also see where in the pipeline the code is.
Jenkins Pipeline brings native support for pipelines to Jenkins. It is aimed at an audience that wants to continuously deliver software and is well worth a spin.