This article is a follow-up to The Need For Jenkins Pipeline post.
The Jenkins Build Flow Plugin, started by Nicolas De Loof, was a huge success and, as a result, Kohsuke decided to start fresh. At this moment, I imagine you, dear reader, rolling your eyes and wondering why someone would start over if a project is a success.
The Build Flow project began in 2012, and it was, from the very beginning, considered a proof of concept. The central idea behind the plugin was to define the CD pipeline through code. As a way to simplify coding, Nicolas created a Groovy-based DSL. The response from the community was very positive, and the plugin received broad adoption, demonstrating that its direction was correct and should be explored further. However, the plugin hit some technical limitations that prevented the community from developing it further. As a result, the decision was made to start fresh using the knowledge and experience obtained through the Build Flow Plugin.
The Jenkins Workflow Plugin was born and, later, renamed to the Pipeline Plugin. It maintains the core idea of the Build Flow Plugin, while the previous experience allowed the contributors to avoid some earlier mistakes and significantly improve the design.
Key Features of the Jenkins Pipeline Plugin
The principal characteristic of the Pipeline plugin is that the deployment flow is defined through code. The plugin is based on a Groovy DSL that is used to specify build steps. A whole flow that would typically require many “standard” Jenkins jobs chained together can be expressed as a single script. The Groovy-based DSL syntax allows us to combine the best of both worlds. Through the DSL, we get simplicity in defining commonly used tasks: access to an SCM repository, the definition of the nodes tasks should run on, parallel execution, and so on. On the other hand, since the plugin uses Groovy, almost any operation can be defined with relative ease. We can, finally, use conditionals, loops, variables, and so on. Since Groovy is an integral part of Jenkins, we can also use it to access almost any existing plugin, or even Jenkins core features. If you are not familiar with Groovy (and do not want to get to know it), in most cases you will be able to accomplish everything through the DSL alone. It can be considered a new language with a very simple syntax. The decision to create a new DSL was, indeed, a good one. Domain-specific languages have been around for a long time and have proved to be efficient at defining precise sets of tasks.
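To make this more concrete, here is a minimal sketch of what such a scripted flow can look like. The repository URL, node label, and shell commands are illustrative placeholders, not taken from any real project:

```groovy
// A sketch of a scripted Pipeline: DSL steps (node, git, sh, parallel)
// combined with plain Groovy constructs (if, variables).
node('linux') {                      // run on any agent labeled 'linux' (hypothetical label)
    stage 'Checkout'
    git url: 'https://github.com/example/app.git'   // hypothetical repository

    stage 'Build and Test'
    // Groovy gives us constructs like parallel execution out of the box
    parallel(
        unitTests: {
            sh 'mvn test'
        },
        staticAnalysis: {
            sh 'mvn checkstyle:check'
        }
    )

    stage 'Deploy'
    // Plain Groovy conditionals decide whether this run deploys
    if (env.BRANCH_NAME == 'master') {
        sh './deploy.sh'             // hypothetical deployment script
    }
}
```

The whole flow lives in one script, where each `stage` would otherwise have been a separate chained Jenkins job.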
The plugin was designed in a way that it can be easily extended. Even though it has been in use only for a short time, we’ve seen many contributions. While we can use Groovy to access any plugin, the long-term plan is to extend the DSL so that all commonly used ones are defined through it.
Since the whole delivery flow is defined as code in plain-text format, storing the scripts in SCM is not only possible but highly recommended. By storing the Pipeline scripts in, let’s say, Git, we can apply the same process as with any other code. We can commit them to the repository, use pull requests, code reviews, and so on. Moreover, the Multibranch Pipeline Plugin allows us to store the script in a Jenkinsfile and define a different flow inside each branch.
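As an illustration of that branch-per-flow idea, a Jenkinsfile committed to the root of the repository might look like the sketch below. The branch name and script paths are assumptions for the example:

```groovy
// Jenkinsfile — a sketch of a branch-aware flow for use with the
// Multibranch Pipeline Plugin; script names and branches are hypothetical.
node {
    stage 'Checkout'
    checkout scm          // checks out the branch this build was triggered for

    stage 'Build'
    sh './build.sh'       // hypothetical build script

    // Only builds of the master branch proceed to deployment;
    // feature branches stop after the build stage.
    if (env.BRANCH_NAME == 'master') {
        stage 'Deploy'
        sh './deploy.sh'  // hypothetical deployment script
    }
}
```

Because the script is versioned alongside the application code, a change to the delivery flow goes through the same review process as any other change.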
I hope that you can already see that it comes with many advantages over more “traditional” ways of defining jobs. Indeed, the Pipeline plugin opened some doors that were previously closed or very hard to pass through. It brought Jenkins to a whole new level proving, once again, that it is the leader among CI/CD tools.
That was only a sneak peek at the Pipeline plugin capabilities. We’ll discuss its syntax, capabilities, and features in more detail in the next article.
The DevOps 2.0 Toolkit
If you liked this article, you might be interested in The DevOps 2.0 Toolkit: Automating the Continuous Deployment Pipeline with Containerized Microservices book. (Disclosure: I wrote this book.) Among many other subjects, it explores Jenkins, the Pipeline plugin, and the ecosystem around it in much more detail.
This book is about different techniques that help us architect software in a better and more efficient way, with microservices packed as immutable containers, tested and deployed continuously to servers that are automatically provisioned with configuration management tools. It’s about fast, reliable, and continuous deployments with zero downtime and the ability to roll back. It’s about scaling to any number of servers, the design of self-healing systems capable of recovering from both hardware and software failures, and about centralized logging and monitoring of the cluster.
In other words, this book covers the whole microservices development and deployment lifecycle using some of the latest and greatest practices and tools. We’ll use Docker, Kubernetes, Ansible, Ubuntu, Docker Swarm and Docker Compose, Consul, etcd, Registrator, confd, Jenkins, and so on. We’ll go through many practices and even more tools.