
Move Toward Next-Generation Pipelines

Take a look at what the adoption of a Tekton integration by Jenkins means for improving and future pipelines.



The Jenkins X blog (jenkins-x.io) published a post announcing a preview of Tekton integration as the next-generation cloud-native pipeline execution engine. I'm very happy to see this happening. Let me explain why.

In 2011, when I joined CloudBees, I started implementing a job orchestration plugin that I felt was needed to replace the many "trigger another job" use cases I had seen implemented by various plugins. My Build Flow plugin relied on a plain Java thread to execute a Groovy script, with no security barriers against abuse of internal Jenkins APIs. As my first Jenkins plugin, the implementation was crude and suffered from serious security issues and design flaws, so today the plugin is abandoned and deprecated.

If you like archeology and want to discover how terrible my English was at the time, watch this video from the Jenkins User Event Paris 2012. While technically a dead end, Build Flow demonstrated a possible route toward a DSL for orchestrating Jenkins builds, and it got huge adoption. It served as useful inspiration and as proof of user interest when the Jenkins 2.x Pipeline engine was designed, which required a huge investment in jenkins-core.

The Pipeline execution engine, in contrast to Build Flow's plain Groovy executor, uses a fine-grained sandbox to enforce security and a "continuation-passing style" (CPS) runner to parse the Groovy syntax and execute every command as a resumable operation, implemented by the appropriate Jenkins plugins. As a result, there's no actual Groovy thread running your Pipeline, and a single master can run thousands of them. Not a single line of code is shared between Build Flow and Pipeline, but I still feel proud that my ugly code served as a proof of concept for a better, successful Pipeline execution engine.

This doesn't make Pipeline the definitive answer to build orchestration. Pipeline relies on a scripting language, and when you give developers a programming language, they write code. As a result, some of them designed unmanageable beasts of thousands of lines of Groovy script. Shared libraries have been proposed as a way to refactor those into simpler scripts, but in the meantime, Andrew Bayer and his team came up with a new project: Declarative Pipeline.
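To illustrate the problem, a Scripted Pipeline is free-form imperative Groovy, which is exactly what lets arbitrary logic creep into a Jenkinsfile. A minimal sketch (the `make` targets are made up for illustration):

```groovy
// Scripted Pipeline: imperative Groovy code executed under the CPS engine.
// Nothing in the syntax stops authors from adding loops, helper methods,
// and hundreds of lines of custom logic around these steps.
node {
    stage('Build') {
        checkout scm
        sh 'make build'
    }
    stage('Test') {
        sh 'make test'
    }
}
```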

Scripted Pipeline makes it possible to execute anything Jenkins can do within a Pipeline, while Declarative Pipeline has been designed to focus only on the simpler use cases — which cover roughly 90% of needs. For those, Declarative Pipeline offers both a simplified syntax and an alternative approach to orchestration that is based not on imperative scripting but on a declarative definition of build steps and the resources involved. In addition to making things simpler, this also brings some canonical structure to Pipelines.
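The same build expressed as a Declarative Pipeline becomes a fixed, canonical structure of sections rather than free-form code (again, the `make` targets are illustrative):

```groovy
// Declarative Pipeline: a constrained, declarative structure.
// The syntax only allows well-known sections (agent, stages, steps, ...),
// which is what makes the definition analyzable and portable.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'make build'
            }
        }
        stage('Test') {
            steps {
                sh 'make test'
            }
        }
    }
}
```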

Declarative Pipeline doesn't use a distinct execution engine; it uses the exact same sandboxing and CPS Groovy execution as Scripted Pipeline. But thanks to its declarative nature, it offers a simpler, more homogeneous syntax as well as sensible defaults that make users' lives easier.

Let's now have a look at what happened in the meantime in the Kubernetes ecosystem to cover the same needs.

Knative is a project (initiated by Google and Pivotal) to offer serverless capabilities on top of a Kubernetes cluster. As part of the project, Knative Build is responsible for running a set of commands in transient Docker images within a Kubernetes pod. The scope is limited to the simplest builds: check out code, build, test, and create a Docker image. But that still covers the needs of most of us.
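Roughly, a Knative Build resource describes a source to fetch and a sequence of container steps to run in a single pod. A sketch under the v1alpha1 API of the time (repository URL, image, and names here are made up for illustration):

```yaml
# A minimal Knative Build resource (v1alpha1-era API).
# The repository, image, and names are illustrative assumptions.
apiVersion: build.knative.dev/v1alpha1
kind: Build
metadata:
  name: example-build
spec:
  source:
    git:
      url: https://github.com/example/app.git
      revision: master
  steps:
    - name: build
      image: golang:1.11
      command: ["go"]
      args: ["build", "./..."]
```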

Fun fact: Kubernetes focus aside, Knative Build seems comparable to my own docker-pipeline. Unfortunately, that one never gained attention and remained just a proof of concept.

Then Google started working with other contributing companies on a larger idea of Pipeline support in Knative, code-named Tekton (from τέκτων, the Greek noun for carpenter: the guy who builds your ship!). "Build-Pipeline" would probably have been a more obvious name for what it does, but you know, naming things is one of the harder problems in our industry.

Tekton pipelines run on Kubernetes, rely on containers as building blocks to execute commands, and offer abstractions for build resources such as where to get the codebase (Git) and where to put the build output (Docker images). As a native Kubernetes project, it relies on Custom Resource Definitions (CRDs) for configuration and state. As a result, using Tekton to define your build requires a significant amount of YAML:

  • Definitions for individual Tasks (sequence of containers and commands to execute)
  • PipelineResources: Git repository to checkout, Docker image to build
  • Pipeline to link them all together
  • PipelineRun to manage execution and status of the pipeline
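For instance, a single Task in this model is itself a CRD. A sketch under the v1alpha1 API of the time (the names, image, and commands are illustrative, not taken from a real project):

```yaml
# A minimal Tekton Task (v1alpha1-era API): an ordered list of container
# steps, each running one command. Names and images are assumptions.
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: build-and-test
spec:
  steps:
    - name: build
      image: golang:1.11
      command: ["go"]
      args: ["build", "./..."]
    - name: test
      image: golang:1.11
      command: ["go"]
      args: ["test", "./..."]
```

A Pipeline CRD then sequences such Tasks, PipelineResources feed them inputs and outputs, and a PipelineRun triggers and tracks one execution — hence the "significant amount of YAML."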

To go further down the rabbit hole, we need to better understand how Jenkins X runs Pipelines, especially the Serverless Jenkins flavor of Jenkins X. In this mode, Jenkins X doesn't run a Jenkins master full-time waiting for jobs to be scheduled. It reacts to GitHub events by creating a transient, "one-shot" master that uses Jenkinsfile Runner to execute the Pipeline script in an isolated context.


Jenkinsfile Runner is a custom packaging of Jenkins that runs a single Pipeline immediately after the boot sequence, applying a local Jenkinsfile script. Internally, it uses the exact same Pipeline execution engine as classic Jenkins. But in this context, since Jenkinsfile Runner executes in isolation in a one-shot container, sandboxing and security barriers are no longer relevant and the architecture can be significantly simplified.
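Conceptually, using Jenkinsfile Runner amounts to pointing a throwaway container at a Jenkinsfile. A hedged sketch — the image name and mount path are assumptions from memory, so check the jenkinsci/jenkinsfile-runner documentation for the exact invocation:

```shell
# Boot a one-shot Jenkins, run the Jenkinsfile in the mounted
# workspace, print the build log, and exit.
# Image name and workspace path are assumptions, not verified here.
docker run --rm \
  -v "$(pwd)":/workspace \
  jenkins/jenkinsfile-runner
```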

In parallel, Declarative Pipeline demonstrated that reducing the scope of the Pipeline syntax made scripts much simpler to write and the implementation more flexible.

Here Comes the "Next Generation Pipeline" Effort

NG Pipeline (or whatever final name ends up being used) aims to define a fully descriptive, abstract syntax for Pipeline definitions, so that we get the flexibility to adopt the most appropriate implementation and make architectural changes when needed.

Next Generation Pipeline defines a simple YAML syntax for abstract Pipelines, without exposing implementation details:


pipelineConfig:
  pipelines:
    release:
      pipeline:
        agent:
          image: nodejs
        stages:
          - name: Build
            steps:
              - command: echo
                args:
                  - hello world

Note: This is an early preview of the proposed syntax at the time of writing. It's a work in progress, subject to change. You've been warned!


Serverless Jenkins is the first to benefit from this effort, but NG Pipeline could later be used in other contexts once stabilized. On Serverless Jenkins, NG Pipeline will embrace Tekton, which can then be used to run your Jenkins X pipeline without a hidden Jenkins master running for this purpose. Next-Gen Pipeline will read the jenkins-x.yml Pipeline definition and create a set of CRDs for Tekton. You don't really need to worry about this plumbing step; just remember that Jenkins X handles it for you transparently.

The CloudBees team is collaborating with the Google and Kubernetes communities on the Tekton project to ensure the required features will be supported to offer an equivalent of Jenkins Declarative Pipeline.

Once this YAML syntax to support Jenkins pipelines is well-defined and integrated in Jenkins X, we will work on automatically migrating Groovy-based Declarative Jenkinsfiles to the new format. So, from an end-user perspective, investments in Jenkins remain fully relevant, and users will actually benefit from the latest bits of a very active Kubernetes ecosystem.

Integration with Tekton is great proof of Jenkins Declarative Pipeline's flexibility. Thanks to the declarative approach, the underlying execution engine can be replaced transparently to embrace bleeding-edge solutions on Kubernetes.

I'm super excited to see Tekton supported by Jenkins X as the next chapter in the already long story of Jenkins build orchestration. And it comes with a smooth migration path for Jenkins users. Jenkins X has so far demonstrated an aggressive but very effective approach to Jenkins "modernization," replacing core components with alternative projects from the Kubernetes community without breaking the main usage scenario of Jenkins: running your Pipelines!



Published at DZone with permission of
