Achieving CI and CD With Kubernetes
Jenkins is a popular CI and deployment tool and Kubernetes is a popular orchestration engine for containers. Together, they can help you achieve optimal CI and CD.
Continuous Integration and Delivery are best described by Martin Fowler:
“Continuous Integration is a software development practice where members of a team integrate their work frequently, usually each person integrates at least daily – leading to multiple integrations per day. Each integration is verified by an automated build (including test) to detect integration errors as quickly as possible. Many teams find that this approach leads to significantly reduced integration problems and allows a team to develop cohesive software more rapidly.”
In this article, we are going to discuss and explore two amazing and rather interesting pieces of technology. One is Jenkins, a popular Continuous Integration and deployment tool, and the other is Kubernetes, a popular orchestration engine for containers. As an added bonus, we are also going to discover fabric8 — an awesome tool for microservices platforms. Let’s get started!
Warning: These steps are resource-intensive, and your machine may hang several times while performing them. Use a machine with plenty of CPU and RAM.
Overview of Architecture
Before starting our work, let’s take a moment to analyze the workflow required to start using Kubernetes containers with Jenkins. Kubernetes is an amazing orchestration engine for containers, developed by an amazing open-source community. The fact that Kubernetes was started by Google gives it an advantage in working with multiple open source container projects. By default, Docker is the most widely supported and used container runtime with Kubernetes. The workflow with Docker containers looks like this:
This is similar to using rkt containers (rktnetes). Here’s the architecture:
Setting Up Kubernetes on the Host Machine
Setting up Kubernetes on your host machine is an easy task. If you'd like to try it out on your local machine, I would recommend you try out Minikube. Here is a quick guide to get you started with setting up Minikube on your local machine:
Ensure that kubectl is installed.
Check out this documentation.
To download prerequisites, check out this documentation.
Download and install Minikube.
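The steps above can be sketched as shell commands. The version numbers and download URLs below are illustrative assumptions; check the official Kubernetes and Minikube release pages for current ones:

```shell
# Install kubectl (version and URL are assumptions; see the Kubernetes docs)
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.5.2/bin/linux/amd64/kubectl
chmod +x kubectl && sudo mv kubectl /usr/local/bin/

# Install Minikube (version is an assumption; see the Minikube releases page)
curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.15.0/minikube-linux-amd64
chmod +x minikube && sudo mv minikube /usr/local/bin/

# Start a local single-node cluster and verify it is reachable
minikube start
kubectl cluster-info
```

Once `kubectl cluster-info` reports a running master, your local cluster is ready for the next steps.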
Carlossg has done amazing work in the direction of using Jenkins with Kubernetes. He has built an awesome Kubernetes plugin for Jenkins, which lets you start using Jenkins with Kubernetes directly. To make configuration even easier, he has also built a Jenkins image that contains the Kubernetes plugin by default. This image is available on Docker Hub. In the next steps, we are going to fetch this image from Docker Hub and create a volume called /var/jenkins_home for storing all your Jenkins data.
There's One Problem...
Although we are doing everything as we planned to do, we will still run into a problem. You will notice that whenever you are about to restart your Jenkins container after closing it down, all your data is lost. Whatever you've done — like creating jobs, installing plugins, etc. — will be lost. This is one of the common problems with containers. Let’s discuss it in a bit more depth.
A Word About Data Containers
Data is a tricky concept when it comes to containers. Containers on their own are not a good way of keeping data safe and available all the time, and there have been many incidents in the past where containers have been seen to lose or leak data. There are many ways to deal with this problem. One is to use Docker volumes, though I did not find them that useful for persistent storage. One approach I did find useful is to create another container, called a data container, and use it as the place where data is stored instead of depending on only one image. Here’s a simple figure showing how we plan to use the data container to ensure the reliability of our data:
Here are the steps to start using the Jenkins Kubernetes image:
# Create a data container for holding Jenkins data, based on the csanchez/jenkins-kubernetes image
$ docker create --name jenkins-k8s csanchez/jenkins-kubernetes

# Run Jenkins using the /var/jenkins_home volume from the data container
$ docker run --volumes-from jenkins-k8s -p 8080:8080 -p 50000:50000 csanchez/jenkins-kubernetes
Open http://localhost:8080 in your browser. You should see the below screen:
Configuring Settings for Kubernetes Over Jenkins
Now, Jenkins is pre-configured with the Kubernetes plugin, so let’s jump to the next step. Using the Jenkins GUI, go to Manage Jenkins > Configure System > Cloud > Add a new Cloud > Kubernetes. The screen looks like this after you have followed the above steps:
Now, fill in your configuration settings as shown in the picture below:
If you wish to use a Jenkins slave, you can use the jnlp-slave image on Docker Hub. This is a simple image used to set up slave node templates for you. You configure a slave pod by creating a template, as shown in the figure below:
In order to use Jenkins slave on the run, while creating a new job on Jenkins, do this under configure settings of your job:
Now, put the name of the label you defined in the Kubernetes pod template into the restrict section. Save and apply the settings for your new job. When you build this job, you should see the slave node running.
That’s all, folks! You are ready to go. You can now add more of your plugins as per your needs.
Fabric8 is an open-source microservices platform based on Docker, Kubernetes, and Jenkins, built by Red Hat. The purpose of the project is to make it easy to create microservices; build, test, and deploy them via Continuous Delivery pipelines; and then run and manage them with Continuous Improvement and ChatOps.
Fabric8 installs and configures the following things for you automatically:
Here’s a brief picture of the architecture of Fabric8:
Download the gofabric8 binary, make it executable, and copy it into your PATH:

$ chmod +x gofabric8
$ sudo cp gofabric8 /usr/local/bin/

You can verify the installation by running $ gofabric8 on your terminal. Now, run the following commands:
$ gofabric8 deploy -y
Your terminal screen should look like this:
$ gofabric8 secrets -y
Your terminal screen should look like this:
Check the status of pods using kubectl.
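For example, a couple of handy kubectl commands for this step; the pod name below is a placeholder:

```shell
# List all pods and watch their status change as fabric8 starts up
kubectl get pods -w

# Inspect a pod that is stuck in Pending or CrashLoopBackOff
# (replace <pod-name> with an actual name from the list above)
kubectl describe pod <pod-name>
```

It can take a few minutes for all the fabric8 pods to reach the Running state, since their images are pulled on first start.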
You can also check the status of your pods by opening the Kubernetes dashboard in a browser: http://192.168.99.100:30000.
Similarly, you can open the fabric8 hawtio browser interface.
From my analysis, here’s a depiction of what happened when you ran the above commands, shown as a simple workflow diagram:
It's easier said than done. Building Jenkins from source and integrating Kubernetes is one part of the story, but achieving Continuous Delivery with your setup is another, very different and more complex, part of the story.
Here are some of my tips on plugins that can help you achieve Continuous Delivery with Jenkins.
Pipeline is a core plugin built by the Jenkins community. It lets you integrate almost any orchestration engine with your environment with minimal complexity. I believe this effort started because different communities were building separate plugins for various engines, and all of them depended heavily on the Jenkins UI to do so. Using this plugin, users can now directly implement their project’s entire build/test/deploy pipeline in a Jenkinsfile and store it alongside their code, treating the pipeline as another piece of code to be checked into source control.
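As a sketch, a minimal scripted Jenkinsfile for our setup might look like the following. The 'k8s-slave' label, the Maven build command, and the manifest path are all hypothetical assumptions; adapt them to your pod template and project:

```groovy
// Run on an agent provisioned from the Kubernetes pod template
// labeled 'k8s-slave' (a hypothetical label from the plugin config).
node('k8s-slave') {
    stage('Checkout') {
        checkout scm                         // pull the code from SCM
    }
    stage('Build & Test') {
        sh 'mvn -B clean verify'             // assumes a Maven project
    }
    stage('Deploy') {
        // hypothetical manifest path in the repository
        sh 'kubectl apply -f k8s/deployment.yaml'
    }
}
```

Because this file lives in the repository, every change to the pipeline itself is reviewed and versioned just like application code.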
These days, most companies use GitHub as their SCM tool. I recommend the GitHub plugin, which helps you pull code from GitHub and analyze and test it on Jenkins. For authentication purposes, I would recommend you look at the GitHub OAuth plugin.
For Docker users, the Docker plugin is one of the most suitable options and helps you do almost everything with Docker, including using Docker containers as slaves. There are several other Docker plugins you can switch to over time, depending on your usage.
The AWS folks have introduced an awesome service called AWS CodePipeline. This particular service helps you attain Continuous Delivery with your AWS setup. Currently, the corresponding Jenkins plugin is under heavy development and might not be suitable for production environments. Also, check out AWS CodeCommit.
For OpenStack users, the OpenStack plugin is suitable for configuring Jenkins to work with your OpenStack environment.
Google Cloud Platform
Deployment Manager is a service from Google Cloud Platform. Using it, you can create flexible, declarative templates that deploy a variety of Cloud Platform services, such as Google Cloud Storage, Google Compute Engine, and Google Cloud SQL, and leave it to Deployment Manager to manage the resources defined in your templates as deployments. The corresponding Jenkins plugin is very new, but I think it is worth a try if you wish to automate and sort things out with Google Cloud Platform.
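For illustration, a minimal Deployment Manager template might look like this; the deployment and bucket names are placeholders of my own, not from any particular project:

```yaml
# config.yaml — a minimal, hypothetical template creating a Cloud Storage bucket
resources:
- name: my-artifacts-bucket   # placeholder name; bucket names must be globally unique
  type: storage.v1.bucket
```

You would then create the deployment with the gcloud CLI:

$ gcloud deployment-manager deployments create my-deployment --config config.yaml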
I hope you enjoyed reading this article. Please let me know your valuable thoughts in the comments section below.
Published at DZone with permission of Ramit Surana , DZone MVB. See the original article here.