Building a Kubernetes CI/CD Pipeline With GitLab and Helm
This article shows how to add Continuous Delivery (CD) on top of your CI setup to build a CI/CD pipeline that deploys your applications to Kubernetes.
Introduction
Everyone loves GitLab CI and Kubernetes. GitLab CI (Continuous Integration) is a popular tool for building and testing the software that developers write. GitLab CI helps developers build code faster and more confidently, and detect errors quickly.
Kubernetes, popularly shortened to K8s, is a portable, extensible, open-source platform for managing containerized workloads and services. K8s is used by companies of all sizes every day to automate the deployment, scaling, and management of containerized applications.
The purpose of this article is to show how you can bolt on the Continuous Delivery (CD) piece of the puzzle to build a CI/CD pipeline so you can deploy your applications to Kubernetes. But before we get too far, we're going to need to talk about Helm, which is an important part of the puzzle.
What the Helm?
Helm calls itself the package manager for Kubernetes, and that's a pretty accurate description. Helm is a versatile, sturdy tool DevOps engineers can use to define configuration templates, perform variable substitution to create consistent deployments to our clusters, and supply different variables for different environments.
It's certainly the right solution to the problem we're covering here.
How Do We Do It?
First off, a few prerequisites. You're going to have to have all of this hammered out before you start on the project. There's helpful documentation for each of these if you need it.
- You already have an Amazon EKS cluster.
- You already know how to use GitLab CI.
- You have a GitLab CI runner configured in your Kubernetes cluster.
- You have the AWS Load Balancer Controller running in your cluster.
With those boxes checked, we can get started. You'll want to create a new repository in GitLab first for us to use in this example. Once you've done that, we can start creating our files.
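Before creating anything, it can be worth a quick sanity check that the prerequisites are actually in place. The commands below are only a sketch; they assume kubectl and Helm are installed locally, your kubeconfig points at the EKS cluster, and the namespaces and label selector match however you installed the runner and the load balancer controller:

# Confirm you can reach the EKS cluster
kubectl get nodes

# Confirm Helm 3 is available locally
helm version --short

# Confirm the GitLab runner and the AWS Load Balancer Controller are running
# (the namespaces and label below are assumptions; adjust to your install)
kubectl get pods -n gitlab-runner
kubectl get pods -n kube-system -l app.kubernetes.io/name=aws-load-balancer-controller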
File Tree
At the end, our folder/file structure is going to look like this:
<dir>
├── chart/
|   ├── Chart.yaml
|   ├── values.yaml
|   └── templates/
|       ├── deployment.yaml
|       ├── service.yaml
|       ├── ingress.yaml
|       └── configmap.yaml
└── .gitlab-ci.yml
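The tree also includes a Chart.yaml, which Helm requires but which we don't walk through below. A minimal one, with placeholder values you should adjust, could look like this:

apiVersion: v2
name: my-first-app
description: A Helm chart for my first app
version: 0.1.0
appVersion: "1.0.0"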
values.yaml
applicationName: my-first-app
certArn: your-certificate-arn
domain: your-domain-name
subnets: your-subnets
securityGroups: your-security-groups
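For reference, a filled-in values file might look something like the following. Every value here is a made-up placeholder, and the subnets can be a comma-separated list as the ALB annotations expect:

applicationName: my-first-app
certArn: arn:aws:acm:us-east-1:111122223333:certificate/abcd1234-example
domain: my-domain.com
subnets: subnet-0123456789abcdef0,subnet-0fedcba9876543210
securityGroups: sg-0123456789abcdef0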
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.applicationName }}
  namespace: {{ .Values.applicationName }}
spec:
  replicas: 2
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      app: {{ .Values.applicationName }}
  template:
    metadata:
      labels:
        app: {{ .Values.applicationName }}
    spec:
      containers:
        - name: {{ .Values.applicationName }}
          imagePullPolicy: Always
          image: nginx:1.19.4
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: /usr/share/nginx/html/index.html
              name: nginx-conf
              subPath: index.html
      volumes:
        - name: nginx-conf
          configMap:
            name: {{ .Values.applicationName }}-configmap
This is the configuration file that defines our deployment. You can see there are a few lines with {{ some text }} in them. This is how we use a variable defined in our values file within our chart.
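If you want to see that substitution happen before anything touches the cluster, Helm can render the templates locally. This is a quick sketch, assuming Helm 3 is installed and you run it from the repository root:

helm template my-first-app ./chart --values ./chart/values.yaml

The output is the fully rendered YAML, with every {{ .Values.* }} reference replaced by the value from values.yaml.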
configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Values.applicationName }}-configmap
  namespace: {{ .Values.applicationName }}
data:
  index.html: |
    <html>
      <head>
        <h1>My first Helm deployment!</h1>
      </head>
      <body>
        <p>Thanks for checking out my first Helm deployment.</p>
      </body>
    </html>
This config map just defines a simple index page that we'll display for our app.
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.applicationName }}
  namespace: {{ .Values.applicationName }}
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: {{ .Values.applicationName }}
  type: NodePort
  selector:
    app: {{ .Values.applicationName }}
ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ .Values.applicationName }}
  namespace: {{ .Values.applicationName }}
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/subnets: {{ .Values.subnets }}
    alb.ingress.kubernetes.io/healthcheck-path: /
    alb.ingress.kubernetes.io/security-groups: {{ .Values.securityGroups }}
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/certificate-arn: {{ .Values.certArn }}
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
spec:
  rules:
    - host: {{ .Values.applicationName }}.{{ .Values.domain }}
      http:
        paths:
          - path: /*
            backend:
              serviceName: ssl-redirect
              servicePort: use-annotation
          - path: /*
            backend:
              serviceName: {{ .Values.applicationName }}
              servicePort: 80
.gitlab-ci.yml
stages:
  - deploy

variables:
  DOCKER_HOST: tcp://localhost:2375/
  DOCKER_DRIVER: overlay2
  APP_NAME: my-first-app

deploy:
  stage: deploy
  image: alpine/helm:3.2.1
  script:
    - helm upgrade ${APP_NAME} ./chart --install --create-namespace --values=./chart/values.yaml --namespace ${APP_NAME}
  rules:
    - if: $CI_COMMIT_BRANCH == 'master'
      when: always
We Have All the Files. Now What?
Well, after you have all the files defined and your infrastructure meets the prerequisites, there's not much left to do.
If you commit these files, GitLab will interpret your .gitlab-ci.yml file and initiate a pipeline. Our pipeline has only one stage and one job (deploy). It'll spin up a container in the cluster using the alpine/helm:3.2.1 image and run our script command. This does all of the heavy lifting for us by creating every resource defined in our chart in the application's namespace and starting our application.
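Once the job finishes, you can check on the release from your workstation. This assumes your kubeconfig points at the same cluster the runner deploys to:

# List Helm releases in the app's namespace
helm list --namespace my-first-app

# Confirm the deployment, service, and ingress were created
kubectl get pods,svc,ingress -n my-first-app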
If you configure a DNS record in Route53 like my-first-app.my-domain.com, with an A record pointing to the load balancer that the ingress controller created, you'll see the index page we defined in the ConfigMap!
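The load balancer's DNS name appears in the ingress status once the controller has provisioned it. One way to look it up (the names here assume applicationName is my-first-app) is:

kubectl get ingress my-first-app -n my-first-app \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'

Point the Route53 record at that hostname and the site should come up over HTTPS, with HTTP redirected by the ssl-redirect action defined in the ingress annotations.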