CI/CD Pipelines for Python (Flask) — Docker Project to Kubernetes
Second verse, same as the first: making containerization and deployment faster with Docker and Kubernetes.
Introduction
This article continues from a previous article, "Setting up CI CD Pipelines for Docker Kubernetes project (hosted on Google Cloud Platform)".
In this article, we containerize (Docker) and deploy a Python Flask application to Kubernetes (Kubernetes Engine on Google Cloud Platform).
The sample project is hosted here on Google Cloud Platform Kubernetes Engine.
Code Usage
The code is hosted here.
Using the code is very similar to what we described in the previous article: we set up the env_vars/application.properties file for our new project, then set up the multibranch pipeline in Jenkins as described there.
Understanding the Code
env_vars
To customize this pipeline for our project, we update the env_vars/application.properties file as follows.
| Parameter | Function | Example |
| --- | --- | --- |
| APP_NAME | The application name; used to build the image name in the Jenkinsfile. | kubejencdp |
| IMAGE_NAME | The image name under which the project is published to the Docker registry. | kubejencdp-py |
| PROJECT_NAME | Name of the project. | amitthk |
| DOCKER_REGISTRY_URL | URL of the Docker registry; we are using Docker Hub here. | registry.hub.docker.com |
| RELEASE_TAG | Release tag for the Docker image. This can also be taken from the release branch name. | 1.0.0 |
| DOCKER_PROJECT_NAMESPACE | Docker project namespace. My account on Docker Hub is amitthk, which is also my default namespace. | amitthk |
| JENKINS_DOCKER_CREDENTIALS_ID | The username/password credential added to Jenkins for logging in to the Docker registry. (If you are using OpenShift, you may want to log in with the token from $(oc whoami -t).) | JENKINS_DOCKER_CREDENTIALS_ID |
| JENKINS_GCLOUD_CRED_ID | The Google Cloud Platform service account key added to Jenkins as a file credential. For more information, please refer to the first article. | JENKINS_GCLOUD_CRED_ID |
| JENKINS_GCLOUD_CRED_LOCATION | Unused. (Use this if you prefer not to add a file credential to Jenkins and instead store the service account key directly on the Jenkins slave.) | /var/lib/jenkins/lateral-ceiling-220011-5c9f0bd7782f.json |
| GCLOUD_PROJECT_ID | The Google Cloud project ID. | lateral-ceiling-220011 |
| GCLOUD_K8S_CLUSTER_NAME | Our cluster name on Google Cloud. | pyfln-k8s-cluster-dev |
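Put together, a sample env_vars/application.properties using the example values from the table above might look like this (adjust the registry, namespace, project ID, and cluster name for your own setup):
APP_NAME=kubejencdp
IMAGE_NAME=kubejencdp-py
PROJECT_NAME=amitthk
DOCKER_REGISTRY_URL=registry.hub.docker.com
RELEASE_TAG=1.0.0
DOCKER_PROJECT_NAMESPACE=amitthk
JENKINS_DOCKER_CREDENTIALS_ID=JENKINS_DOCKER_CREDENTIALS_ID
JENKINS_GCLOUD_CRED_ID=JENKINS_GCLOUD_CRED_ID
JENKINS_GCLOUD_CRED_LOCATION=/var/lib/jenkins/lateral-ceiling-220011-5c9f0bd7782f.json
GCLOUD_PROJECT_ID=lateral-ceiling-220011
GCLOUD_K8S_CLUSTER_NAME=pyfln-k8s-cluster-dev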
Dockerfile
Let us begin by understanding the Dockerfile we used to containerize our app. Here is a brief description of what this Dockerfile is doing:
We build on top of the centos/python-27-centos7 image.
We set up some overridable arguments that we use during the build.
We carry over the environment variables from the original python-27-centos7 image so that its binaries and libraries are found properly.
We add a new user and install packages such as gcc, unzip, and python-setuptools that are needed later in the build.
We copy the required files and the source code into the image under the ${APP_HOME}/ directory.
We use pip to install the dependencies required by our application.
We update the permissions on our executables and the ${APP_HOME} directory.
In the entry point and command, we start the application with "Green Unicorn" (gunicorn).
FROM centos/python-27-centos7
ARG APP_HOME=/opt/app-root/src
ARG USER_NAME=pyflnuser
ENV USER_NAME ${USER_NAME}
ENV APP_HOME $APP_HOME
#env from original
ENV APP_ROOT /opt/app-root
ENV PIP_NO_CACHE_DIR off
ENV LIBRARY_PATH /opt/rh/httpd24/root/usr/lib64
ENV PYTHONUNBUFFERED 1
ENV NODEJS_SCL rh-nodejs8
ENV LC_ALL en_US.UTF-8
ENV PYTHONIOENCODING UTF-8
ENV LD_LIBRARY_PATH /opt/rh/python27/root/usr/lib64:/opt/rh/rh-nodejs8/root/usr/lib64:/opt/rh/httpd24/root/usr/lib64
ENV VIRTUAL_ENV /opt/app-root
ENV PYTHON_VERSION 2.7
ENV PATH /opt/app-root/bin:/opt/rh/python27/root/usr/bin:/opt/rh/rh-nodejs8/root/usr/bin:/opt/rh/httpd24/root/usr/bin:/opt/rh/httpd24/root/usr/sbin:/opt/app-root/src/.local/bin/:/opt/app-root/src/bin:/opt/app-root/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
ENV STI_SCRIPTS_URL image:///usr/libexec/s2i
ENV PWD /opt/app-root/src
ENV STI_SCRIPTS_PATH /usr/libexec/s2i
ENV LANG en_US.UTF-8
ENV SUMMARY Platform for building and running Python 2.7 applications
ENV PS1 (app-root)
ENV HOME /opt/app-root/src
ENV SHLVL 1
ENV PYTHONPATH /opt/rh/rh-nodejs8/root/usr/lib/python2.7/site-packages
ENV XDG_DATA_DIRS /opt/rh/python27/root/usr/share:/usr/local/share:/usr/share
ENV PKG_CONFIG_PATH /opt/rh/python27/root/usr/lib64/pkgconfig:/opt/rh/httpd24/root/usr/lib64/pkgconfig
# ARG PIP_INDEX=https://atksv.mywire.org:8993/nexus/repository/pypi-all/simple
# ENV PIP_INDEX ${PIP_INDEX}
USER root
RUN groupadd -g 10000 ${USER_NAME} && \
useradd --no-log-init -u 10000 -g ${USER_NAME} -ms /bin/bash ${USER_NAME}
# && echo '' >> /etc/resolv.conf && echo "nameserver 8.8.8.8" >> /etc/resolv.conf
# rm /etc/yum.repos.d/Cen* && \
# cp /tmp/centos.repo /etc/yum.repos.d/
# update-ca-trust force-enable
# mkdir -p ~/.pip && cp /tmp/pip/pip.conf ~/.pip/pip.conf && \
RUN yum -y update && \
yum downgrade -y glibc glibc-common glibc-devel glibc-headers && \
yum install -y gcc unzip python-setuptools python-ldap
COPY files /tmp/
COPY app ${APP_HOME}/
RUN bash -c "cd ${APP_HOME} && python -m pip install -r requirements.txt"
RUN cp /tmp/uid_entrypoint ${APP_HOME}/ && \
chmod -R 0775 ${APP_HOME} && \
chown -R ${USER_NAME}:${USER_NAME} ${APP_HOME} && \
chmod -R g=u /etc/passwd
EXPOSE 8000
WORKDIR ${APP_HOME}
ENTRYPOINT ["./uid_entrypoint"]
CMD ["gunicorn", "-b", "0.0.0.0:8000","wsgi:app"]
Kubernetes Deployment, Service and Ingress
Our Deployment, Service, and Ingress files are pretty much the same as in the starter article. We only updated the parameters we pass to these templates. Let us take a look at these files.
Deployment
In our deployment, we create the deployment with the name __APP_NAME__-dc. The __APP_NAME__ variable is replaced with our parameter kubejencdp-py by the template-processing script. We deploy 1 replica with the container image kubejencdp-py (the __IMAGE__ variable is replaced with the image name by the template-processing script). We also pass the __TIMESTAMP__ variable, which is updated by our pipeline with the timestamp of the deployment; this ensures that the latest image is pulled even if we apply the same deployment again. You can find more information about this trick in this discussion on GitHub. We expose port 8000, as exposed by the container.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: __APP_NAME__-dc
  labels:
    app: __APP_NAME__
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: __APP_NAME__
        updateTimestamp: "__TIMESTAMP__"
    spec:
      containers:
        - name: __APP_NAME__-ctr
          image: >-
            __IMAGE__
          env:
            - name: DEPLOY_TIMESTAMP
              value: "__TIMESTAMP__"
          imagePullPolicy: Always
          ports:
            - name: http-port
              containerPort: 8000
Service
Our service is pretty straightforward: it exposes port 8000, which is the port of the pod created by the deployment above.
apiVersion: v1
kind: Service
metadata:
  labels:
    app: __APP_NAME__
    updateTimestamp: "__TIMESTAMP__"
  name: __APP_NAME__-svc
spec:
  ports:
    - name: http-port
      port: 8000
      protocol: TCP
      targetPort: 8000
  selector:
    app: __APP_NAME__
  sessionAffinity: None
  type: NodePort
Ingress
Our ingress exposes the http-port of our service outside the cluster:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  labels:
    app: __APP_NAME__
    updateTimestamp: "__TIMESTAMP__"
  name: __APP_NAME__-ingress
spec:
  backend:
    serviceName: __APP_NAME__-svc
    servicePort: http-port
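After the templates are processed (so __APP_NAME__ becomes kubejencdp-py) and applied, the address that GKE assigns to the ingress can be looked up with kubectl. A quick check, assuming the example names used in this article (the external IP can take a few minutes to be provisioned):
kubectl get ingress kubejencdp-py-ingress
# the ADDRESS column shows the external IP assigned by GCP; the app answers on that IP once the backends are healthy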
Jenkins Pipeline
Initialization
In our initialization stage, we take most of the parameters from the env_vars/application.properties file as described above. The timestamp is taken from the wrapper function below:
def getTimeStamp(){
    return sh (script: "date +'%Y%m%d%H%M%S%N' | sed 's/[0-9][0-9][0-9][0-9][0-9][0-9]\$//g'", returnStdout: true);
}
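Run on its own (GNU date), the command prints the current date-time down to nanoseconds, and the sed expression strips the last six digits, leaving a YYYYMMDDHHMMSS value plus three digits of milliseconds:
date +'%Y%m%d%H%M%S%N' | sed 's/[0-9][0-9][0-9][0-9][0-9][0-9]$//g'
# prints something like 20190102134530123 (14-digit timestamp + 3 digits of milliseconds)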
And the following function reads values from the properties file:
def getEnvVar(String paramName){
    return sh (script: "grep '${paramName}' env_vars/project.properties|cut -d'=' -f2", returnStdout: true).trim();
}
Here's our initialization stage:
stage('Init'){
    steps{
        //checkout scm;
        script{
            env.BASE_DIR = pwd()
            env.CURRENT_BRANCH = env.BRANCH_NAME
            env.IMAGE_TAG = getImageTag(env.CURRENT_BRANCH)
            env.TIMESTAMP = getTimeStamp();
            env.APP_NAME = getEnvVar('APP_NAME')
            ...
            ...
            env.GCLOUD_K8S_CLUSTER_NAME = getEnvVar('GCLOUD_K8S_CLUSTER_NAME')
            env.JENKINS_GCLOUD_CRED_LOCATION = getEnvVar('JENKINS_GCLOUD_CRED_LOCATION')
        }
    }
}
Cleanup
Our cleanup stage simply clears out any dangling or stale images.
stage('Cleanup'){
    steps{
        sh '''
        docker rmi $(docker images -f 'dangling=true' -q) || true
        docker rmi $(docker images | sed 1,2d | awk '{print $3}') || true
        '''
    }
}
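To see what the first command would remove, the same filter can be run on its own as a read-only check:
docker images -f 'dangling=true'
# lists the untagged <none>:<none> layers left behind by rebuilds; these are what get removed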
Build
Here we docker build our project. Notice that since we will be pushing our image to Docker Hub, the tag we use contains DOCKER_REGISTRY_URL, which is registry.hub.docker.com, and my DOCKER_PROJECT_NAMESPACE, which is amitthk. You may want to update these values according to your Docker registry.
stage('Build'){
    steps{
        withEnv(["APP_NAME=${APP_NAME}", "PROJECT_NAME=${PROJECT_NAME}"]){
            sh '''
            docker build -t ${DOCKER_REGISTRY_URL}/${DOCKER_PROJECT_NAMESPACE}/${IMAGE_NAME}:${RELEASE_TAG} --build-arg APP_NAME=${IMAGE_NAME} -f app/Dockerfile app/.
            '''
        }
    }
}
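With the example values from the properties table, the build command expands roughly to the following (your registry, namespace, image name, and tag will differ):
docker build -t registry.hub.docker.com/amitthk/kubejencdp-py:1.0.0 \
    --build-arg APP_NAME=kubejencdp-py \
    -f app/Dockerfile app/.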
Publish
In order to publish our image to the Docker registry, we make use of the Jenkins credential defined with the variable JENKINS_DOCKER_CREDENTIALS_ID. To understand how this is set up, please refer to the first article.
stage('Publish'){
    steps{
        withCredentials([[$class: 'UsernamePasswordMultiBinding', credentialsId: "${JENKINS_DOCKER_CREDENTIALS_ID}", usernameVariable: 'DOCKER_USERNAME', passwordVariable: 'DOCKER_PASSWD']]){
            sh '''
            echo $DOCKER_PASSWD | docker login --username ${DOCKER_USERNAME} --password-stdin ${DOCKER_REGISTRY_URL}
            docker push ${DOCKER_REGISTRY_URL}/${DOCKER_PROJECT_NAMESPACE}/${IMAGE_NAME}:${RELEASE_TAG}
            docker logout
            '''
        }
    }
}
Deploy
In our Deploy stage, we make use of the Jenkins secret file credential set up in the JENKINS_GCLOUD_CRED_ID variable. Again, to check how this variable is set up, please refer to the first article.
For deployment, we process the deployment, service, and ingress templates mentioned above using a simple script named process_files.sh. This script replaces the build/deployment variables such as __APP_NAME__, __TIMESTAMP__, and __IMAGE__ with the values we want in our deployment/service/ingress:
#!/bin/bash
if (($# < 5))
then
    echo "Usage : $0 <DOCKER_PROJECT_NAME> <APP_NAME> <IMAGE_TAG> <directory containing k8s files> <TIMESTAMP>"
    exit 1
fi
PROJECT_NAME=$1
APP_NAME=$2
IMAGE=$3
WORK_DIR=$4
TIMESTAMP=$5
main(){
    find $WORK_DIR -name '*.yml' -type f -exec sed -i.bak1 's#__PROJECT_NAME__#'$PROJECT_NAME'#' {} \;
    find $WORK_DIR -name '*.yml' -type f -exec sed -i.bak2 's#__APP_NAME__#'$APP_NAME'#' {} \;
    find $WORK_DIR -name '*.yml' -type f -exec sed -i.bak3 's#__IMAGE__#'$IMAGE'#' {} \;
    find $WORK_DIR -name '*.yml' -type f -exec sed -i.bak4 's#__TIMESTAMP__#'$TIMESTAMP'#' {} \;
}
main
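For reference, the Deploy stage below invokes this script roughly as follows (shown here with the example values substituted in; the timestamp is whatever the pipeline generated):
./process_files.sh lateral-ceiling-220011 kubejencdp-py "amitthk/kubejencdp-py:1.0.0" "./kubejencdp-py/" 20190102134530123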
And here is our Deploy stage. We activate our gcloud service account credential, process our templates using the process_files.sh script mentioned above, then use kubectl to apply the processed templates. We watch the rollout using the kubectl rollout status command:
stage('Deploy'){
    steps{
        withCredentials([file(credentialsId: "${JENKINS_GCLOUD_CRED_ID}", variable: 'JENKINSGCLOUDCREDENTIAL')]){
            sh """
            #!/bin/sh -xe
            gcloud auth activate-service-account --key-file=${JENKINSGCLOUDCREDENTIAL}
            gcloud config set compute/zone asia-southeast1-a
            gcloud config set compute/region asia-southeast1
            gcloud config set project ${GCLOUD_PROJECT_ID}
            gcloud container clusters get-credentials ${GCLOUD_K8S_CLUSTER_NAME}
            chmod +x $BASE_DIR/k8s/process_files.sh
            cd $BASE_DIR/k8s/
            ./process_files.sh $GCLOUD_PROJECT_ID ${IMAGE_NAME} "${DOCKER_PROJECT_NAMESPACE}/${IMAGE_NAME}:${RELEASE_TAG}" "./${IMAGE_NAME}/" $TIMESTAMP
            cd $BASE_DIR/k8s/$IMAGE_NAME/.
            kubectl apply --force=true --all=true --record=true -f $BASE_DIR/k8s/$IMAGE_NAME/
            kubectl rollout status --watch=true --v=8 -f $BASE_DIR/k8s/$IMAGE_NAME/$IMAGE_NAME-deployment.yml
            gcloud auth revoke --all
            """
        }
    }
}
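Once the rollout completes, the result can be double-checked from any workstation with kubectl pointed at the cluster. A quick verification sketch, assuming the kubejencdp-py example names used throughout this article:
kubectl get deployment kubejencdp-py-dc
kubectl get pods -l app=kubejencdp-py
kubectl get svc kubejencdp-py-svc
kubectl get ingress kubejencdp-py-ingress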
Conclusion
We completed the containerization, build, and deployment of a simple Python Flask application to Kubernetes, using the Kubernetes Engine on Google Cloud Platform.