CI/CD Pipelines for Java (Maven) - Docker Project to Kubernetes

Continue constructing faster pipelines by creating a CI/CD pipeline for this Java Maven project.

By Amit Thakur · Jan. 10, 2019 · Tutorial · 17.88K Views


Introduction

This article is a continuation of a previous article, "Setting up CI/CD Pipelines for Docker Kubernetes Project (hosted on Google Cloud Platform)."

In this article, we containerize (with Docker) and deploy a Java Maven application to Kubernetes (Kubernetes Engine on Google Cloud Platform).

A sample project is hosted here on Google Cloud Platform Kubernetes Engine.

Code Usage

The code is hosted here.

Using the code is much the same as described in the previous article: we set up an env_vars/application.properties file for the new project, then create the multibranch pipeline in Jenkins as before.

Understanding the Code

env_vars

To customize this pipeline for our own project, we update the env_vars/application.properties file accordingly. The parameters, their functions, and example values are:

  • APP_NAME: the application name; used to build the image name in the Jenkinsfile. Example: kubejencdp
  • IMAGE_NAME: the name under which the image is published to the Docker registry. Example: kubejencdp-mvn
  • PROJECT_NAME: the name of the project. Example: amitthk
  • DOCKER_REGISTRY_URL: the URL of the Docker registry; we are using Docker Hub here. Example: registry.hub.docker.com
  • RELEASE_TAG: the release tag for the Docker image; this can also be taken from the release branch name. Example: 1.0.0
  • DOCKER_PROJECT_NAMESPACE: the Docker project namespace; my account on Docker Hub is amitthk, which is also my default namespace. Example: amitthk
  • JENKINS_DOCKER_CREDENTIALS_ID: the ID of the username/password credential added to Jenkins for logging in to the Docker registry. (If you are using OpenShift, you may want to log in with the $(oc whoami -t) token instead.) Example: JENKINS_DOCKER_CREDENTIALS_ID
  • JENKINS_GCLOUD_CRED_ID: the ID of the Google Cloud Platform service account key, which is added to Jenkins as a file credential. For more information, please refer to the first article. Example: JENKINS_GCLOUD_CRED_ID
  • JENKINS_GCLOUD_CRED_LOCATION: unused. (If you prefer not to add a file credential to Jenkins and instead store the service account key on the agent and access it directly, use this.) Example: /var/lib/jenkins/lateral-ceiling-220011-5c9f0bd7782f.json
  • GCLOUD_PROJECT_ID: the Google Cloud project ID. Example: lateral-ceiling-220011
  • GCLOUD_K8S_CLUSTER_NAME: our cluster name on Google Cloud. Example: pyfln-k8s-cluster-dev
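
Putting the sample values together, a filled-in env_vars/application.properties would look roughly like the following. This is an illustration assembled from the examples above, not a file copied from the repository:

APP_NAME=kubejencdp
IMAGE_NAME=kubejencdp-mvn
PROJECT_NAME=amitthk
DOCKER_REGISTRY_URL=registry.hub.docker.com
RELEASE_TAG=1.0.0
DOCKER_PROJECT_NAMESPACE=amitthk
JENKINS_DOCKER_CREDENTIALS_ID=JENKINS_DOCKER_CREDENTIALS_ID
JENKINS_GCLOUD_CRED_ID=JENKINS_GCLOUD_CRED_ID
JENKINS_GCLOUD_CRED_LOCATION=/var/lib/jenkins/lateral-ceiling-220011-5c9f0bd7782f.json
GCLOUD_PROJECT_ID=lateral-ceiling-220011
GCLOUD_K8S_CLUSTER_NAME=pyfln-k8s-cluster-dev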


Dockerfile

Let us begin by understanding the Dockerfile we used to containerize our app. Here is a brief description of what this Dockerfile is doing:

  • We build the image on top of the maven:3.3.9-jdk-8 image.
  • We take in several overridable build arguments and set environment variables from them.
  • We create the $APP_HOME_DIR directory and a user named appuser.
  • We copy the source into the build context under the $API_BUILD_DIR directory.
  • Inside $API_BUILD_DIR we build the code and move the built JAR to the $APP_HOME_DIR directory.
  • We set permissions and run the application as appuser, the user we created earlier.
  • The entrypoint is a simple, overridable pass-through script that calls the run command.
  • The run script is created during the build with the appropriate parameters, like SPRING_PROFILES_ACTIVE and API_FULL_NAME.

FROM maven:3.3.9-jdk-8

ARG RELEASE_VERSION=1.0.0-SNAPSHOT
ARG API_NAME=blogpost-api
ARG API_BUILD_DIR=/opt/usr/src
ARG APP_HOME_DIR=/var/www/app
ARG SPRING_PROFILES_ACTIVE=dev
ENV RELEASE_VERSION ${RELEASE_VERSION}
ENV API_FULL_NAME ${API_NAME}-${RELEASE_VERSION}
ENV API_BUILD_DIR ${API_BUILD_DIR}
ENV APP_HOME_DIR ${APP_HOME_DIR}
ENV SPRING_PROFILES_ACTIVE ${SPRING_PROFILES_ACTIVE}

EXPOSE 8080 8081

USER root

RUN mkdir -p ${APP_HOME_DIR} \
    && groupadd -g 10000 appuser \
    && useradd --home-dir ${APP_HOME_DIR} -u 10000 -g appuser appuser

COPY . ${API_BUILD_DIR}

RUN cd ${API_BUILD_DIR}/ \
    && mvn clean package -Pjar -Dapi_name=${API_NAME} -Drelease_version=${RELEASE_VERSION} \
    && cp ${API_BUILD_DIR}/target/${API_FULL_NAME}.jar ${APP_HOME_DIR}/ \
    && cp ${API_BUILD_DIR}/files/entrypoint ${APP_HOME_DIR}/ \
    && echo "java -jar -Dspring.profiles.active=${SPRING_PROFILES_ACTIVE} ${APP_HOME_DIR}/${API_FULL_NAME}.jar" > ${APP_HOME_DIR}/run \
    && chmod -R 0766 ${APP_HOME_DIR} \
    && chown -R appuser:appuser ${APP_HOME_DIR} \
    && chmod g+w /etc/passwd

WORKDIR ${APP_HOME_DIR}

USER appuser

ENTRYPOINT [ "./entrypoint" ]
CMD ["./run"]


Kubernetes Deployment, Service and Ingress

Our Deployment, Service, and Ingress files are much the same as in the starter article; we only updated the parameters passed to these templates. Let us take a look at each of these files:

Deployment

In our deployment, we create a Deployment named  __APP_NAME__-dc . The variable __APP_NAME__ is replaced with our parameter kubejencdp-mvn by the template processing script. We are also loading the secret named api-tls-secret into a volume. Currently this is for demonstration purposes only; we are not making use of it, to keep things simple, but the code shows how secrets are used. For more information about Kubernetes secrets, please follow this link.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: __APP_NAME__-dc
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: __APP_NAME__
        updateTimestamp: "__TIMESTAMP__"
    spec:
      containers:
      - name: __APP_NAME__-ctr
        image: >-
          __IMAGE__
        env:
          - name: DEPLOY_TIMESTAMP
            value: "__TIMESTAMP__"
        volumeMounts:
        - name: tls-secrets
          readOnly: true
          mountPath: "/var/www/app/tls"
        ports:
        - containerPort: 8080
          name: http
        imagePullPolicy: Always
      volumes:
      - name: tls-secrets
        secret:
          secretName: api-tls-secret


Service

Our service is pretty straightforward: it exposes port 8080, which is the port of the pod deployed by the deployment above.

apiVersion: v1
kind: Service
metadata:
  labels:
    app: __APP_NAME__
  name: __APP_NAME__-svc
spec:
  ports:
  - port: 8080
    targetPort: 8080
    name: http
  selector:
    app: __APP_NAME__
  sessionAffinity: None
  type: NodePort
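
Because the service is of type NodePort, a deployed pod can also be reached without going through the ingress. A quick check with kubectl, assuming __APP_NAME__ resolved to kubejencdp-mvn as in the samples above:

# forward a local port to the service, then curl http://localhost:8080
kubectl port-forward svc/kubejencdp-mvn-svc 8080:8080
# or inspect the node port the cluster allocated
kubectl get svc kubejencdp-mvn-svc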


Ingress

Our ingress exposes the HTTP port of our service outside the cluster. It references the secret shown below, which holds two values: a self-signed certificate and its key. We are not using them for now, to keep things simple, but the code demonstrates how such a secret is wired in:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: __APP_NAME__-ingress
  annotations:
    kubernetes.io/ingress.allow-http: "true"
spec:
  tls:
  - secretName: api-tls-secret
  backend:
    serviceName: __APP_NAME__-svc
    servicePort: http


Secret

Here are the contents of the secret file:

apiVersion: v1
data:
  tls.crt: "<<---base64 encoded ssl cert--->>"
  tls.key: "<<---base64 encoded cert key--->>"
kind: Secret
metadata:
  name: api-tls-secret
  namespace: default
type: Opaque
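
Rather than base64-encoding a certificate and key by hand, an equivalent secret can be generated with openssl and kubectl. This is a sketch with placeholder file names and subject; note that kubectl create secret tls produces a secret of type kubernetes.io/tls rather than Opaque, but the tls.crt and tls.key data keys are the same:

# self-signed certificate and key, purely for demonstration
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj "/CN=api.example.local" -keyout tls.key -out tls.crt
kubectl create secret tls api-tls-secret --cert=tls.crt --key=tls.key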


Jenkins Pipeline

Initialization

In our initialization stage, we take most of the parameters from the env_vars/application.properties file, as described above. The timestamp comes from the wrapper function below:

def getTimeStamp(){
    // yyyyMMddHHmmss plus milliseconds: %N prints nanoseconds and sed strips the last six digits
    return sh (script: "date +'%Y%m%d%H%M%S%N' | sed 's/[0-9][0-9][0-9][0-9][0-9][0-9]\$//g'", returnStdout: true);
}


And the following function reads values from the env_vars/application.properties file:

def getEnvVar(String paramName){
    // look the key up in the properties file and return everything after '='
    return sh (script: "grep '${paramName}' env_vars/application.properties|cut -d'=' -f2", returnStdout: true).trim();
}
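
One caveat worth knowing: grep does a substring match, so a parameter name that occurs inside another (PROJECT_NAME also matches the DOCKER_PROJECT_NAMESPACE line) can return more than one value. Anchoring the pattern to the start of the line avoids this; a possible hardened variant:

def getEnvVar(String paramName){
    // '^name=' matches only the exact key at the start of a line
    return sh (script: "grep '^${paramName}=' env_vars/application.properties|cut -d'=' -f2", returnStdout: true).trim();
}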


Here's our initialization stage:

    stage('Init'){
        steps{
            // checkout scm
            script{
                env.BASE_DIR = pwd()
                env.CURRENT_BRANCH = env.BRANCH_NAME
                env.IMAGE_TAG = getImageTag(env.CURRENT_BRANCH)
                env.TIMESTAMP = getTimeStamp();
                env.APP_NAME = getEnvVar('APP_NAME')
                env.IMAGE_NAME = getEnvVar('IMAGE_NAME')
                env.PROJECT_NAME = getEnvVar('PROJECT_NAME')
                env.DOCKER_REGISTRY_URL = getEnvVar('DOCKER_REGISTRY_URL')
                env.RELEASE_TAG = getEnvVar('RELEASE_TAG')
                env.DOCKER_PROJECT_NAMESPACE = getEnvVar('DOCKER_PROJECT_NAMESPACE')
                env.DOCKER_IMAGE_TAG = "${DOCKER_REGISTRY_URL}/${DOCKER_PROJECT_NAMESPACE}/${APP_NAME}:${RELEASE_TAG}"
                env.JENKINS_DOCKER_CREDENTIALS_ID = getEnvVar('JENKINS_DOCKER_CREDENTIALS_ID')
                env.JENKINS_GCLOUD_CRED_ID = getEnvVar('JENKINS_GCLOUD_CRED_ID')
                env.GCLOUD_PROJECT_ID = getEnvVar('GCLOUD_PROJECT_ID')
                env.GCLOUD_K8S_CLUSTER_NAME = getEnvVar('GCLOUD_K8S_CLUSTER_NAME')
                env.JENKINS_GCLOUD_CRED_LOCATION = getEnvVar('JENKINS_GCLOUD_CRED_LOCATION')
            }
        }
    }


Cleanup

Our cleanup script simply clears out any dangling or stale images:

    stage('Cleanup'){
        steps{
            sh '''
            # remove dangling (untagged) image layers; ignore errors if none exist
            docker rmi $(docker images -f 'dangling=true' -q) || true
            # remove older images, skipping the first two lines of `docker images` output; failures for in-use images are ignored
            docker rmi $(docker images | sed 1,2d | awk '{print $3}') || true
            '''
        }
    }


Build

Here we docker build our project. Notice that since we will push the image to Docker Hub, the tag contains DOCKER_REGISTRY_URL (registry.hub.docker.com) and my DOCKER_PROJECT_NAMESPACE (amitthk). You may want to update these values for your own Docker registry.

    stage('Build'){
        steps{
            withEnv(["APP_NAME=${APP_NAME}", "PROJECT_NAME=${PROJECT_NAME}"]){
                sh '''
                # API_NAME matches the ARG declared in the Dockerfile above
                docker build -t ${DOCKER_REGISTRY_URL}/${DOCKER_PROJECT_NAMESPACE}/${IMAGE_NAME}:${RELEASE_TAG} --build-arg API_NAME=${IMAGE_NAME} -f app/Dockerfile app/.
                '''
            }
        }
    }


Publish

To publish our image to the Docker registry, we use the Jenkins credentials identified by JENKINS_DOCKER_CREDENTIALS_ID. To understand how this is set up, please refer to the first article.

    stage('Publish'){
        steps{
            withCredentials([[$class: 'UsernamePasswordMultiBinding', credentialsId: "${JENKINS_DOCKER_CREDENTIALS_ID}", usernameVariable: 'DOCKER_USERNAME', passwordVariable: 'DOCKER_PASSWD']]){
                sh '''
                echo $DOCKER_PASSWD | docker login --username ${DOCKER_USERNAME} --password-stdin ${DOCKER_REGISTRY_URL}
                docker push ${DOCKER_REGISTRY_URL}/${DOCKER_PROJECT_NAMESPACE}/${IMAGE_NAME}:${RELEASE_TAG}
                docker logout
                '''
            }
        }
    }


Deploy

In our Deploy stage, we make use of the Jenkins secret file credential set up in the JENKINS_GCLOUD_CRED_ID variable. Again, to see how this variable is set up, please refer to the first article.

For deployment, we process the deployment, service, and ingress files mentioned above with a simple script named process_files.sh. It replaces the build/deployment variables we want to update, such as __APP_NAME__, __TIMESTAMP__, and __IMAGE__:

#!/bin/bash
# The MIT License
# SPDX short identifier: MIT
# Further resources on the MIT License
# Copyright 2018 Amit Thakur - amitthk - <e.amitthakur@gmail.com>
# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
# The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

if (($# <5))
  then
    echo "Usage : $0 <DOCKER_PROJECT_NAME> <APP_NAME> <IMAGE_TAG> <directory containing k8s files> <timestamp>"
    exit 1
fi

PROJECT_NAME=$1
APP_NAME=$2
IMAGE=$3
WORK_DIR=$4
TIMESTAMP=$5

main(){
# the quoted glob is passed to find itself instead of being expanded by the shell
find $WORK_DIR -name '*.yml' -type f -exec sed -i.bak1 's#__PROJECT_NAME__#'$PROJECT_NAME'#' {} \;
find $WORK_DIR -name '*.yml' -type f -exec sed -i.bak2 's#__APP_NAME__#'$APP_NAME'#' {} \;
find $WORK_DIR -name '*.yml' -type f -exec sed -i.bak3 's#__IMAGE__#'$IMAGE'#' {} \;
find $WORK_DIR -name '*.yml' -type f -exec sed -i.bak4 's#__TIMESTAMP__#'$TIMESTAMP'#' {} \;
}
main
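
Run standalone, the script takes its arguments in the order checked at the top. For example, with the sample values used in this article (the timestamp is illustrative):

./process_files.sh lateral-ceiling-220011 kubejencdp-mvn \
    amitthk/kubejencdp-mvn:1.0.0 ./kubejencdp-mvn/ 20190110103000123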


And here is our Deployment stage. We activate our gcloud credential, we process our templates using the process_files.sh script mentioned above, then we use kubectl to apply our processed templates. We watch our rollout using  kubectl rollout status  command:

    stage('Deploy'){
        steps{
        withCredentials([file(credentialsId: "${JENKINS_GCLOUD_CRED_ID}", variable: 'JENKINSGCLOUDCREDENTIAL')])
        {
        sh """
            gcloud auth activate-service-account --key-file=${JENKINSGCLOUDCREDENTIAL}
            gcloud config set compute/zone asia-southeast1-a
            gcloud config set compute/region asia-southeast1
            gcloud config set project ${GCLOUD_PROJECT_ID}
            gcloud container clusters get-credentials ${GCLOUD_K8S_CLUSTER_NAME}

            chmod +x $BASE_DIR/k8s/process_files.sh

            cd $BASE_DIR/k8s/
            ./process_files.sh "$GCLOUD_PROJECT_ID" "${IMAGE_NAME}" "${DOCKER_PROJECT_NAMESPACE}/${IMAGE_NAME}:${RELEASE_TAG}" "./${IMAGE_NAME}/" ${TIMESTAMP}

            cd $BASE_DIR/k8s/${IMAGE_NAME}/.
            kubectl apply --force=true --all=true --record=true -f $BASE_DIR/k8s/$IMAGE_NAME/
            kubectl rollout status --watch=true --v=8 -f $BASE_DIR/k8s/$IMAGE_NAME/$IMAGE_NAME-deployment.yml

            gcloud auth revoke --all
            """
        }
        }
    }
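
Once the stage finishes, the rollout can be spot-checked from any machine with cluster credentials. The resource names below assume APP_NAME resolved to kubejencdp-mvn, as in the samples above:

gcloud container clusters get-credentials pyfln-k8s-cluster-dev
# pods carry the app label set in the deployment template
kubectl get pods -l app=kubejencdp-mvn
# the ADDRESS column of the ingress is the public entry point
kubectl get ingress kubejencdp-mvn-ingress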


Conclusion

We completed the containerization, build, and deployment of a simple Java Maven application to Kubernetes, using the Kubernetes Engine on Google Cloud Platform.

References

  • First article in this series: Setting Up CI/CD Pipelines for Docker Kubernetes Project With Google Cloud Platform
  • Google Cloud Platform documentation
  • Google Kubernetes Engine quickstart
  • Kubernetes documentation

