Deploying a Node.js/Angular 5 Application to Kubernetes With Docker

In this post, we mesh cloud, web dev, and container technologies together in a DevOps way. It's pretty much everything you could ask for.

By Amit Thakur · Jan. 07, 19 · Tutorial

1. Introduction

This article is a continuation of my previous article, "Setting Up CI/CD Pipelines for a Docker Kubernetes Project (Hosted on Google Cloud Platform)."

In this article, we containerize (Docker) and deploy a Node.js/Angular 5 application to Kubernetes (Kubernetes Engine on Google Cloud Platform).

The sample project is hosted here on Google Cloud Platform Kubernetes Engine:

  • http://35.244.228.238/index.html

2. Code Usage

The code is hosted here:

  • https://github.com/amitthk/kubejencdp-npm.git

Using the code is much the same as described in the previous article: we set up an env_vars/application.properties file for the new project, then configure the multibranch pipeline in Jenkins as before.
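To get started, clone the repository and adjust the properties file before pointing Jenkins at it (a hedged sketch of the obvious steps):

git clone https://github.com/amitthk/kubejencdp-npm.git
cd kubejencdp-npm
# edit env_vars/application.properties to match your registry, namespace, and GCP project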

3. Understanding the Code

Setting Up env_vars:

In order to customize this pipeline for our project, we updated the env_vars/application.properties file according to our project. For more information on these values, please refer to the first article in this series here.

Parameter, function, and example value:

  • APP_NAME: The application name; used to build the image name in the Jenkinsfile. Example: kubejencdp
  • IMAGE_NAME: The image name we want our project to be published under on the Docker registry. Example: kubejencdp-npm
  • PROJECT_NAME: Name of the project. Example: amitthk
  • DOCKER_REGISTRY_URL: URL of the Docker registry; we are using Docker Hub here. Example: registry.hub.docker.com
  • RELEASE_TAG: Release tag for the Docker image; this can also be taken from the release branch name. Example: 1.0.0
  • DOCKER_PROJECT_NAMESPACE: Docker project namespace. My account on Docker Hub is amitthk, which is also my default namespace. Example: amitthk
  • JENKINS_DOCKER_CREDENTIALS_ID: The username/password credential added to Jenkins for logging in to the Docker registry (if you are using OpenShift, you may want to log in with $(oc whoami -t) for tokens). Example: JENKINS_DOCKER_CREDENTIALS_ID
  • JENKINS_GCLOUD_CRED_ID: The Google Cloud Platform service account key, added to Jenkins as a file credential; for more information, please refer to the first article. Example: JENKINS_GCLOUD_CRED_ID
  • JENKINS_GCLOUD_CRED_LOCATION: Unused (use this if you prefer not to add a file credential to Jenkins and instead store the service account key on the Jenkins node and access it directly from the agent). Example: /var/lib/jenkins/lateral-ceiling-220011-5c9f0bd7782f.json
  • GCLOUD_PROJECT_ID: The Google Cloud project ID. Example: lateral-ceiling-220011
  • GCLOUD_K8S_CLUSTER_NAME: Our cluster name on Google Cloud. Example: pyfln-k8s-cluster-dev
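Putting the example values together, a minimal env_vars/application.properties could look like this (all values are illustrative; substitute your own registry, namespace, credential IDs, and project):

APP_NAME=kubejencdp
IMAGE_NAME=kubejencdp-npm
PROJECT_NAME=amitthk
DOCKER_REGISTRY_URL=registry.hub.docker.com
RELEASE_TAG=1.0.0
DOCKER_PROJECT_NAMESPACE=amitthk
JENKINS_DOCKER_CREDENTIALS_ID=JENKINS_DOCKER_CREDENTIALS_ID
JENKINS_GCLOUD_CRED_ID=JENKINS_GCLOUD_CRED_ID
JENKINS_GCLOUD_CRED_LOCATION=/var/lib/jenkins/lateral-ceiling-220011-5c9f0bd7782f.json
GCLOUD_PROJECT_ID=lateral-ceiling-220011
GCLOUD_K8S_CLUSTER_NAME=pyfln-k8s-cluster-dev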


3.1 Dockerfile

Let us begin by understanding the Dockerfile we used to containerize the app. Briefly, this Dockerfile does the following:

  • We build the image on top of the centos/nodejs-8-centos7 image.
  • We take in some overridable arguments and set environment parameters from them.
  • We copy the source into the image under the $APP_BUILD_DIR directory.
  • In $APP_BUILD_DIR, we build the code with npm install, npm run ng build, etc., and move the built distribution to the $APP_BASE_DIR directory.
  • We also copy the Apache HTTPD configuration into the respective configuration directories /etc/httpd/conf and /etc/httpd/conf.d/, and set the permissions.
  • Since we host the application with Apache HTTPD, we set the permissions and run the application as the apache user.
  • The entrypoint is an overridable, simple pass-through script that calls the default command.
  • The default command runs HTTPD in the foreground with the httpd.conf configuration we copied to /etc/httpd/conf earlier.
FROM centos/nodejs-8-centos7

ARG APP_NAME=pyfln-ui
ARG APP_BASE_DIR=/var/www/html
ARG APP_BUILD_DIR=/opt/app-root/src/
ARG API_ENDPOINT=http://127.0.0.1:8000
ARG APACHE_LOG_DIR=/var/log/httpd
ENV APP_BUILD_DIR $APP_BUILD_DIR
ENV APP_BASE_DIR $APP_BASE_DIR
ENV APP_NAME ${APP_NAME}
ENV API_ENDPOINT ${API_ENDPOINT}
ENV APACHE_LOG_DIR ${APACHE_LOG_DIR}
ENV LD_LIBRARY_PATH /opt/rh/rh-nodejs8/root/usr/lib64
ENV PATH /opt/rh/rh-nodejs8/root/usr/bin:/opt/app-root/src/node_modules/.bin/:/opt/app-root/src/.npm-global/bin/:/opt/app-root/src/bin:/opt/app-root/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
ENV NPM_CONFIG_PREFIX /opt/app-root/src/.npm-global

EXPOSE 8080

USER root

COPY files ${APP_BUILD_DIR}/files


#RUN cp ${APP_BUILD_DIR}/files/pyfln.rep /etc/yum.repos.d/ \
#    && update-ca-trust force-enable

RUN yum install -y httpd httpd-tools

RUN cp ${APP_BUILD_DIR}/files/npm/npmrc ~/.npmrc \
    && cp ${APP_BUILD_DIR}/files/httpd/httpd.conf /etc/httpd/conf/ \
    && cp ${APP_BUILD_DIR}/files/httpd/default-site.conf /etc/httpd/conf.d/default-site.conf \
    && chown apache:apache /etc/httpd/conf/httpd.conf \
    && chmod 755 /etc/httpd/conf/httpd.conf \
    && chown -R apache:apache /etc/httpd/conf.d \
    && chmod -R 755 /etc/httpd/conf.d \
    && touch /etc/httpd/logs/error_log /etc/httpd/logs/access_log \
    && chmod -R 766 /etc/httpd/logs \
    && chown -R apache:apache /etc/httpd/logs \
    && touch ${APACHE_LOG_DIR}/error.log ${APACHE_LOG_DIR}/access_log \
    && chown -R apache:apache ${APACHE_LOG_DIR} \
    && chmod -R g+rwX ${APACHE_LOG_DIR} \
    && chown -R apache:apache /var/run/httpd \
    && chmod -R g+rwX /var/run/httpd

COPY . ${APP_BUILD_DIR}

RUN npm --max_old_space_size=8000 --registry https://registry.npmjs.org/ install -g npm@6.4.1 --loglevel=verbose \
    && npm --max_old_space_size=8000 --registry https://registry.npmjs.org/ install -g @angular/cli@1.6.8 --loglevel=verbose

RUN cd ${APP_BUILD_DIR} \
    && npm --max_old_space_size=8000 --registry https://registry.npmjs.org/ install --no-optional --loglevel=verbose \
    && npm --max_old_space_size=8000 --registry https://registry.npmjs.org/ run ng build --prod --env=prod --aot --verbose --show-circular-dependencies  \
    && mkdir -p ${APP_BASE_DIR} \
    && cp -r ${APP_BUILD_DIR}/dist/. ${APP_BASE_DIR}/ \
    && cp ${APP_BUILD_DIR}/files/entrypoint.sh ${APP_BASE_DIR}/ \
    && chmod -R 0755 $APP_BASE_DIR/ \
    && chown -R apache:apache $APP_BASE_DIR/

WORKDIR $APP_BASE_DIR
USER apache
ENTRYPOINT ["./entrypoint.sh"]
CMD ["/usr/sbin/httpd","-f","/etc/httpd/conf/httpd.conf","-D","FOREGROUND"]

3.2. Kubernetes Deployment, Service, and Ingress

Our deployment, service, and ingress files are pretty much the same as in the starter article; we only updated the parameters we pass to these templates. Let us take a look at the code for these files:

1. Deployment

In our deployment, we create a Deployment named __APP_NAME__-dc. The __APP_NAME__ variable is replaced with our parameter (kubejencdp-npm) by the template processing script.

We deploy one replica of the container image (the __IMAGE__ variable is likewise replaced with the image name by the template processing script). We also pass a __TIMESTAMP__ variable, which our pipeline replaces with the timestamp of the deployment. Because this changes the pod template on every run, Kubernetes rolls out fresh pods and, with imagePullPolicy: Always, pulls the latest image even if we apply an otherwise identical deployment. You can find more information about this trick in this discussion on GitHub. We expose port 8080, as exposed by the container.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: __APP_NAME__-dc
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: __APP_NAME__
        updateTimestamp: "__TIMESTAMP__"
    spec:
      containers:
      - name: __APP_NAME__-ctr
        image: >-
          __IMAGE__
        ports:
        - name: http-port
          containerPort: 8080
        env:
          - name: API_ENDPOINT
            value: "http://__APP_NAME__-api:8080/"
          - name: DEPLOY_TIMESTAMP
            value: "__TIMESTAMP__"
        imagePullPolicy: Always

2. Service

Our service is pretty straightforward. It exposes port 8080, the port of the pods created by the deployment above. The NodePort type matters here: it is what lets the GKE ingress below route external traffic to the service through node ports.

apiVersion: v1
kind: Service
metadata:
  labels:
    app: __APP_NAME__
  name: __APP_NAME__-svc
spec:
  ports:
    - name: http-port
      port: 8080
      protocol: TCP
      targetPort: 8080
  selector:
    app: __APP_NAME__
  sessionAffinity: None
  type: NodePort

3. Ingress

Our ingress exposes the http-port of our service outside the cluster:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  labels:
    app: __APP_NAME__
  name: __APP_NAME__-ingress
spec:
  backend:
    serviceName: __APP_NAME__-svc
    servicePort: http-port
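Once applied on GKE, the ingress is provisioned with an external IP (this is how the demo URL at the top of this article is served). A quick way to find it:

kubectl get ingress
# the ADDRESS column shows the external IP, e.g. 35.244.228.238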

3.3 Jenkins Pipeline

1. Initialization

In our initialization stage, we read most of the parameters from the env_vars/application.properties file described above. The timestamp is taken from the wrapper function below:

def getTimeStamp(){
    // %N appends nanoseconds; the sed strips the last six digits, leaving
    // yyyyMMddHHmmss plus milliseconds. The $ is escaped for Groovy.
    return sh (script: "date +'%Y%m%d%H%M%S%N' | sed 's/[0-9][0-9][0-9][0-9][0-9][0-9]\$//g'", returnStdout: true);
}
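Run directly in a shell (where the $ needs no escaping), the same pipeline produces a value like this (output illustrative):

$ date +'%Y%m%d%H%M%S%N' | sed 's/[0-9][0-9][0-9][0-9][0-9][0-9]$//g'
20190107093215123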

And the following function reads the values from the env_vars/application.properties file:

def getEnvVar(String paramName){
    // Looks up a single key in env_vars/application.properties.
    return sh (script: "grep '${paramName}' env_vars/application.properties|cut -d'=' -f2", returnStdout: true).trim();
}

Here's our initialization stage:

    stage('Init'){
        steps{
            //checkout scm;
            script{
                env.BASE_DIR = pwd()
                env.CURRENT_BRANCH = env.BRANCH_NAME
                env.IMAGE_TAG = getImageTag(env.CURRENT_BRANCH)
                env.TIMESTAMP = getTimeStamp();
                env.APP_NAME = getEnvVar('APP_NAME')
                env.IMAGE_NAME = getEnvVar('IMAGE_NAME')
                ...
                env.GCLOUD_K8S_CLUSTER_NAME = getEnvVar('GCLOUD_K8S_CLUSTER_NAME')
                env.JENKINS_GCLOUD_CRED_LOCATION = getEnvVar('JENKINS_GCLOUD_CRED_LOCATION')
            }
        }
    }
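The Init stage also calls a getImageTag helper that is not shown in the snippet above. Its exact logic lives in the repository; as a rough, hypothetical sketch, it could derive a Docker tag from the branch name:

// Hypothetical sketch only; the repository's actual helper may differ.
def getImageTag(String branchName){
    // e.g. release/1.0.0 -> 1.0.0; any other branch -> the branch name itself
    if (branchName != null && branchName.startsWith('release/')) {
        return branchName.substring('release/'.length());
    }
    return branchName;
}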

2. Cleanup

Our cleanup script simply clears out any dangling or stale images; the '|| true' at the end of each command keeps the stage from failing when there is nothing to remove.

    stage('Cleanup'){
        steps{
            sh '''
            docker rmi $(docker images -f 'dangling=true' -q) || true
            docker rmi $(docker images | sed 1,2d | awk '{print $3}') || true
            '''
        }

    }

3. Build

Here we build our Docker project. Please note that since we push the image to Docker Hub, the tag contains DOCKER_REGISTRY_URL, which is registry.hub.docker.com, and my DOCKER_PROJECT_NAMESPACE is amitthk. Update these values to match your own Docker registry.

    stage('Build'){
        steps{
            withEnv(["APP_NAME=${APP_NAME}", "PROJECT_NAME=${PROJECT_NAME}"]){
                sh '''
                docker build -t ${DOCKER_REGISTRY_URL}/${DOCKER_PROJECT_NAMESPACE}/${IMAGE_NAME}:${RELEASE_TAG} --build-arg APP_NAME=${IMAGE_NAME}  -f app/Dockerfile app/.
                '''
            }   
        }
    }
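With the example values from the parameter list above, the Build stage effectively runs the following (values are illustrative):

docker build -t registry.hub.docker.com/amitthk/kubejencdp-npm:1.0.0 \
    --build-arg APP_NAME=kubejencdp-npm -f app/Dockerfile app/.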

4. Publish

In order to publish our image to the Docker registry, we use the Jenkins username/password credential referenced by the JENKINS_DOCKER_CREDENTIALS_ID variable. To understand how this is set up, please refer to the first article.

    stage('Publish'){
        steps{
            withCredentials([[$class: 'UsernamePasswordMultiBinding', credentialsId: "${JENKINS_DOCKER_CREDENTIALS_ID}", usernameVariable: 'DOCKER_USERNAME', passwordVariable: 'DOCKER_PASSWD']])
            {
            sh '''
            echo $DOCKER_PASSWD | docker login --username ${DOCKER_USERNAME} --password-stdin ${DOCKER_REGISTRY_URL} 
            docker push ${DOCKER_REGISTRY_URL}/${DOCKER_PROJECT_NAMESPACE}/${IMAGE_NAME}:${RELEASE_TAG}
            docker logout
            '''
            }
        }
    }

5. Deploy

In our Deploy stage, we use the Jenkins secret file credential referenced by the JENKINS_GCLOUD_CRED_ID variable. Again, to check how this variable is set up, please refer to the first article.

For deployment, we process our deployment, service, and ingress files mentioned above using a simple script named process_files.sh. This script replaces the build/deployment variables like __APP_NAME__, __TIMESTAMP__, __IMAGE__, etc. that we want to update our deployment/service/ingress with:

#!/bin/bash
# Substitute the template variables in all .yml files under the given directory.
if (($# < 5))
  then
    echo "Usage : $0 <DOCKER_PROJECT_NAME> <APP_NAME> <IMAGE_TAG> <directory containing k8s files> <timestamp>"
    exit 1
fi

PROJECT_NAME=$1
APP_NAME=$2
IMAGE=$3
WORK_DIR=$4
TIMESTAMP=$5

main(){
# The patterns are quoted so the shell does not expand them before find runs.
find $WORK_DIR -name '*.yml' -type f -exec sed -i.bak1 's#__PROJECT_NAME__#'$PROJECT_NAME'#' {} \;
find $WORK_DIR -name '*.yml' -type f -exec sed -i.bak2 's#__APP_NAME__#'$APP_NAME'#' {} \;
find $WORK_DIR -name '*.yml' -type f -exec sed -i.bak3 's#__IMAGE__#'$IMAGE'#' {} \;
find $WORK_DIR -name '*.yml' -type f -exec sed -i.bak4 's#__TIMESTAMP__#'$TIMESTAMP'#' {} \;
}
main
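For reference, the Deploy stage below invokes the script like this (values substituted with the examples used throughout this article; the timestamp is illustrative):

./process_files.sh lateral-ceiling-220011 kubejencdp-npm \
    amitthk/kubejencdp-npm:1.0.0 ./kubejencdp-npm/ 20190107093215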

And here is our Deploy stage. We activate the gcloud service account credential, process our templates using the process_files.sh script mentioned above, then use kubectl to apply the processed templates. Finally, we watch the rollout with the kubectl rollout status command:

    stage('Deploy'){
        steps{
        withCredentials([file(credentialsId: "${JENKINS_GCLOUD_CRED_ID}", variable: 'JENKINSGCLOUDCREDENTIAL')])
        {
        sh """
            gcloud auth activate-service-account --key-file=${JENKINSGCLOUDCREDENTIAL}
            gcloud config set compute/zone asia-southeast1-a
            gcloud config set compute/region asia-southeast1
            gcloud config set project ${GCLOUD_PROJECT_ID}
            gcloud container clusters get-credentials ${GCLOUD_K8S_CLUSTER_NAME}

            chmod +x $BASE_DIR/k8s/process_files.sh

            cd $BASE_DIR/k8s/
            ./process_files.sh "$GCLOUD_PROJECT_ID" "${IMAGE_NAME}" "${DOCKER_PROJECT_NAMESPACE}/${IMAGE_NAME}:${RELEASE_TAG}" "./${IMAGE_NAME}/" ${TIMESTAMP}

            cd $BASE_DIR/k8s/${IMAGE_NAME}/.
            kubectl apply --force=true --all=true --record=true -f $BASE_DIR/k8s/$IMAGE_NAME/
            kubectl rollout status --watch=true --v=8 -f $BASE_DIR/k8s/$IMAGE_NAME/$IMAGE_NAME-deployment.yml

            gcloud auth revoke --all
            """
        }
        }
    }

4. Conclusion

We completed the containerization, build, and deployment of a simple Node.js/Angular 5 application to Kubernetes, using the Kubernetes Engine on Google Cloud Platform.

5. References

  • First article in this series: Setting Up CI/CD Pipelines for Docker Kubernetes Project With Google Cloud Platform
  • Google Cloud Platform documentation
  • Google Kubernetes Engine quickstart
  • Kubernetes documentation