How To Use Docker Volume To Build Angular Application With Jenkins Pipeline

In this article, explore how to use Docker Volume to build an Angular application using a Jenkins pipeline, with examples.

By Unmesh Vinchurkar · Aug. 11, 23 · Code Snippet

In Docker, a volume is a durable storage location that exists independently of any container. Volumes are invaluable for preserving data that should outlive a container's lifecycle: the data survives even if the container is stopped or deleted. A volume is created either explicitly (with docker volume create) or automatically when a container that mounts it starts, and processes running inside the container can read and modify its contents.
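
As a quick, hedged illustration (the volume name my-data and the file written here are made up for the example), the following commands create a named volume, write a file into it from a throwaway container, and read the file back after that container is gone:

Shell
# Create a named volume
docker volume create my-data

# Write a file into the volume from a short-lived container
docker run --rm -v my-data:/data alpine sh -c 'echo "hello" > /data/greeting.txt'

# The writing container is gone, but the data persists in the volume
docker run --rm -v my-data:/data alpine cat /data/greeting.txt

# Remove the volume once the data is no longer needed
docker volume rm my-data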

Utilizing Volumes in a Docker Container Offers Several Compelling Advantages:

  1. Data Persistence: When you have critical data that must endure beyond a container's lifecycle, volumes provide the ideal solution. Storing items like database files or application logs in a volume ensures their preservation even if the container is halted or deleted.
  2. Sharing Data Among Containers: Volumes facilitate seamless data sharing among multiple containers. By mounting the same volume, different containers can access the same data, which is convenient for shared configuration files or data used by several containers (see the sketch after this list).
  3. Streamlining Data Management: Volumes contribute to efficient data management by decoupling data from the container itself. For instance, you can employ a volume to store data generated by the container and then easily access that data by mounting the volume on a host system, simplifying data handling and manipulation.
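
As a rough illustration of points 2 and 3 (the volume name shared-config and the file names are hypothetical), two independent containers can read and write the same named volume, and its contents can be copied out to the host through a bind mount:

Shell
# A shared volume for configuration used by more than one container
docker volume create shared-config

# One container writes a config file into the shared volume
docker run --rm -v shared-config:/config alpine sh -c 'echo "log_level=debug" > /config/app.conf'

# A second, independent container reads the same file
docker run --rm -v shared-config:/config alpine cat /config/app.conf

# Copy the volume's contents out to the host for inspection
docker run --rm -v shared-config:/config -v "$(pwd)":/host alpine cp -r /config /host/config-backup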

In Jenkins pipelines, Docker volumes offer a convenient way to build Angular projects without installing Node.js or the Angular build tooling on the Jenkins node machine.

The concept behind this approach involves the following steps:

  1. Copy Angular application source code into a Docker volume.
  2. Utilize a Docker image pre-installed with Node.js to create a new Docker container.
  3. Attach the Docker volume containing the source code to this new container.

Once the Docker container is set up, it builds and compiles the Angular application. When the job completes, the pipeline archives the compiled application as a zip file. This approach enables seamless builds and deployments without cluttering the Jenkins node machine with Angular dependencies.
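
Before looking at the full pipeline, here is a rough docker CLI sketch of the same flow (the paths and volume name mirror those used in the pipeline below, but the commands themselves are illustrative rather than taken from the pipeline):

Shell
# 1. Clone the sources and copy them into a named volume
git clone https://github.com/gothinkster/angular-realworld-example-app.git
docker volume create src-volume
docker run --rm -v src-volume:/tmp/src -v "$(pwd)/angular-realworld-example-app":/ws alpine cp -arf /ws/. /tmp/src

# 2. and 3. Start a Node.js container with the volume attached and build inside it
docker run --rm -v src-volume:/home/node/angular-realworld-example-app \
  -w /home/node/angular-realworld-example-app \
  node:18-alpine sh -c 'yarn install && NODE_OPTIONS=--openssl-legacy-provider yarn build'

The Jenkins pipeline below does the same thing declaratively, additionally reusing the node_modules and dist directories through dedicated volumes.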

The Following Code Implements the Ideas Mentioned Above

Groovy
 
pipeline {

  agent { label 'AngularApp' }

  options {
    timestamps()
  }

  environment {
    SRC_PATH = '/home/node/angular-realworld-example-app'
  }

  stages {

    stage('Preparation') {
      agent { label 'AngularApp' }
      steps {
        script {
          // Pull the Angular app code, clean the workspace, and
          // create the volumes to be used across all stages.
          sh(
            label: 'Pre-create all the docker volumes',
            script: '''
            rm -rf ./*
            git clone https://github.com/gothinkster/angular-realworld-example-app.git
            ls -l
            docker volume create --name=src-volume
            docker volume create --name=node_modules-volume
            docker volume create --name=dist-prod-volume
            docker container prune -f || true
            '''
          )
          /*
          Copy the source code in the job's workspace to the docker volume.
          '/ws' in the docker container maps to the current workspace ${WORKSPACE}.
          */
          withDockerContainer(
            image: 'alpine',
            args: '-u 0 -v src-volume:/tmp/src -v ${WORKSPACE}:/ws'
          ) {
            sh(
              label: 'Copy source files into src-volume',
              script: '''
              # Go to the source code directory
              cd /ws/angular-realworld-example-app
              # Copy the code to the docker volume
              cp -arf . /tmp/src
              # Print the contents on the console
              ls -l
              '''
            )
          }
        }
      }
    }

    stage('Install Dependencies') {
      agent {
        docker {
          image 'node:18-alpine'
          reuseNode true
          args '-u 0 -v src-volume:${SRC_PATH} -v node_modules-volume:${SRC_PATH}/node_modules'
        }
      }
      steps {
        /*
        Install the dependencies defined in package.json by calling "yarn install".
        Here the docker volume "src-volume" is mapped to the path ${SRC_PATH}
        inside the container.
        */
        sh(
          label: 'Install dependencies',
          script: '''
          cd ${SRC_PATH}
          npm config set registry "http://registry.npmjs.org/"
          yarn install
          '''
        )
      }
    }

    stage('Build') {
      agent {
        /*
        Use the root user (-u 0) to access the docker container; this is
        needed to write to the volumes.
        dist-prod-volume exposes the build artifacts outside of this stage.
        node_modules-volume reuses the cached dependencies.
        "reuseNode true" is necessary to run the stage on the same node used
        at the beginning of the script, as docker volumes cannot work across nodes.
        */
        docker {
          image 'node:18-alpine'
          reuseNode true
          args '-u 0 -v src-volume:${SRC_PATH} -v dist-prod-volume:${SRC_PATH}/dist/ -v node_modules-volume:${SRC_PATH}/node_modules'
        }
      }
      steps {
        sh(
          label: 'Clean and rebuild application distribution files',
          script: '''
          cd ${SRC_PATH}
          # Clean old files under the dist directory
          yarn rimraf dist/*
          export NODE_OPTIONS=--openssl-legacy-provider
          # Build the Angular application using yarn
          yarn build
          '''
        )
      }
      post {
        success {
          // The Jenkins job cannot access files in the docker container/volumes
          // directly, so copy them into the workspace before archiving the artifacts.
          sh(
            label: 'Copy artifacts from docker container to workspace',
            script: '''
            mkdir -p $WORKSPACE/artifacts/AngularApp
            cp -ar ${SRC_PATH}/dist/. $WORKSPACE/artifacts/AngularApp
            '''
          )
          zip archive: true, dir: 'artifacts/AngularApp', zipFile: 'AngularApp.zip', overwrite: true
          archiveArtifacts artifacts: 'AngularApp.zip'
        }
        cleanup {
          // Wipe out this workspace after the job completes (successful or not)
          cleanWs()
        }
      }
    }
  }

  post {
    cleanup {
      cleanWs()
      // Clean up docker resources before finishing the job
      sh(
        label: 'Clean dangling docker artifacts',
        script: '''
        docker volume rm src-volume || true
        docker volume rm node_modules-volume || true
        docker volume rm dist-prod-volume || true
        docker container prune -f
        docker image prune -f
        '''
      )
    }
  }
}
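
If you need to inspect what ended up in one of the named volumes while debugging (before the post/cleanup block removes them), one simple approach is to mount the volume into a throwaway container:

Shell
# List the build output stored in dist-prod-volume
docker run --rm -v dist-prod-volume:/data alpine ls -lR /data

# Show the volume's metadata (driver and mount point on the host)
docker volume inspect dist-prod-volume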

