
Java CI/CD: From Continuous Integration to Release Management

This post is part of a series that demonstrates a sample deployment pipeline with Jenkins, Docker, and Octopus.

By Matthew Casperson · Oct. 28, 2020 · Tutorial


This post is part of a series that demonstrates a sample deployment pipeline with Jenkins, Docker, and Octopus:

  • From JAR to Docker
  • From local builds to Continuous Integration
  • From Continuous Integration to Kubernetes deployments
  • From Continuous Integration to release management
  • From release management to operations


In the previous blog post, we used Octopus to build a Kubernetes cluster in AWS using EKS, and then deployed the Docker image created by Jenkins as a Kubernetes deployment and service.

However, we still don’t have a complete deployment pipeline solution, as Jenkins is not integrated with Octopus, leaving us to manually coordinate builds and deployments.

In this blog post, we’ll extend our Jenkins build to call Octopus and initiate a deployment when our Docker image has been pushed to Docker Hub. We will also create additional environments, and manage the release from a local development environment to the final production environment.

Install the Jenkins Plugins

Octopus provides a plugin for Jenkins that exposes integration steps in both freestyle projects and pipeline scripts. This plugin is installed by navigating to Manage Jenkins ➜ Manage Plugins. From here you can search for "Octopus" and install the plugin.

The Octopus plugin uses the Octopus CLI to integrate with the Octopus Server. We can install the CLI manually on the agent, but for this example, we’ll use the Custom Tools plugin to download the Octopus CLI and push it to the agent:

Install the Custom Tools plugin.

Configure the Octopus Server and Tools

We add the Octopus Server that our pipeline will connect to by navigating to Manage Jenkins ➜ Configure System:

Define the Octopus Server.

We then need to define a custom tool under Manage Jenkins ➜ Global Tool Configuration. The custom tool has the name OctoCLI, and because in my case the agent is running on Windows, the Octopus CLI will be downloaded from https://download.octopusdeploy.com/octopus-tools/7.4.1/OctopusTools.7.4.1.win-x64.zip. For the latest version of the CLI, and binaries supporting other operating systems, see the Octopus download page:

Define the Octopus CLI custom tool.

Further down on the Global Tool Configuration page we define the path to the Octopus CLI. The Custom Tools plugin installs the Octopus CLI to the directory <jenkins home>/tools/com.cloudbees.jenkins.plugins.customtools.CustomTool/OctoCLI, where <jenkins home> is the home directory of the Jenkins server or the agent performing the build. In my case, the agent home directory is C:\JenkinsAgent, so the Octopus CLI will be available from C:\JenkinsAgent\tools\com.cloudbees.jenkins.plugins.customtools.CustomTool\OctoCLI\octo. The name of the tool is left as Default:

Define the Octopus CLI path.
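If you want to sanity check the installation before touching the main pipeline, a throwaway job can ask the Custom Tools plugin to extract the CLI onto the agent and then print its version. This is only a sketch: it assumes a Windows agent with the C:\JenkinsAgent home directory and the default custom tools layout described above.

Groovy

// Minimal sanity check; the hard-coded path is an assumption based on the agent home directory above.
pipeline {
    agent {
        label 'docker'
    }
    stages {
        stage('Check Octopus CLI') {
            steps {
                // Ask the Custom Tools plugin to extract the CLI onto this agent
                tool('OctoCLI')
                // Print the CLI version to confirm the executable is where we expect it
                bat 'C:\\JenkinsAgent\\tools\\com.cloudbees.jenkins.plugins.customtools.CustomTool\\OctoCLI\\octo version'
            }
        }
    }
}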

With these tools configured we can update the pipeline script to initiate a deployment in Octopus after the Docker image has been pushed to Docker Hub.

Update the Jenkins Pipeline

Our existing pipeline was configured to build and push the Docker image to Docker Hub. We will retain those steps, and add additional steps to install the Octopus CLI as a custom tool and then create and deploy a release in Octopus after the Docker image has been pushed. Let’s look at the complete pipeline:

Groovy

pipeline {
    agent {
        label 'docker'
    }
    parameters {
        string(defaultValue: 'Spaces-1', description: '', name: 'SpaceId', trim: true)
        string(defaultValue: 'Petclinic', description: '', name: 'ProjectName', trim: true)
        string(defaultValue: 'Dev', description: '', name: 'EnvironmentName', trim: true)
        string(defaultValue: 'Octopus', description: '', name: 'ServerId', trim: true)
    }
    stages {
        stage('Add tools') {
            steps {
                tool('OctoCLI')
            }
        }
        stage('Building our image') {
            steps {
                script {
                    dockerImage = docker.build "mcasperson/petclinic:$BUILD_NUMBER"
                }
            }
        }
        stage('Deploy our image') {
            steps {
                script {
                    // Assume the Docker Hub registry by passing an empty string as the first parameter
                    docker.withRegistry('', 'dockerhub') {
                        dockerImage.push()
                    }
                }
            }
        }
        stage('deploy') {
            steps {
                octopusCreateRelease deployThisRelease: true, environment: "${EnvironmentName}", project: "${ProjectName}", releaseVersion: "1.0.${BUILD_NUMBER}", serverId: "${ServerId}", spaceId: "${SpaceId}", toolId: 'Default', waitForDeployment: true
            }
        }
    }
}



This pipeline has some new settings to support integration with Octopus.

We start by defining common parameters. These parameters will be referenced when we create and deploy a release in Octopus, and they provide a nice way to decouple the Octopus details from any specific instance, while also providing sensible default values:

Groovy

parameters {
    string(defaultValue: 'Spaces-1', description: '', name: 'SpaceId', trim: true)
    string(defaultValue: 'Petclinic', description: '', name: 'ProjectName', trim: true)
    string(defaultValue: 'Dev', description: '', name: 'EnvironmentName', trim: true)
    string(defaultValue: 'Octopus', description: '', name: 'ServerId', trim: true)
}
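Because these are ordinary pipeline parameters, they can be overridden per run, either through Build with Parameters in the Jenkins UI or from another pipeline. As a minimal sketch (the upstream job name petclinic and the overridden space ID are hypothetical), a different job could trigger this build against another Octopus space without changing the pipeline itself:

Groovy

// Hypothetical upstream trigger; the job name and parameter values are assumptions.
build job: 'petclinic', parameters: [
    string(name: 'SpaceId', value: 'Spaces-2'),
    string(name: 'EnvironmentName', value: 'Dev'),
    string(name: 'ServerId', value: 'Octopus')
]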



In order for the Custom Tools plugin to extract the Octopus CLI into the agent's home directory, we need to call tool('OctoCLI'):

Groovy

stage('Add tools') {
    steps {
        tool('OctoCLI')
    }
}



The final stage makes a call to octopusCreateRelease to both create a release and deploy it to the first environment in Octopus. By default, Octopus will create the deployment with the latest version of the packages referenced in the deployment steps, which means that we will deploy the Docker image that Jenkins uploaded to Docker Hub in the previous stage:

Groovy

stage('deploy') {
    steps {
        octopusCreateRelease deployThisRelease: true, environment: "${EnvironmentName}", project: "${ProjectName}", releaseVersion: "1.0.${BUILD_NUMBER}", serverId: "${ServerId}", spaceId: "${SpaceId}", toolId: 'Default', waitForDeployment: true
    }
}
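For reference, the plugin step wraps the Octopus CLI that we installed as a custom tool, so a roughly equivalent stage could call octo create-release directly. Treat this as a sketch only: the server URL (https://my.octopus.app) and the Jenkins credential ID (octopus-api-key) are assumptions, the path handling assumes a Windows agent, and it assumes the tool step returns the custom tool's installation directory.

Groovy

// Sketch of the equivalent direct CLI call; the server URL and credential ID are assumptions.
stage('deploy via CLI') {
    steps {
        withCredentials([string(credentialsId: 'octopus-api-key', variable: 'OCTOPUS_API_KEY')]) {
            script {
                // tool() should return the custom tool's home directory (an assumption; the
                // hard-coded agent path shown earlier also works)
                def octoHome = tool('OctoCLI')
                // %OCTOPUS_API_KEY% is expanded by the Windows shell so the secret is not
                // interpolated into the Groovy string
                bat "\"${octoHome}\\octo\" create-release --project ${ProjectName} --version 1.0.${BUILD_NUMBER} --deployTo ${EnvironmentName} --waitForDeployment --space ${SpaceId} --server https://my.octopus.app --apiKey %OCTOPUS_API_KEY%"
            }
        }
    }
}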



With these changes to the pipeline, we rerun the project in Jenkins, and from the console logs we can see that Jenkins has successfully triggered a deployment in Octopus:

Jenkins project build logs showing the Octopus deployment output.

Here is the corresponding deployment in Octopus:

The Octopus deployment.

Continuous Deployment vs Continuous Delivery

Over the years the CD half of the acronym CI/CD has settled on two definitions:

  • Continuous Deployment, which means a completely automatic deployment pipeline where each commit goes to production, assuming all tests and other automated requirements are met.
  • Continuous Delivery, which means each commit could go to production through an automated, but not necessarily automatic, deployment pipeline. The decision to promote through environments (or not) is still made by a human.

While continuous deployment, by its very definition, removes all the friction from a deployment process, there are many valid reasons to implement continuous delivery instead. For example, you may need to orchestrate deployments with other teams, product owners may need to sign off on new features, regulatory requirements may demand that production infrastructure not be modified by developers without a review process, or you may simply want to retain the ability to manually test and verify a release before it goes to production.

If you read blog posts on CI/CD best practices, you may be left with the impression that continuous deployment is something you must strive to implement. While the practices that enable a true continuous deployment pipeline are valuable, most of the development teams we talk to report that continuous delivery works for them.
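If you do want the fully automatic variant, the promotion itself can also live in the Jenkins pipeline. As a rough sketch only, assuming the Octopus plugin's octopusDeployRelease step and a Test environment like the one we create below, a follow-up stage could promote the release as soon as the Dev deployment succeeds:

Groovy

// Continuous deployment sketch: assumes the plugin's octopusDeployRelease step
// and that a 'Test' environment already exists in Octopus.
stage('promote to Test') {
    steps {
        octopusDeployRelease environment: 'Test', project: "${ProjectName}", releaseVersion: "1.0.${BUILD_NUMBER}", serverId: "${ServerId}", spaceId: "${SpaceId}", toolId: 'Default', waitForDeployment: true
    }
}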

For this blog, we will create a continuous delivery pipeline, which manages releases to multiple environments through the Octopus dashboard.

Add the Environments

So far we only have one environment in Octopus, called Dev. However, a typical workflow promotes a deployment through multiple environments on the way to production. To implement this, we need to create more environments in Octopus, which we will call Test and Prod:

Add the Test and Prod environments.

We need to ensure our Kubernetes target is placed within these new environments as well:

Add the Kubernetes target to the new environments.

We now have the ability to promote releases from the Dev environment to the Test environment through the Octopus dashboard:

The Octopus dashboard showing the next environment to deploy to.

Promoting the release to the Test environment creates our Kubernetes resources in the petclinic-test namespace. If you recall from the previous blog post, we configured our Kubernetes steps to deploy to a namespace called petclinic-#{Octopus.Environment.Name | ToLower}, which is why each environment gets its own namespace (Dev deploys to petclinic-dev, Test to petclinic-test):

A deployment to the Test environment.

To prove this, we can rerun the runbook Get Service in the Test environment. We can see that a new load balancer host name has been created for the new service resource:

The details of the load balancer service created in the Test environment.

And with that, we have a complete deployment pipeline.

Conclusion

In this post, we triggered a deployment in Octopus after Jenkins finished building and pushing the Docker image. This means we have implemented continuous integration with Jenkins testing, building, and publishing the Docker image, and continuous delivery with Octopus providing automatic deployment to a development environment, with an automated process ready to be manually triggered in other environments.

We now have the ability to promote a change from the application source code to production with a few simple button clicks. Those responsible for release management need no special tools other than a web browser. Each build and deployment is tracked, audited, and summarized in the Jenkins and Octopus dashboards.

But those who have seen their code put into customers' hands know that while nothing inspires more confidence than the first 10 minutes of a production deployment, it is the following hours and days that are hard. Database backups need to be managed, operating system updates need to be scheduled, logs need to be collected to diagnose support issues, and some good, old-fashioned turning-it-off-and-on-again will need to be performed.

In the next blog post, we’ll show examples of these maintenance processes implemented in runbooks to complete the final stage of our pipeline: operations.


Published at DZone with permission of Matthew Casperson, DZone MVB. See the original article here.

Opinions expressed by DZone contributors are their own.
