
Immutable Infrastructure CI/CD Using Hashicorp Terraform and Jenkins

This extensive article should leave few questions unanswered about creating your infrastructure.

By Radhakrishnan Rk · Jan. 07, 19 · Tutorial · 95.5K Views

Infrastructure-as-Code has gained much popularity because it is easy to implement and lets you build a clean infrastructure with a declarative programming model. This article covers the various approaches to building and maintaining your infrastructure with Terraform and a Jenkins server.

Introduction

DevOps methodologies and practices have transformed the complexities of IT infrastructure management into code that manages the entire IT infrastructure with little maintenance. There are many configuration management and orchestration tools available for tailoring our IT infrastructure-as-code, but selecting the right one depends on numerous factors, such as the pros and cons of each tool and how well it fits our use case. An ideal tool has no vendor lock-in, clear official documentation, good community support, and easy integration with the platform, and is agnostic to different cloud providers and third-party software.

Common Scenarios in IT Infrastructure Management

Provisioning and de-provisioning resources in a cloud environment is a common practice for testing and releasing a software product without any bugs. In conjunction with continuous integration and deployment tools, we may need to use both orchestration tools and configuration management tools. In any cloud environment, orchestration tools such as Terraform, CloudFormation, Heat, and AzureRM are responsible for provisioning infrastructure, while configuration management tools such as Chef, Ansible, Puppet, and Saltstack take care of installing software packages on the server, configuring services, and deploying applications on top of it. But today, configuration management tools have to support provisioning resources in the cloud to some extent, and provisioning tools must support installing and configuring software on a newly created resource. This balances the complexity of provisioning and managing infrastructure. On the other hand, it is difficult to achieve everything with a single tool; the recommended way is to use both provisioning and configuration tools for managing infrastructure at scale.

Why Do We Need an Immutable Infrastructure?

Even when we manage an infrastructure with configuration management tools, there is a chance of configuration drift in the servers if frequent configuration changes are applied to them. To avoid this situation, we should not change the configuration of a running server, either manually or through configuration management tools. Maintaining an immutable infrastructure is the best practice to avoid configuration drift. Immutable infrastructure is now becoming a popular term across the DevOps community. It is the practice of provisioning a new server for every configuration change and de-provisioning the old ones. Provisioning tools like Terraform and CloudFormation support creating an immutable infrastructure to a great extent: for every software configuration change, they create new infrastructure, deploy the configuration, and then delete the old resources. This avoids confusion when managing a large infrastructure, and we do not need to worry about configuration changes and their impact over a period of time. In a production environment, DevOps practitioners often follow Blue-Green deployment to avoid unexpected issues that lead to downtime. Rollback is possible here, and an application can return to its previous state without any difficulty, because we did not make any changes to the existing environment. Terraform helps to create an immutable infrastructure.

[Image: Fig. 1: Mutable Infrastructure]

[Image: Fig. 2: Immutable Infrastructure]
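Terraform's resource lifecycle makes this replace-then-destroy flow explicit. A minimal sketch in HCL (the resource and variable names are hypothetical, and baking a new AMI per release is one common way to force replacement):

```hcl
# Hypothetical web server resource. With create_before_destroy, Terraform
# provisions the replacement instance first and de-provisions the old one
# afterwards -- the immutable-infrastructure flow shown in Fig. 2.
resource "aws_instance" "web" {
  ami           = "${var.release_ami_id}" # a new AMI per release forces replacement
  instance_type = "t2.micro"

  lifecycle {
    create_before_destroy = true
  }
}
```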


Infrastructure-as-Code

HashiCorp Terraform enables you to safely and predictably create, change, and improve infrastructure. It is an open-source tool that codifies APIs into declarative configuration files that can be shared among team members, treated as code, edited, reviewed, and versioned. Terraform stores the state of the infrastructure, which helps us prevent configuration drift. We can store the state locally or in remote key-value or object stores.

[Image: Fig. 3: Terraform provider and backend configuration]
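A provider and Consul backend configuration along the lines of Fig. 3 might look like the following sketch (the region, Consul address, and state path are hypothetical):

```hcl
# Hypothetical provider and remote-backend configuration.
provider "aws" {
  region = "ap-northeast-2"
}

terraform {
  backend "consul" {
    address = "consul.example.com:8500"    # hypothetical Consul endpoint
    path    = "terraform/state/my-project" # key under which state is stored
    lock    = true                         # state locking, enabled by default
  }
}
```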

The syntax of Terraform configuration is called HashiCorp Configuration Language (HCL); Terraform also supports JSON configuration. It supports multiple providers for orchestration, and the tool itself is written mostly in Go. Terraform follows a clear syntax for defining resources and supports the most common data structures, such as list, map, and string, for defining variables. The code is quite simple to organize, and we can read credentials from environment variables instead of defining them inside the Terraform configuration file.

[Image: Fig. 4: Terraform Variables]
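In Terraform 0.11-era syntax (the version used later in this article), variables of the string, list, and map types mentioned above are declared like this (the names and defaults are illustrative):

```hcl
# Illustrative variable declarations covering the three common types.
variable "instance_type" {
  type    = "string"
  default = "t2.micro"
}

variable "availability_zones" {
  type    = "list"
  default = ["ap-northeast-2a", "ap-northeast-2c"]
}

variable "common_tags" {
  type = "map"
  default = {
    Project = "terraform-ci"
    Owner   = "devops"
  }
}
```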

Many open-source IDEs support development environments for Terraform modules. We can extend the functionality of Terraform by writing custom plugins and provisioners to run scripts from Bash, Ruby, Chef, etc. Reusable Terraform modules for various providers are available in the Terraform Registry. Terraform Enterprise offers a web interface to manage Terraform and its state.
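As an illustration of the provisioner mechanism mentioned above, a resource can invoke a local shell script once it is created (the script path is hypothetical):

```hcl
# null_resource lets you attach provisioners without managing real
# infrastructure; local-exec runs a command on the machine running Terraform.
resource "null_resource" "bootstrap" {
  provisioner "local-exec" {
    command = "bash ./scripts/bootstrap.sh" # hypothetical script
  }
}
```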

[Image: Fig. 5: Terraform Project Structure]


Benefits of Using Terraform

[Image: Fig. 6: Blue-Green Deployment with Terraform]

  1. Defines infrastructure-as-code to increase operator productivity and transparency.

  2. Terraform configuration can be stored in version control, shared, and collaborated on by teams of operators.

  3. Tracks the complete history of infrastructure versions. Terraform state can be stored in a local disk as well as any one of the supported remote backends such as AWS S3, OpenStack Swift, Azure Blob, Consul, etc.

  4. Terraform provides an elegant user experience for operators to safely and predictably make changes to infrastructure.

  5. Terraform builds a dependency graph from the configurations, and walks this graph to generate plans, refresh state, and more.

  6. Separates plan from apply to reduce mistakes and uncertainty at scale. Plans show operators what would happen; apply executes the changes.

  7. Terraform can be used to create resources across all major infrastructure providers (AWS, GCP, Azure, OpenStack, VMware, and more) and third-party tools such as Github, Bitbucket, New Relic, Consul, and Docker.

  8. Terraform lets operators easily use the same configurations in multiple places to reduce mistakes and save time.

  9. We can use the same Terraform configuration to provision identical staging, QA, and production environments.

  10. Common Terraform configurations can be packaged as modules and used across teams and organizations.

[Image: Fig. 7: Calling Terraform modules from the workspace]
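Calling a module from the workspace, as in Fig. 7, boils down to a module block pointing at a registry or local source (both sources below are illustrative):

```hcl
# A community module from the Terraform Registry...
module "vpc" {
  source = "terraform-aws-modules/vpc/aws"
  name   = "ci-vpc"
  cidr   = "10.0.0.0/16"
}

# ...and a hypothetical local module from the same repo.
module "app_server" {
  source        = "./modules/ec2-instance"
  instance_type = "${var.instance_type}"
}
```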


CI/CD Pipeline Workflow for Applying Changes to the Infrastructure Using Terraform

[Image: Fig. 8: CI/CD Pipeline for Terraform Using Jenkins]

  1. The developer or operations engineer changes the Terraform configuration file on their local machine and commits the code to BitBucket.

  2. A BitBucket webhook triggers a continuous integration job in Jenkins.

  3. Jenkins will pull the latest code from the configured repo which contains Terraform files to its workspace.

  4. It reads the Terraform configuration then initializes the remote consul backend.

  5. Terraform generates a plan of the changes that have to be applied to the infrastructure.

  6. Jenkins sends a notification to a Slack channel about the changes for manual approval.

  7. Here, the user can approve or disapprove the Terraform plan.

  8. The user input is sent to Jenkins server for proceeding with the further action.

  9. Once the changes are approved by an operator, Jenkins will execute the terraform apply command to reflect the changes to the infrastructure.

  10. Terraform will create a report about the resources and their dependencies created while executing the plan.

  11. Terraform will provision the resources in the provider environment.

  12. Jenkins will again send a notification to the Slack channel about the status of the infrastructure after applying the changes. Once the job has executed, the Jenkins pipeline job is configured to clean up the workspace created by the job.


// Jenkinsfile
import groovy.json.JsonOutput

// git env vars
env.git_url = 'https://user@bitbucket.org/user/terraform-ci.git'
env.git_branch = 'master'
env.credentials_id = '1'

// slack env vars
env.slack_url = 'https://hooks.slack.com/services/SDKJSDKS/SDSDJSDK/SDKJSDKDS23434SDSDLCMLC'
env.notification_channel = 'my-slack-channel'

// jenkins env vars
env.jenkins_server_url = 'https://52.79.46.98'
env.jenkins_node_custom_workspace_path = "/opt/bitnami/apps/jenkins/jenkins_home/${JOB_NAME}/workspace"
env.jenkins_node_label = 'master'
env.terraform_version = '0.11.10'

def notifySlack(text, channel, attachments) {
    def payload = JsonOutput.toJson([text       : text,
                                     channel    : channel,
                                     username   : "Jenkins",
                                     attachments: attachments])
    sh "export PATH=/opt/bitnami/common/bin:$PATH && curl -X POST --data-urlencode \'payload=${payload}\' ${slack_url}"
}

pipeline {
    agent {
        node {
            customWorkspace "$jenkins_node_custom_workspace_path"
            label "$jenkins_node_label"
        }
    }

    stages {
        stage('fetch_latest_code') {
            steps {
                git branch: "$git_branch",
                    credentialsId: "$credentials_id",
                    url: "$git_url"
            }
        }

        stage('install_deps') {
            steps {
                sh "sudo apt install wget zip python-pip -y"
                sh "cd /tmp"
                sh "curl -o terraform.zip https://releases.hashicorp.com/terraform/'$terraform_version'/terraform_'$terraform_version'_linux_amd64.zip"
                sh "unzip terraform.zip"
                sh "sudo mv terraform /usr/bin"
                sh "rm -rf terraform.zip"
            }
        }

        stage('init_and_plan') {
            steps {
                sh "sudo terraform init $jenkins_node_custom_workspace_path/workspace"
                sh "sudo terraform plan $jenkins_node_custom_workspace_path/workspace"
                notifySlack("Build completed! Build logs from jenkins server $jenkins_server_url/jenkins/job/$JOB_NAME/$BUILD_NUMBER/console", notification_channel, [])
            }
        }

        stage('approve') {
            steps {
                notifySlack("Do you approve deployment? $jenkins_server_url/jenkins/job/$JOB_NAME", notification_channel, [])
                input 'Do you approve deployment?'
            }
        }

        stage('apply_changes') {
            steps {
                sh "echo 'yes' | sudo terraform apply $jenkins_node_custom_workspace_path/workspace"
                notifySlack("Deployment logs from jenkins server $jenkins_server_url/jenkins/job/$JOB_NAME/$BUILD_NUMBER/console", notification_channel, [])
            }
        }
    }

    post {
        always {
            cleanWs()
        }
    }
}


How to Set Up the Deployment Environment

  1. Create a repo in an SCM tool like GitLab or BitBucket and commit the Terraform configuration and its dependency modules to the repo. If you are using any third-party remote module as a dependency, it will be downloaded automatically during execution.

  2. If you do not have a Jenkins server, just pull a Jenkins Docker image and run it on your local machine. If you are setting it up in a cloud environment, check the Jenkins virtual machine images in the marketplace to set up the environment and configure the required plugins.

  3. Create a webhook in your BitBucket repo settings to invoke an HTTP call to your Jenkins callback URL for triggering the continuous integration job.

  4. If you have an existing Jenkins server, ensure the pipeline plugin is installed. Otherwise, go to "Manage Plugins" and install it.

  5. In this project, we are using Consul as a remote backend for state storage and state locking. Local state is not recommended when multiple people are involved in the project, or for production deployments. It is better to use a remote backend that provides highly available storage with state-lock functionality, so that multiple users cannot write the state at the same time.

  6. If you do not have a Consul key-value store in your environment, just pull the Consul Docker image and set up a single-node cluster. For a production deployment, set up a distributed key-value store.

  7. Create an application in Slack and note down the Slack integration details for configuring it in the Jenkinsfile.

  8. Configure your provider details and backend details in the main Terraform configuration file, either through environment variables or by persisting them in the repo. In my case, I am going to provision a resource in AWS, and my CI server is hosted in AWS, so I am assigning an IAM role with sufficient privileges to my server.
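With an IAM role attached to the CI server, the provider block needs no credentials at all; alternatively, the AWS provider picks up AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY from the environment. A minimal sketch (the region is illustrative):

```hcl
# No access keys in code: credentials come from the instance's IAM role
# or from AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY environment variables.
provider "aws" {
  region = "ap-northeast-2"
}
```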

  9. Create a new project in Jenkins by using pipeline plugin.

  10. Add the Jenkinsfile where the pipeline stages are defined. Save the job and trigger it manually for testing. Then apply changes to the configuration, commit them to BitBucket, and ensure the job is triggered automatically. Check the Jenkins log for more details about the job.

[Image: Fig. 9: Jenkins Log]

[Image: Fig. 10: Build History]

It is recommended to use reusable modules in Terraform, either by writing your own or by using modules from the Terraform Registry. We can also use a Docker build agent as a Jenkins slave and preserve the workspace by attaching a persistent volume to the Jenkins server from the Docker host. It is also recommended to encrypt the Consul key-value data with HashiCorp Vault, a reliable key management service that can be accessed via HTTP calls.

[Image: Fig. 11: CI/CD Using HashiCorp Terraform and AWS CodePipeline]

Right now, each cloud provider offers its own CI tools. AWS offers CodePipeline, where we can use CodeCommit for SCM, CodeBuild for the build environment in which Terraform configurations are applied, and SNS to send notifications for manual approval. Azure offers Azure DevOps tools for creating CI/CD pipelines, where users can commit code to Azure TFS or any SCM through VSTS to trigger the CI job. We can set up the pipeline job based on the cloud platform we are using; Jenkins can be used in the cloud as well as on on-prem infrastructure.

Enjoy Terraforming!

Continuous Integration/Deployment Terraform (software) Infrastructure Jenkins (software)

Opinions expressed by DZone contributors are their own.
