
CI/CD Processes And Tools For AWS Elastic Beanstalk


Take a look at how some of these CI/CD tools, including CircleCI, AWS CodePipeline, and Jenkins, can work with AWS Elastic Beanstalk.


CI/CD processes allow developers to worry less about infrastructure and deployment, and more about developing robust and reliable apps. Developers can be more involved in monitoring how a production environment behaves when new code is introduced. Quality assurance becomes a part of the development process, too.

The introduction of a CI/CD pipeline to a development workflow also means updates are no longer stressful. There is no need to bring the entire app down just to update a small portion of it. Services like AWS Elastic Beanstalk make integrating pipelines with services like EC2 and S3 even easier, since Elastic Beanstalk handles most of the heavy lifting for you.

Designing Good CI/CD Processes

Before we get to automated deployment to Elastic Beanstalk, however, it is important to refine the CI/CD pipeline for the cloud environment. Elastic Beanstalk is designed from the ground up to be robust and scalable, so you have to adapt your pipeline to fully leverage the advantages offered by the cloud platform.

For starters, you can standardize the development and deployment environments. A pipeline run must not be allowed to modify the environment, because subsequent runs would then face difficulties adapting to the changes. Instead, each run must leave your environment clean.

Quality assurance and security checks need to be part of the process. For the pipeline to have shorter cycles while remaining reliable, these tasks need to be embedded into the workflow. You can keep your cloud environment safe by integrating security into development.

Last but certainly not least, make sure reviews and checks are performed before deployment. Scripts need to be checked against databases of known attacks and malicious scripts to make sure that bad lines of code never reach the production environment.

That last part is important. You can configure Elastic Beanstalk to expect certain flags or configuration entries before deploying a new update. As for integrating the pipeline itself, the process is fairly easy with the tools that are now available.
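As one illustration of such configuration entries, Elastic Beanstalk reads YAML files placed in an .ebextensions directory at the root of the application bundle. The option namespace below is a real Elastic Beanstalk namespace; the file name, keys, and values are illustrative assumptions only:

```yaml
# .ebextensions/options.config  (file name is illustrative)
option_settings:
  # Real namespace for environment properties; the keys below are examples
  aws:elasticbeanstalk:application:environment:
    NODE_ENV: production
    DEPLOY_CHECKS_PASSED: "true"
```

Files like this travel with the source bundle, so the same pipeline that runs your checks can also assert the configuration the environment expects.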

CircleCI As A Bridge

CircleCI is a very popular tool for managing CI/CD pipelines. The tool is designed to work seamlessly with various Amazon services, including AWS Elastic Beanstalk. Integrating Elastic Beanstalk is easy when done through CircleCI.

You start by creating the environment needed for your CI/CD pipeline. You can separate staging and production, have an extra cloud instance for development, and configure the pipeline in any way you like. Make sure you create config files that match your development and deployment cycles.

You then need to add a new role in the AWS Management Console or IAM console, created specifically for CircleCI to access your cloud environment. Add your AWS credentials to CircleCI, and you are all set.
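The exact permissions that identity needs depend on your environment. As a rough, hedged sketch, a policy for an Elastic Beanstalk deployment identity often grants actions like the following, since eb deploy touches S3, CloudFormation, and EC2 under the hood (this action list is illustrative and will likely need tailoring):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "elasticbeanstalk:*",
        "cloudformation:*",
        "s3:*",
        "ec2:Describe*",
        "autoscaling:*"
      ],
      "Resource": "*"
    }
  ]
}
```

In a real production account you would scope the Resource entries down rather than using wildcards.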

Deploying to Elastic Beanstalk, assuming you already have a config file ready to go, is as simple as running the eb deploy command. Everything else happens automatically. You can use the --profile option to specify a particular AWS profile.

Leveraging AWS CodePipeline For CI/CD

AWS CodePipeline is an Amazon Web Services tool that helps you build, test, and deploy code in continuous iterations every time there is a code change, based on the release process models that you define. Using CodePipeline allows you to orchestrate each step in your deployment release process. As part of your setup, you can also plug many other AWS services into CodePipeline to complete your software delivery pipeline, including AWS Elastic Beanstalk.

This involves first creating the pipeline using AWS CodePipeline, then leveraging a GitHub account, an Amazon Simple Storage Service (S3) bucket, or an AWS CodeCommit repository as the source location for the sample app’s code. AWS Elastic Beanstalk acts as the deployment target for the sample app. Your fully completed pipeline, using these tools in sync, is able to detect any changes made to the source repository containing the app code and then automatically update your live app to match.

Rapid Deployment With Jenkins

Another popular tool among developers is Jenkins. Jenkins is known for supporting even the most complex architectures, and it works well with AWS services through plugins and add-ons.

In the case of Elastic Beanstalk, you don’t have to worry about compatibility and reliability issues. The AWS Elastic Beanstalk Deployment plugin, which works with Jenkins 2.6x and up, simplifies the packaging and deployment of new applications to the Elastic Beanstalk environment.

The plugin lets you start the process by clicking "Deploy into AWS Elastic Beanstalk"; yes, it is that simple. You can still configure how you want the deployment job to be performed, but everything else is completely automated.

Both new jobs and updates are handled with simple commands. You can build an archive and customize your build, then let the plugin handle the rest. The same is true of environment updates, since you only need to run CreateConfigurationTemplate to capture the existing configuration.

Now let’s see these processes in action.

Elastic Beanstalk deployment process

Start By Setting Up A CircleCI Pipeline

To begin with a CircleCI pipeline, head over to the website here and choose the "Log in with GitHub" option (you will need to authorize CircleCI through GitHub).

CircleCI login
Untested Projects

Now, click "Settings and Projects" to locate your GitHub repositories.

Choose "Environment Variables" and insert your AWS IAM access keys (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY). These will be necessary later on:

Environment Variables

Now, choose "Add Projects" in the sidebar, locate your GitHub repository, and click "Set Up Project":

Git

A screen showing you how to set your project up will then pop up. The most critical item on the page is "Create a folder named .circleci and add a file config.yml".

OS

If you look in the GitHub repository here, you can see that I have already created this file:

YAML

version: 2
jobs:
  test:
    working_directory: ~/app
    docker:
      - image: circleci/node:latest
    steps:
      - checkout
      - run:
          name: Update npm
          command: 'sudo npm install -g npm@latest'
      - restore_cache:
          key: dependency-cache-{{ checksum "package.json" }}
      - run:
          name: Install npm dependencies
          command: npm install
      - save_cache:
          key: dependency-cache-{{ checksum "package.json" }}
          paths:
            - ./node_modules
      - run:
          name: Run tests
          command: 'npm run test'
  deploy-aws:
    working_directory: ~/app
    docker:
      - image: circleci/python:latest
    steps:
      - checkout
      - run:
          name: Installing deployment dependencies
          working_directory: /
          command: 'sudo pip install awsebcli --upgrade'
      - run:
          name: Deploying application to Elastic Beanstalk
          command: eb deploy
workflows:
  version: 2
  build-test-and-deploy:
    jobs:
      - test
      - deploy-aws:
          requires:
            - test

The process of this build’s execution is as follows:

  1. A Node Docker container runs the initial task, test.
  2. If they exist in the cache, the node_modules are restored; otherwise, they get installed.
  3. Tests are run.
  4. At this point, Docker layer caching is leveraged to speed up the performance of the image building.
  5. The Elastic Beanstalk CLI tool is installed.
  6. The app is deployed to Elastic Beanstalk with eb deploy. This command works because we authenticated CircleCI earlier with the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables.
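The restore_cache and save_cache steps key the dependency cache on a checksum of package.json, so the cache is reused until that file changes. Conceptually it works like this small sketch (CircleCI's actual checksum implementation may differ; the file content here is illustrative):

```python
import hashlib

def cache_key(path: str, prefix: str = "dependency-cache") -> str:
    """Mimic the idea behind CircleCI's {{ checksum "file" }} template:
    derive the cache key from a hash of the file's contents, so the key
    (and therefore the cached node_modules) stays stable until the
    dependency manifest changes."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return f"{prefix}-{digest}"

# Illustrative manifest: an unchanged file always yields the same key.
with open("package.json", "w") as f:
    f.write('{"dependencies": {"express": "^4.18.0"}}')

key1 = cache_key("package.json")
key2 = cache_key("package.json")
print(key1 == key2)  # True: unchanged file, cache hit
```

Editing package.json (say, bumping a dependency) produces a new key, which forces a fresh npm install and a new save_cache.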

So, how does eb deploy work out where to deploy? To choose a target, you will need to configure the .elasticbeanstalk/config.yml file in your GitHub repository to match the Elastic Beanstalk application you have created:

YAML

branch-defaults:
  master:
    environment: ebdev-env
environment-defaults:
  e:
    branch: null
    repository: null
global:
  application_name: CircleCI
  default_ec2_keyname: null
  default_platform: arn:aws:elasticbeanstalk:us-west-2::platform/Docker running on 64bit Amazon Linux/2.14.0
  default_region: us-west-2
  include_git_submodules: true
  instance_profile: null
  platform_name: null
  platform_version: null
  profile: null
  sc: git
  workspace_type: Application

ebdev

With everything now set up, click "Start Building" in the CircleCI dashboard, and your project should test, build, and deploy successfully.

3 jobs

Using AWS CodePipeline

Select the "CodePipeline" service from the AWS console; it is listed under Developer Tools.

AWS Code Pipeline

Click on the Create Pipeline button.

Choose pipeline settings

You’ll have to give a name to the pipeline and attach a service role.

In the advanced settings, you can choose where to store the artifacts associated with the project and also how you want the data to be encrypted at rest, and with which keys.

Add source stage

The next step is where you choose the source provider from the options CodePipeline supports. This will be the location from which the pipeline imports the project's code.

If you choose GitHub, you will have to connect to it and authorize access, then identify the repository and branch from which you want to pull code.

You can also choose how changes in the code are detected: either through GitHub webhooks or by having AWS CodePipeline poll for changes.
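With webhooks, GitHub signs each delivery using the webhook secret, and a receiver should verify the X-Hub-Signature-256 header before trusting the payload. A minimal verification sketch (the secret and payload below are illustrative):

```python
import hashlib
import hmac

def verify_github_signature(secret: bytes, body: bytes, signature_header: str) -> bool:
    """Verify a GitHub webhook delivery: GitHub sends 'sha256=' followed by
    the hex HMAC-SHA256 of the raw payload, keyed with the webhook secret.
    Use a constant-time comparison to avoid timing attacks."""
    expected = "sha256=" + hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)

secret = b"my-webhook-secret"                     # illustrative secret
body = b'{"ref": "refs/heads/master"}'            # illustrative payload
good = "sha256=" + hmac.new(secret, body, hashlib.sha256).hexdigest()

print(verify_github_signature(secret, body, good))          # True
print(verify_github_signature(secret, body, "sha256=bad"))  # False
```

Managed services like CodePipeline do this check for you; the sketch just shows what "detect changes via webhook" relies on.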

Add build stage

The next step is to build the code. This step is optional, depending on the language the code is written in.

For example, if the code is in PHP, this step can be skipped, whereas if the code is in Java or Node.js, we will have to use it.

Add deploy stage

The last step is to deploy; here we have to choose a deployment method.

If AWS Elastic Beanstalk is chosen, the region, application name, and environment name must be specified.

You can then review all the details before creating the pipeline.
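For reference, the deploy stage you configure in the console corresponds to an action definition in the pipeline's JSON structure. A hedged sketch of such a stage, reusing the illustrative application and environment names from the CircleCI section (artifact name is also illustrative):

```json
{
  "name": "Deploy",
  "actions": [
    {
      "name": "DeployToElasticBeanstalk",
      "actionTypeId": {
        "category": "Deploy",
        "owner": "AWS",
        "provider": "ElasticBeanstalk",
        "version": "1"
      },
      "configuration": {
        "ApplicationName": "CircleCI",
        "EnvironmentName": "ebdev-env"
      },
      "inputArtifacts": [
        { "name": "BuildArtifact" }
      ]
    }
  ]
}
```

Seeing the JSON form is handy if you later want to manage the pipeline with the AWS CLI or infrastructure-as-code tools instead of the console.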

Jenkins

Establish a connection between Jenkins and your SCM (e.g. Git/Bitbucket). The connection can use password authentication or SSH keys. Here we use SSH keys: generate an SSH key pair on the Jenkins server, which provides both a public and a private key, then add the public key to the SCM and use the private key on the Jenkins server, as shown below.

Log in to the Jenkins server and run the ssh-keygen command, pressing Enter through the prompts, to generate the private and public SSH keys, id_rsa and id_rsa.pub respectively, in the .ssh folder of the home directory.

SSH keys
id_rsa public key

Copy the content of the id_rsa.pub key and add it under the GitHub repository's Settings > Deploy keys > Add deploy key, with an arbitrary title for the key.

Deploy keys

Click "Add deploy key," enter a title, paste the content of the id_rsa.pub key into the key section, check the "Allow write access" option, and click "Add."

Jenkins Public key

Now copy the content of the id_rsa (private key) file and add it to the Jenkins server's credentials on the dashboard at Credentials > System > Global credentials > Add Credentials.

Jenkins dashboard > Credentials

After clicking "Add Credentials," select the "SSH Username with private key" option, enter a username, paste the private key content copied from the id_rsa file, and save.

Jenkins credentials

After that, give a name as the ID, a Description, and a Username; check the "Enter directly" option, then paste the private key.

Jenkins private key
Jenkins SSH connectivity from Git to Jenkins

Optimizing The Jenkins Pipeline

The Jenkins Pipeline provides a comprehensive suite of plugins that supports the full implementation and integration of a continuous delivery pipeline into Jenkins. Here, we will write the Jenkins pipeline script in Groovy so that the job executes all the stages, including fetching the code from SCM, build, test, and deploy.

Create a Jenkins pipeline job and configure it to poll the SCM periodically for changes (i.e., new commits). The job builds the project if new commits were pushed since the last build, and triggers any downstream jobs, as shown below.

Click "New Item" on the Jenkins dashboard and select a pipeline job.

Jenkins pipeline

Select "GitHub Project" and enter your GitHub or Bitbucket repo.

Github project

Select "Poll SCM" under the build triggers and enter a cron expression (e.g. H/5 * * * *, which polls GitHub every five minutes for recent commits).

Build triggers
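The H token in Jenkins cron syntax is replaced by a hash of the job name, so that many jobs polling "every five minutes" start at different offsets instead of all firing at the same minute. A rough sketch of the idea (Jenkins' actual hashing differs; the job name is illustrative):

```python
import zlib

def hashed_poll_minutes(job_name: str, interval: int = 5) -> list:
    """Approximate Jenkins' 'H/5 * * * *' trigger: derive a stable per-job
    offset from a hash of the job name, then poll every `interval` minutes
    starting at that offset, spreading load across jobs."""
    offset = zlib.crc32(job_name.encode()) % interval
    return list(range(offset, 60, interval))

# Two different jobs get different (but stable) polling minutes.
print(hashed_poll_minutes("my-pipeline"))
print(hashed_poll_minutes("another-pipeline"))
```

This is why H/5 is generally preferred over a literal */5 on busy Jenkins servers.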

The Pipeline job gives us two options: keep the pipeline (Groovy) script in the job itself, or fetch the pipeline script from SCM.

The Pipeline script option keeps the script in the Jenkins pipeline job itself. Pipelines are Jenkins jobs enabled by the Pipeline plugin and built with simple text scripts that use a Pipeline DSL (domain-specific language) based on the Groovy programming language.

Pipeline DSL (domain-specific language)

Select a pipeline and input a script.

Pipeline script

Then click on save.

With the second option, Pipeline fetches the DSL script from the SCM. This script is typically called "Jenkinsfile" and is located in the root of the project.
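A minimal declarative Jenkinsfile for the flow described here might look like the sketch below; the stage commands, the ebdev-env environment name, and the branch are illustrative assumptions, and the deploy stage assumes the EB CLI is installed on the agent:

```groovy
// Hedged sketch of a Jenkinsfile; names and commands are illustrative.
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps { checkout scm }   // pull the code this job was triggered for
        }
        stage('Build') {
            steps { sh 'npm install' }
        }
        stage('Test') {
            steps { sh 'npm run test' }
        }
        stage('Deploy') {
            when { branch 'master' }             // only deploy the main branch
            steps { sh 'eb deploy ebdev-env' }   // assumes awsebcli on the agent
        }
    }
}
```

Keeping this file in the repository means the pipeline definition is versioned alongside the code it builds.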

Pipeline Git

After selecting Git, we may have to update:

  • The GitHub URL with credentials to authenticate to GitHub (the global credentials we configured above)
  • The branch on which we keep the Jenkinsfile
  • The script path where the Jenkinsfile lives on that branch
Jenkins file script

Then click on "Save."

As per the Jenkinsfile, we may see different stages of the job execution as shown below.

Job execution

The Right Tool For Elastic Beanstalk

So, which tool is best to use? All are capable in their own ways. Choosing between them means taking a close look at the kind of pipeline you want to set up and how you can best benefit from automation. They all integrate well with other Amazon services too, so you will have no trouble utilizing other services while automating your CI/CD pipeline.


Published at DZone with permission of Rajasekhar Mandava. See the original article here.

