Automating Infrastructure Provisioning, Configuration, and Application Deployment

This article shows how to automate the entire stack: infrastructure provisioning, configuration, application deployment, and starting and stopping the stack itself.

By Han Chiang · Sep. 28, 22 · Tutorial

As a software engineer, I read and write application code every day. Occasionally, I work a little on the CI/CD pipeline: fixing build issues, deploying a new application, and so on. However, I have little understanding of how the entire process is orchestrated and automated.

So, I set out to learn how to automate the entire stack: infrastructure provisioning, configuration, application deployment, and starting and stopping the stack itself.

After spending some time crawling through Reddit and Stack Overflow, I gained a basic understanding of the whole process and the tools that are commonly used.

Overview 

The target application is a URL shortener that turns a long URL into a short one, like TinyURL.

Backend repository: GitHub link

Infra repository: GitHub link

Technologies used:

  • Provisioning: Packer, Terraform
  • Configuration: Ansible
  • Deployment: Docker, GitHub Actions
  • Cloud computing: AWS EC2

Provisioning, Configuration, and Deployment

Figure: Deployment pipeline

What Are Provisioning, Configuration, and Deployment?

It can be confusing to understand what provisioning, configuration, and deployment mean exactly.

My understanding is summarised in a timeline: provisioning -> configuration -> deployment

Provisioning

Provisioning is the process of setting up servers to be ready for use.

In the context of cloud computing, this server is likely a virtual machine on which a machine image is installed.

Here, the operating system, along with other required software, is installed.

In AWS terms, this means creating a VPC, subnet, route table, and EC2.
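For intuition, the same resources can also be created by hand with the AWS CLI before automating any of this. A minimal sketch (the vpc-, subnet-, and ami- IDs below are placeholders for values returned by the earlier calls):

Shell

# Create the network container and a public subnet inside it
aws ec2 create-vpc --cidr-block 10.0.0.0/16
aws ec2 create-subnet --vpc-id vpc-0abc1234 --cidr-block 10.0.1.0/24

# Create a route table for the VPC (routes to an internet gateway are added to it)
aws ec2 create-route-table --vpc-id vpc-0abc1234

# Finally, launch the virtual machine itself
aws ec2 run-instances --image-id ami-0abc1234 --instance-type t2.micro --subnet-id subnet-0abc1234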

Figure: IaaS, PaaS, SaaS. Source: RedHat

The diagram above is useful for remembering which layers we have to take care of under different cloud computing models.

Configuration

Configuration can be considered an extension of provisioning the server. Once the server is created and loaded with the required software, it needs to be configured so that it is usable.

e.g., configure the PostgreSQL data directory, logging, and authentication settings.

Deployment

With the infrastructure ready to serve live traffic, an application can finally be deployed from an artifact repository to the server.

This could be as simple as SSHing into the server, pulling a Docker image, and running a container, or as involved as deploying to a Kubernetes cluster with Helm.

Visualizing the Whole Pipeline

Figure: Provisioning and configuration management. Source: golibrary


1. Deployment 

I chose deployment as the first step because this is the easiest part of the pipeline. After building features and committing code, the next step is to deploy the new application.

Application Setup

The URL shortener application runs on Docker, with Docker Compose used for the convenience of local setup. It consists of four containers: a Node.js server, PostgreSQL, Redis, and an Nginx reverse proxy.


YAML

version: '3.4'

services:
  backend:
    build:
      context: .
      target: dev
      dockerfile: Dockerfile
    image: url-shortener:dev
    command: npm run debug
    environment:
      - NODE_ENV=development
      - POSTGRES_HOST=postgres
      - POSTGRES_USER=root
      - POSTGRES_PASSWORD=root
      - POSTGRES_DB=url_shortener
      - POSTGRES_PORT=5432
      - REDIS_URL=redis://redis:6379/0
      - URL_REDIRECT_DOMAIN=http://localhost:3000
    ports:
      - "3000:3000"
    depends_on:
      - postgres
      - redis

  postgres:
    image: postgres:13.1
    environment:
      - POSTGRES_USER=root
      - POSTGRES_PASSWORD=root
      - POSTGRES_DB=url_shortener
    ports:
      - "5432:5432"
    volumes:
      - postgres-volume-dev:/var/lib/postgresql/data

  redis:
    image: redis:6.0
    ports:
      - "6379:6379"
    volumes:
      - redis-volume-dev:/data

  nginx:
    build:
      context: ./nginx
      dockerfile: Dockerfile.nginx
    image: url-shortener-nginx:dev
    ports:
      - "80:80"
    depends_on:
      - backend

volumes:
  postgres-volume-dev:
  redis-volume-dev:

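With this file in place, the whole stack comes up locally with a single command. A minimal sketch (on older installs, substitute the standalone docker-compose binary):

Shell

# Build images and start all four services in the background
docker compose up -d --build

# Tail the backend logs to confirm the server is listening on port 3000
docker compose logs -f backend

# Tear everything down; add -v to also remove the named volumes
docker compose down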

Deploying the Application

Application deployment is done with GitHub Actions.

YAML

name: Build and deploy
on:
  workflow_run:
    workflows: [Test and build]
    types:
      - completed
    branches:
      - master
env:
  SSH_USER: ${{ secrets.SSH_USER }}
  SSH_HOST: ${{ secrets.SSH_HOST }}
  APP_NAME: url_shortener
  IMAGE_REGISTRY: ghcr.io/${{ github.repository_owner }}
  REGISTRY_USER: ${{ github.actor }}
jobs:
  build_and_upload:
    runs-on: ubuntu-latest
    if: ${{ github.event.workflow_run.conclusion == 'success' }}
    env:
      DOCKER_BUILD_PUSH_PASSWORD: ${{ secrets.DOCKER_BUILD_DEPLOY_TOKEN }}
    steps:
      - uses: actions/checkout@v3
      - name: Build docker image
        id: build_image
        run: docker build -t "$IMAGE_REGISTRY/$APP_NAME:$GITHUB_SHA" --target release .
      - name: Log in to github container registry
        id: login_registry
        run: echo $DOCKER_BUILD_PUSH_PASSWORD | docker login ghcr.io -u $REGISTRY_USER --password-stdin
      - name: Push to registry
        id: push_image
        run: docker push "$IMAGE_REGISTRY/$APP_NAME:$GITHUB_SHA"
      - name: Echo outputs
        run: |
          echo "${{ toJSON(steps.push_image.outputs) }}"
  health_check:
    runs-on: ubuntu-latest
    needs: [build_and_upload]
    steps:
      - name: Check whether URL shortener is up
        id: health_check
        run: |
          curl $SSH_HOST/healthz
  notify_unsuccessful_health_check:
    runs-on: ubuntu-latest
    needs: [health_check]
    if: ${{ failure() }}
    steps:
      - name: Send slack notification
        env:
          SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
        run: |
          now=$(date +%Y-%m-%dT%H:%M:%S)
          payload=$(echo "{\"text\":\"URL shortener backend: Health check for $SSH_HOST/healthz failed at <DATE>. Workflow: $GITHUB_SERVER_URL/$GITHUB_REPOSITORY/actions/runs/$GITHUB_RUN_ID\"}" | sed "s~<DATE>~$now~")
          curl -X POST -H 'Content-type: application/json' --data "$payload" $SLACK_WEBHOOK
          exit 1
  deploy:
    runs-on: ubuntu-latest
    needs: [health_check]
    steps:
      - uses: actions/checkout@v3
      - name: Configure SSH
        id: ssh
        env:
          SSH_PRIVATE_KEY: ${{ secrets.SSH_PRIVATE_KEY }}
        run: |
          mkdir -p ~/.ssh/
          echo "$SSH_PRIVATE_KEY" > ~/.ssh/url_shortener_rsa
          chmod 600 ~/.ssh/url_shortener_rsa
          SSH_HOST_IP=$(nslookup $SSH_HOST | tail -n 2 | head -n 1 | cut -d ' ' -f 2)
          echo "host name: $SSH_HOST, host ip address: $SSH_HOST_IP"
          cat << EOF >> ~/.ssh/config
          Host production
            HostName $SSH_HOST
            User $SSH_USER
            IdentityFile ~/.ssh/url_shortener_rsa
            StrictHostKeyChecking no
          EOF
      - name: SSH into server, pull image, run container
        id: deploy
        env:
          NODE_ENV: production
          POSTGRES_HOST: ${{ secrets.POSTGRES_HOST }}
          POSTGRES_USER: ${{ secrets.POSTGRES_USER }}
          POSTGRES_PASSWORD: ${{ secrets.POSTGRES_PASSWORD }}
          POSTGRES_DB: ${{ secrets.POSTGRES_DB }}
          REDIS_URL: ${{ secrets.REDIS_URL }}
          ALLOWED_ORIGINS: ${{ secrets.ALLOWED_ORIGINS }}
          URL_REDIRECT_DOMAIN: ${{ secrets.URL_REDIRECT_DOMAIN }}
          DOCKER_PULL_PASSWORD: ${{ secrets.DOCKER_PULL_TOKEN }}
        run: |
          ssh production << EOF
          echo "Logging into container registry"
          echo $DOCKER_PULL_PASSWORD | docker login ghcr.io -u $REGISTRY_USER --password-stdin
          echo "Pulling image"
          docker pull "$IMAGE_REGISTRY/$APP_NAME:$GITHUB_SHA"
          echo "Stopping existing container"
          docker stop $APP_NAME && docker rm $APP_NAME
          echo "Starting new container"
          docker run --name $APP_NAME -p 3000:3000 --network host -e NODE_ENV=$NODE_ENV -e POSTGRES_HOST=$POSTGRES_HOST \
          -e POSTGRES_USER=$POSTGRES_USER -e POSTGRES_PASSWORD=$POSTGRES_PASSWORD -e POSTGRES_DB=$POSTGRES_DB \
          -e REDIS_URL=$REDIS_URL -e URL_REDIRECT_DOMAIN=$URL_REDIRECT_DOMAIN -e ALLOWED_ORIGINS=$ALLOWED_ORIGINS \
          -d "$IMAGE_REGISTRY/$APP_NAME:$GITHUB_SHA"
          sleep 5
          docker ps
          EOF
  notify_unsuccessful:
    runs-on: ubuntu-latest
    needs: [deploy]
    if: ${{ failure() }}
    steps:
      - name: Send slack notification unsuccessful run
        env:
          SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
        run: |
          now=$(date +%Y-%m-%dT%H:%M:%S)
          payload=$(echo "{\"text\":\"URL shortener backend: Deployment failed at <DATE>. Workflow: $GITHUB_SERVER_URL/$GITHUB_REPOSITORY/actions/runs/$GITHUB_RUN_ID\"}" | sed "s~<DATE>~$now~")
          curl -X POST -H 'Content-type: application/json' --data "$payload" $SLACK_WEBHOOK
  notify_successful:
    runs-on: ubuntu-latest
    needs: [deploy]
    steps:
      - name: Send slack notification successful run
        env:
          SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
        run: |
          now=$(date +%Y-%m-%dT%H:%M:%S)
          payload=$(echo "{\"text\":\"URL shortener backend: Deployment succeeded at <DATE>. Workflow: $GITHUB_SERVER_URL/$GITHUB_REPOSITORY/actions/runs/$GITHUB_RUN_ID\"}" | sed "s~<DATE>~$now~")
          curl -X POST -H 'Content-type: application/json' --data "$payload" $SLACK_WEBHOOK


The Docker image is built and pushed to the GitHub Container Registry.

The workflow then connects to the EC2 instance over SSH, pulls the new image, and runs it.
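For debugging, the same steps can be replayed by hand. A minimal sketch, assuming an equivalent production host alias is configured in your local ~/.ssh/config (as the workflow does on the runner) and that your-domain.example stands in for the real host:

Shell

# Confirm the new container is running and which image it was started from
ssh production 'docker ps --filter name=url_shortener --format "{{.Image}} {{.Status}}"'

# Follow the application logs for startup errors
ssh production 'docker logs --tail 50 url_shortener'

# Hit the same health endpoint the workflow checks
curl https://your-domain.example/healthz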

2. Provisioning

With the deployment part of the process handled, let's move on to provisioning the server.

There are two parts to provisioning:

  • Building a machine image with all the necessary software
  • Building the infrastructure and servers

Building Image With Packer

Packer is used to create a golden AMI, loaded with software and configuration, from which the EC2 instance is launched.

I used Packer to create a user and install PostgreSQL, Redis, Nginx, and Docker.

Using Packer here embraces immutable infrastructure: a standardised fleet of servers is built from the same image, achieving consistency and maintainability.


HCL

# source blocks are generated from your builders; a source can be referenced in
# build blocks. A build block runs provisioners and post-processors on a
# source.
source "amazon-ebs" "url_shortener" {
  ami_name              = "url_shortener"
  instance_type         = "t2.micro"
  region                = var.region
  force_deregister      = true
  force_delete_snapshot = true
  ssh_username          = "ubuntu"

  source_ami_filter {
    filters = {
      name                = "ubuntu/images/*ubuntu-focal-20.04-amd64-server-*"
      root-device-type    = "ebs"
      virtualization-type = "hvm"
    }
    most_recent = true
    owners      = ["099720109477"]
  }

  tags = {
    Name = "URL_shortener"
  }
}

build {
  sources = ["source.amazon-ebs.url_shortener"]

  provisioner "file" {
    source      = var.ssh_public_key_src_path
    destination = var.ssh_public_key_dest_path
  }

  provisioner "file" {
    source      = var.postgres_password_src_path
    destination = var.postgres_password_dest_path
  }

  provisioner "shell" {
    scripts = ["./scripts/setup-user.sh"]
    env = {
      SSH_PUBLIC_KEY_PATH: var.ssh_public_key_dest_path
      USER: ""
    }
  }

  provisioner "shell" {
    scripts = ["./scripts/install-postgres.sh"]
    env = {
      POSTGRES_PASSWORD_PATH: var.postgres_password_dest_path
      FS_MOUNT_PATH: var.fs_mount_path
      USER: ""
    }
  }

  provisioner "shell" {
    scripts = ["./scripts/install-redis.sh"]
  }

  provisioner "shell" {
    scripts = ["./scripts/install-nginx.sh"]
    env = {
      FS_MOUNT_PATH: var.fs_mount_path
      USER: ""
    }
  }

  provisioner "shell" {
    scripts = ["./scripts/install-docker.sh"]
    env = {
      USER: ""
    }
  }

  provisioner "shell" {
    inline = [
      "sudo file -s ${var.ebs_device_name}",
      "sudo lsblk -f",
      "df -h"
    ]
  }
}

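The image build itself is then a short CLI session. A minimal sketch, assuming the template lives in url_shortener.pkr.hcl with its variable values in a .pkrvars.hcl file (the file names here are illustrative):

Shell

# Install any plugins declared in the template's required_plugins block, if present
packer init .

# Check the template and variable values before spending money on a build
packer validate -var-file=url_shortener.pkrvars.hcl url_shortener.pkr.hcl

# Launch a temporary EC2 instance, run the provisioners, and register the AMI
packer build -var-file=url_shortener.pkrvars.hcl url_shortener.pkr.hcl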

Building Infrastructure With Terraform

After that, the infrastructure is orchestrated with Terraform.

It takes care of creating a VPC, subnet, EC2, route table, and security groups.

Terraform is a tool that builds the entire infrastructure in the correct dependency order and tracks the desired state in a declarative style.

It can also detect drift by refreshing its state against what actually exists in the cloud.

Again, the benefits of consistency and maintainability are present.


HCL

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.20.1"
    }
  }
  required_version = ">= 0.14.5"
}

provider "aws" {
  region = var.region
}

resource "aws_vpc" "vpc" {
  cidr_block           = var.cidr_vpc
  enable_dns_support   = true
  enable_dns_hostnames = true
}

resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.vpc.id
}

resource "aws_subnet" "subnet_public" {
  vpc_id            = aws_vpc.vpc.id
  cidr_block        = var.cidr_subnet
  availability_zone = var.ec2_az
}

resource "aws_route_table" "rtb_public" {
  vpc_id = aws_vpc.vpc.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw.id
  }
}

resource "aws_route_table_association" "rta_subnet_public" {
  subnet_id      = aws_subnet.subnet_public.id
  route_table_id = aws_route_table.rtb_public.id
}

resource "aws_security_group" "sg_22_80_443" {
  name   = "sg_22_80_443"
  vpc_id = aws_vpc.vpc.id

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_ebs_snapshot" "url_shortener_ebs_snapshot" {
  # https://github.com/hashicorp/terraform/issues/24527
  # ebs_block_device is a set, not a list
  volume_id = var.url_shortener_ebs

  tags = {
    Name = "URL_shortener"
  }
}

resource "aws_volume_attachment" "data_attachment" {
  device_name = "/dev/xvdf"
  volume_id   = var.url_shortener_ebs
  instance_id = aws_instance.web.id
}

resource "aws_instance" "web" {
  ami                         = data.aws_ami.ec2_ami.id
  instance_type               = var.ec2_instance_type
  subnet_id                   = aws_subnet.subnet_public.id
  vpc_security_group_ids      = [aws_security_group.sg_22_80_443.id]
  availability_zone           = var.ec2_az
  associate_public_ip_address = true

  root_block_device {
    delete_on_termination = true
    volume_size           = 8
    volume_type           = "gp2"

    tags = {
      Name = "URL_shortener"
    }
  }

  # Wait for EC2 to be ready
  provisioner "remote-exec" {
    inline = ["echo 'EC2 is ready'"]

    connection {
      type        = "ssh"
      user        = var.ssh_user
      host        = self.public_ip
      private_key = file(var.ssh_private_key_path)
    }
  }

  tags = {
    Name = "URL_shortener"
  }
}

output "public_ip" {
  value = aws_instance.web.public_ip
}

output "public_dns" {
  value = aws_instance.web.public_dns
}

output "ebs_root_device_id" {
  value = aws_instance.web.root_block_device.0.volume_id
}

output "ebs_root_device_name" {
  value = aws_instance.web.root_block_device.0.device_name
}

output "aws_ebs_snapshot" {
  value = aws_ebs_snapshot.url_shortener_ebs_snapshot.id
}

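In day-to-day use, this translates into the standard Terraform plan/apply loop, which is also where drift detection shows up in practice. A minimal sketch:

Shell

# Download the AWS provider pinned in required_providers
terraform init

# Refresh state from AWS and show what would change (this is where drift surfaces)
terraform plan -out=tfplan

# Create or update the VPC, subnet, route table, security group, and EC2 instance
terraform apply tfplan

# Read back the values exported by the output blocks, e.g. the public DNS name
terraform output public_dns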

3. Configuration

Lastly, configuration is the missing middle piece of the pipeline.

After the system image is built with Packer and the AWS infrastructure is created with Terraform, a few configuration tasks remain before the server is ready to serve live traffic.

This is done with Ansible, a simple yet powerful tool that lets us configure servers with YAML playbooks.

I am using Ansible to run shell scripts imperatively rather than managing configuration declaratively with its built-in modules, simply because that was the easiest way to learn. Each playbook below is invoked with the ansible-playbook CLI, as sketched next.
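A minimal sketch of how a playbook is invoked, assuming an inventory file hosts.ini that defines the URL_shortener group (the inventory and playbook file names, and the USER value, are illustrative):

Shell

# Dry-run first: --check reports what would change (shell tasks are skipped in check mode)
ansible-playbook -i hosts.ini file-system.yml --check

# Run the playbook for real, passing the remote user the plays expect
ansible-playbook -i hosts.ini file-system.yml -e "USER=ubuntu"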

Set Up File System for Postgres

Set up a file system on a separate EBS volume to store Postgres data and other relevant data.

The reason for using storage separate from the root volume is that all application data is preserved each time a new server is provisioned.


YAML

- hosts: URL_shortener
  name: Configure file system, set data directory for postgres
  remote_user: "{{ USER }}"
  gather_facts: no
  vars_files:
    - ./vars.yml
  tasks:
    - name: Format file system
      args:
        executable: /bin/bash
      register: format_file_system
      shell: |
        file_system=$(lsblk -f | grep xvdf | awk '{print $2}')

        # IMPORTANT!!! Format the disk only if it is not already formatted, otherwise existing data will be wiped out
        if [ -z "$file_system" ]
        then
          echo "Formatting disk {{ EBS_DEVICE_PATH }}"
          sudo mkfs -t ext4 {{ EBS_DEVICE_PATH }}
        else
          echo "{{ EBS_DEVICE_PATH }} is already formatted"
        fi
    - debug: var=format_file_system.stdout_lines
    - debug: var=format_file_system.stderr_lines
    - name: Mount file system
      args:
        executable: /bin/bash
      register: configure_file_system
      shell: |
        file_system=$(lsblk -f | grep xvdf | awk '{print $2}')
        sudo mkdir -p {{ FS_MOUNT_PATH }}
        sudo mount {{ EBS_DEVICE_PATH }} {{ FS_MOUNT_PATH }}
        # Automatically mount an attached volume after reboot
        uuid=$(sudo blkid {{ EBS_DEVICE_PATH }} -s UUID -o value)
        num_existing_line=$(sudo cat /etc/fstab | grep $uuid | wc -l)
        if [ "$num_existing_line" -eq 0 ]
        then
          sudo cp /etc/fstab /etc/fstab.orig
          echo "UUID=$uuid  {{ FS_MOUNT_PATH }}  $file_system  defaults,nofail  0  2" | sudo tee -a /etc/fstab > /dev/null
          # Verify
          sudo umount {{ FS_MOUNT_PATH }}
          sudo mount -a
        else
          echo "{{ FS_MOUNT_PATH }} is already added to /etc/fstab"
        fi
        sudo file -s {{ EBS_DEVICE_PATH }}
        sudo lsblk -f
        df -h
    - debug: var=configure_file_system.stdout_lines
    - debug: var=configure_file_system.stderr_lines


Configure Postgres

Configure the Postgres data directory and copy data over.


YAML

- hosts: URL_shortener
  name: Configure file system, set data directory for postgres
  remote_user: "{{ USER }}"
  gather_facts: no
  vars_files:
    - ./vars.yml
  tasks:
    - name: Create postgres data directory on mounted volume and copy files from default directory
      args:
        executable: /bin/bash
      register: create_postgres_data_directory
      shell: |
        postgres_path="{{ FS_MOUNT_PATH }}/postgresql/13/data"
        if [ -d "$postgres_path" ]
        then
          echo "$postgres_path already exists."
        else
          sudo mkdir -p $postgres_path
          sudo chown -R postgres:postgres $postgres_path
          sudo chmod 700 $postgres_path
        fi
        # IMPORTANT!!! Do not copy files if they already exist, otherwise postgres data will be overwritten
        sudo rsync -av --ignore-existing /var/lib/postgresql/13/main/ $postgres_path
    - debug: var=create_postgres_data_directory.stdout_lines
    - debug: var=create_postgres_data_directory.stderr_lines
    - name: Configure postgresql.conf
      args:
        executable: /bin/bash
      register: postgres_conf_data_directory
      shell: |
        postgres_path="{{ FS_MOUNT_PATH }}/postgresql/13/data"
        sudo mkdir -p /etc/postgresql/13/main/conf.d
        cat <<EOF | sudo tee /etc/postgresql/13/main/conf.d/postgresql.conf > /dev/null
        #------------------------------------------------------------------------------
        # FILE LOCATIONS
        #------------------------------------------------------------------------------
        data_directory = '$postgres_path'
        #------------------------------------------------------------------------------
        # REPORTING AND LOGGING
        #------------------------------------------------------------------------------
        # - Where to Log -
        log_destination = 'stderr'
        logging_collector = on
        log_directory = 'log'
        log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
        log_file_mode = 0600
        log_rotation_age = 1d
        log_rotation_size = 10MB
        # - When to Log -
        log_min_messages = warning
        log_min_error_statement = error
        log_min_duration_sample = 100
        log_statement_sample_rate = 0.5
        log_transaction_sample_rate = 0.5
        # - What to Log -
        log_duration = on
        log_line_prefix = '%m [%p] [%d] %u %a'
        EOF
    - debug: var=postgres_conf_data_directory.stdout_lines
    - debug: var=postgres_conf_data_directory.stderr_lines
    - name: Configure postgres systemd
      args:
        executable: /bin/bash
      register: postgres_systemd
      shell: |
        postgres_path="{{ FS_MOUNT_PATH }}/postgresql/13/data"
        sudo mkdir -p /etc/systemd/system/postgresql.service.d
        sudo touch /etc/systemd/system/postgresql.service.d/override.conf
        num_existing_line=$(sudo cat /etc/systemd/system/postgresql.service.d/override.conf | grep Environment=PGDATA=$postgres_path | wc -l)

        if [ "$num_existing_line" -eq 0 ]
        then
          cat <<EOF | sudo tee -a /etc/systemd/system/postgresql.service.d/override.conf > /dev/null
          [Service]
          Environment=PGDATA=$postgres_path
        EOF
        else
          echo "PGDATA is already added to /etc/systemd/system/postgresql.service.d/override.conf"
        fi
        sudo systemctl daemon-reload
        sudo systemctl restart postgresql.service
    - debug: var=postgres_systemd.stdout_lines
    - debug: var=postgres_systemd.stderr_lines

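Once the playbook has run, it is worth confirming that Postgres really is serving from the new data directory. A quick check on the server:

Shell

# Ask the running server which data directory it is using
sudo -u postgres psql -c 'SHOW data_directory;'

# Confirm the systemd override drop-in is in effect
systemctl cat postgresql.service | grep PGDATA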

Configure Nginx

Configure SSL on Nginx, redirect HTTP to HTTPS, and proxy all requests to the port that the URL shortener container is listening on.


YAML

- hosts: URL_shortener
  name: Configure SSL for nginx using acme
  remote_user: "{{ USER }}"
  gather_facts: no
  vars_files:
    - ./vars.yml
  tasks:
    - name: Install required packages
      args:
        executable: /bin/bash
      become: yes
      become_method: su
      become_user: root
      become_exe: "sudo su -"
      register: install_required_packages
      shell: |
        export DEBIAN_FRONTEND=noninteractive
        apt-get -y install socat
        curl https://get.acme.sh | sh -s email={{ ADMIN_EMAIL }}
    - debug: var=install_required_packages.stdout_lines
    - debug: var=install_required_packages.stderr_lines

    - name: Install SSL cert
      args:
        executable: /bin/bash
      register: install_ssl
      become: yes
      become_method: su
      become_user: root
      become_exe: "sudo su -"
      async: 300
      poll: 15
      loop: "{{ DOMAINS }}"
      shell: |
        mkdir -p /etc/ssl/{{ item }}
        SSL_CERT_FILE_PATH="/etc/ssl/{{ item }}/certificate.crt"
        SSL_KEY_FILE_PATH="/etc/ssl/{{ item }}/private.key"
        if [ -f "$SSL_CERT_FILE_PATH" ] && [ -f "$SSL_KEY_FILE_PATH" ]
        then
          echo "SSL certs already exist for {{ item }}"
        else
          ~/.acme.sh/acme.sh --issue -d {{ item }} --nginx
          ~/.acme.sh/acme.sh --install-cert -d {{ item }} --key-file $SSL_KEY_FILE_PATH \
          --fullchain-file $SSL_CERT_FILE_PATH --reloadcmd "sudo systemctl reload nginx"
        fi
        # Run cron to renew certs to verify result
        ~/.acme.sh/acme.sh --cron
    - debug:
        var: item.stdout_lines
      loop: "{{ install_ssl.results }}"

    - name: Update nginx config
      args:
        executable: /bin/bash
      register: update_nginx_config
      shell: |
        NGINX_DIRECTORY="{{ FS_MOUNT_PATH }}/nginx"
        sudo mkdir -p $NGINX_DIRECTORY/{{ DOMAIN }}
        sudo mkdir -p $NGINX_DIRECTORY/{{ URL_REDIRECT_DOMAIN }}
        cat <<EOF | sudo tee /etc/nginx/sites-available/{{ DOMAIN }} > /dev/null
        upstream url_shortener_backend {
          server localhost:3000;
        }
        server {
          listen 80;
          listen [::]:80;
          server_name {{ DOMAIN }};
          return 301 https://\$host\$request_uri;
        }
        limit_req_zone \$binary_remote_addr zone=url_shorten_limit:10m rate=3r/s;
        server {
          listen 443 ssl;
          ssl_certificate      /etc/ssl/{{ DOMAIN }}/certificate.crt;
          ssl_certificate_key  /etc/ssl/{{ DOMAIN }}/private.key;
          server_name {{ DOMAIN }};
          access_log $NGINX_DIRECTORY/{{ DOMAIN }}/access.log;
          error_log $NGINX_DIRECTORY/{{ DOMAIN }}/error.log;
          server_tokens off;
          client_max_body_size 1m;
          location = /favicon.ico {
            return 204;
            access_log     off;
            log_not_found  off;
          }
          location / {
            limit_req zone=url_shorten_limit burst=6 delay=2;
            limit_req_status 444;
            proxy_pass http://url_shortener_backend;
            proxy_set_header X-Real-IP \$remote_addr;
            proxy_set_header X-Forwarded-Host \$host;
            proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto \$scheme;
          }
        }
        server {
          listen 80;
          listen [::]:80;
          server_name {{ URL_REDIRECT_DOMAIN }};
          return 301 https://\$host\$request_uri;
        }
        limit_req_zone \$binary_remote_addr zone=url_redirect_limit:10m rate=20r/s;
        server {
          listen 443 ssl;
          ssl_certificate      /etc/ssl/{{ URL_REDIRECT_DOMAIN }}/certificate.crt;
          ssl_certificate_key  /etc/ssl/{{ URL_REDIRECT_DOMAIN }}/private.key;
          server_name {{ URL_REDIRECT_DOMAIN }};
          access_log $NGINX_DIRECTORY/{{ URL_REDIRECT_DOMAIN }}/access.log;
          error_log $NGINX_DIRECTORY/{{ URL_REDIRECT_DOMAIN }}/error.log;
          server_tokens off;
          client_max_body_size 1m;
          location = /favicon.ico {
            return 204;
            access_log     off;
            log_not_found  off;
          }

          location / {
            limit_req zone=url_redirect_limit burst=40 nodelay;
            limit_req_status 444;
            proxy_pass http://url_shortener_backend;
            proxy_set_header X-Real-IP \$remote_addr;
            proxy_set_header X-Forwarded-Host \$host;
            proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto \$scheme;
          }
        }
        EOF
        sudo ln -sf /etc/nginx/sites-available/{{ DOMAIN }} /etc/nginx/sites-enabled/
        sudo nginx -t
        sudo systemctl reload nginx
    - debug: var=update_nginx_config.stdout_lines
    - debug: var=update_nginx_config.stderr_lines

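Once Nginx reloads cleanly, the redirect and the rate-limited proxy can be exercised from any machine. A quick check (the domain below is a placeholder for your own):

Shell

# Expect a 301 from the plain-HTTP server block
curl -I http://short.example.com/

# Expect the proxied backend response from the HTTPS server block
curl -i https://short.example.com/healthz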

Schedule Start and Stop

As an additional step, I wanted to schedule the start and stop of the EC2 instance and the application via a scheduled workflow in GitHub Actions.

This consists of the tasks defined in the configuration step, plus interaction with the AWS CLI and the GitHub API.

The script can be found here.
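The GitHub Actions piece is a cron-triggered workflow; the AWS side reduces to a few CLI calls. A minimal sketch, assuming the AWS CLI is configured and the instance ID is known (the ID below is a placeholder):

Shell

# Stop the instance outside working hours
aws ec2 stop-instances --instance-ids i-0123456789abcdef0

# Start it again and wait until it is running before restarting the application
aws ec2 start-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-running --instance-ids i-0123456789abcdef0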

Summary

The end result is a full pipeline, from infrastructure provisioning to application deployment.

Now that I have a basic understanding of DevOps, my next step is to explore site reliability, starting with observability.


Published at DZone with permission of Han Chiang. See the original article here.
