
Deploying Rancher Server in HA Mode With Infrastructor


This article will show how to deploy a small Rancher Server cluster in HA mode with Infrastructor, an open source Infrastructure as Code framework.

Infrastructor is an open source Infrastructure as Code framework written in Groovy. Rancher Server is an open source container management platform. This article will show how to use Infrastructor to deploy a small Rancher Server cluster of two nodes in HA mode. 

Requirements

We will use Vagrant and VirtualBox to create a set of virtual machines to deploy the services to. Make sure you have installed the following set of packages on your machine:

  • Oracle Java 8 (JDK or JRE)

  • Infrastructor (current version is rc-2017-08-12)

  • VirtualBox (current version is 5.1.26)

  • Vagrant (current version is 1.9.7)

Setup Configuration

The target configuration consists of two Rancher Server instances running in HA mode and a MySQL instance to store and share Rancher's metadata. In addition, we are going to install an HAProxy service to load-balance HTTP requests between the Rancher instances.

The first step is to create four Ubuntu virtual machines with Vagrant and VirtualBox. Then we will run an Infrastructor provisioning script to deploy all the services described above.

Preparing Nodes for Deployment

Let's start with the creation of the nodes we are going to provision. To do so, create a folder and add a Vagrantfile to it:

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/xenial64"
  config.vm.provider "virtualbox" do |v|
    # make VM boot faster
    v.customize ["modifyvm", :id, "--uartmode1", "disconnected"]
  end
  config.vm.define "haproxy" do |node|
      node.vm.network "private_network", ip: "192.168.55.10"
      node.vm.provider :virtualbox do |vb|
        vb.memory = 512
      end
  end
  config.vm.define "rancher-1" do |node|
      node.vm.network "private_network", ip: "192.168.55.11"
      node.vm.provider :virtualbox do |vb|
        vb.memory = 2048
      end
  end
  config.vm.define "rancher-2" do |node|
      node.vm.network "private_network", ip: "192.168.55.12"
      node.vm.provider :virtualbox do |vb|
        vb.memory = 2048
      end
  end
  config.vm.define "mysql" do |node|
      node.vm.network "private_network", ip: "192.168.55.13"
      node.vm.provider :virtualbox do |vb|
        vb.memory = 1024
      end
  end
end

The Vagrantfile contains a declaration of four virtual machines based on the ubuntu/xenial64 image. Each machine has a unique IP address on a private network. Infrastructor will initialize SSH connections to provision services to the machines.

To launch all VMs, execute the following command from a terminal:

vagrant up

Inventory Declaration

Once all nodes are up and running, we can create an Infrastructor provisioning script defining all the tasks needed to set up the required services. Let's define an inventory first:

inlineInventory {
  node (id: 'haproxy') {
     host = '192.168.55.10'
     username = 'ubuntu'
     keyfile = keypath('haproxy')  
     tags = [role: 'haproxy']
  }
  node (id: 'rancher-1') {
     host = '192.168.55.11'
     username = 'ubuntu'
     keyfile = keypath('rancher-1')  
     tags = [role: 'rancher']
  }
  node (id: 'rancher-2') {
     host = '192.168.55.12'
     username = 'ubuntu'
     keyfile = keypath('rancher-2')  
     tags = [role: 'rancher']
  }
  node (id: 'mysql') {
     host = '192.168.55.13'
     username = 'ubuntu'
     keyfile = keypath('mysql')
     tags = [role: 'mysql']
  }
}.provision {
 // provisioning tasks will be here 
}

def keypath(def machine) { ".vagrant/machines/$machine/virtualbox/private_key" }

The code snippet above contains a declaration of an inventory with four nodes. This information includes connectivity settings plus some metadata, such as tags and ids. The keypath function has been defined to specify the keyfile of each node: it returns the path to the private key that Infrastructor will use to connect to the corresponding node. Save this file in the same directory where the Vagrantfile is located. We will execute it with the Infrastructor CLI later.

Docker Setup

To run all services (HAProxy, Rancher Server, and MySQL), we will use Docker. Let's install it on all nodes using Infrastructor. To do so, we define a task with three shell commands to execute on each node:

inlineInventory {
  // some lines of code have been skipped
  // see the final script at the end of the article
}.provision {
  task name: 'install docker on all nodes', parallel: 4, actions: {
    shell "sudo apt update"
    shell "sudo curl -sSL https://get.docker.com | sh"
    shell "sudo usermod -aG docker ubuntu"
  }
}

There are several ways to install Docker. In the example above, we run the installation script provided by get.docker.com. You can change the task to use 'apt install' instead, or apply any alternative installation approach you like. The installation of Docker is executed on all nodes in parallel.
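As a side note, the parallel execution model (here, four workers, one per node) can be sketched in plain Python. This is only an illustration, not Infrastructor code: the node names and commands come from the example above, and the remote SSH execution is replaced by a local print.

```python
# Illustrative sketch of parallel provisioning (hypothetical, not Infrastructor):
# run the same list of commands against each node, four nodes at a time.
from concurrent.futures import ThreadPoolExecutor

COMMANDS = [
    "sudo apt update",
    "sudo curl -sSL https://get.docker.com | sh",
    "sudo usermod -aG docker ubuntu",
]

NODES = ["haproxy", "rancher-1", "rancher-2", "mysql"]

def provision(node: str) -> str:
    # A real tool would run each command over SSH on the node;
    # here we just print what would be executed.
    for cmd in COMMANDS:
        print(f"[{node}] {cmd}")
    return node

# parallel: 4 -- one worker per node, as in the Infrastructor task above.
with ThreadPoolExecutor(max_workers=4) as pool:
    done = list(pool.map(provision, NODES))
```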

MySQL Setup

The next service we need to set up is MySQL, which will be used by the Rancher Server instances to store their configuration:

inlineInventory {
  // some lines are skipped
  // see the complete script at the end of the article
}.provision {
  def MYSQL_PASSWORD = \
  input message: "Enter MySQL password for the rancher user: ", secret: true

  task name: 'run a mysql server', filter: {'role:mysql'}, actions: {
    info "launching a mysql instance"
    shell "docker rm -f rancher-storage || true"
    shell ("docker run -p 3306:3306 -d " + \
           "-e MYSQL_RANDOM_ROOT_PASSWORD=yes " + \
           "-e MYSQL_PASSWORD=$MYSQL_PASSWORD " + \
           "-e MYSQL_USER=rancher " + \
           "-e MYSQL_DATABASE=rancher " + \
           "--name rancher-storage mysql:5.6.37")

    info "waiting for the mysql instance to be up and running"
    retry count: 10, delay: 2000, actions: {
      assert canConnectTo {
        port = 3306
        host = node.host
      }
    }
  }
}

The script above contains a task to launch a MySQL Docker container on the node tagged role:mysql. Before executing the task, the script will ask you to enter a password for the rancher user at runtime. After launching the MySQL container, Infrastructor will start checking TCP port 3306. When the port is ready to accept connections, we can move on to the next step: Rancher Server setup.
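The retry/canConnectTo combination amounts to a TCP health check with a bounded number of attempts. A rough Python equivalent (a sketch under the same count/delay parameters, not Infrastructor's actual implementation) looks like this:

```python
# Sketch of a retrying TCP health check, mirroring retry + canConnectTo.
import socket
import time

def wait_for_port(host: str, port: int, count: int = 10, delay: float = 2.0) -> bool:
    """Return True once the port accepts a TCP connection; give up after `count` tries."""
    for _ in range(count):
        try:
            with socket.create_connection((host, port), timeout=2):
                return True  # the service is accepting connections
        except OSError:
            time.sleep(delay)  # not ready yet, wait before the next attempt
    return False
```

With count: 10 and delay: 2000, Infrastructor gives MySQL roughly 20 seconds to start accepting connections before the assertion fails.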

Rancher Server Setup

We will deploy two Rancher Server instances on the two corresponding nodes in parallel. We will also provide them with connection settings for the previously launched MySQL server, so that the Rancher instances share the same configuration:

inlineInventory {
  // some lines are skipped
  // see the complete script at the end of the article
}.provision {
  task name: 'run a couple of rancher servers', parallel: 2, filter: {'role:rancher'}, actions: {
    shell "docker rm -f rancher-server || true"
    shell ("docker run -d -p 80:8080 " + \
           "-p 9345:9345 --restart=always " + \
           "--name=rancher-server rancher/server:v1.6.4 " + \
           "--advertise-address ${node.host} " + \
           "--advertise-http-port 80 " + \
           "--db-host 192.168.55.13 " + \
           "--db-port 3306 " + \
           "--db-pass $MYSQL_PASSWORD " + \
           "--db-user rancher " + \
           "--db-name rancher")
  }
}

Rancher uses port 9345 for HA communication, so we need to expose this port. We also advertise each instance's address using the host property of the node being provisioned.

Setup HAProxy

The final service we want to launch is HAProxy, which will load balance HTTP requests between the Rancher instances:

inlineInventory {
  // some lines are skipped
  // see the complete script at the end of the article
}.provision {
  task name: 'run an haproxy load balancer', filter: {'role:haproxy'}, actions: {
    info "uploading haproxy configuration"
    directory sudo: true, target: '/etc/haproxy', mode: 600
    template(mode: 600, sudo: true) {
      source = 'templates/haproxy.cfg'
      target = '/etc/haproxy/haproxy.cfg'
      bindings = [
        port: 80,
        servers: [
          [name: 'rancher1', host: '192.168.55.11', port: 80],
          [name: 'rancher2', host: '192.168.55.12', port: 80]
        ]
      ]
    }

    info "launching haproxy instance"
    shell "docker run -d -v /etc/haproxy:/usr/local/etc/haproxy:ro -p 80:80 --name haproxy haproxy:1.7"

    info "waiting for haproxy to be up and running"
    retry count: 10, delay: 5000, actions: {
      def response = httpGet url: "http://$node.host/ping"
      assert response.code == 200
    }
  }
}

To generate the HAProxy configuration, we use a template file. Save it as templates/haproxy.cfg next to the provisioning script. Infrastructor utilizes Groovy's SimpleTemplateEngine to process templates.

global
  daemon
  maxconn 256

defaults
  mode http
  timeout connect 5000ms
  timeout client 50000ms
  timeout server 50000ms

listen http-in
  bind *:$port
<% servers.each { out.println "  server $it.name $it.host:$it.port maxconn 32" } %>

Infrastructor will copy the HAProxy configuration to the node, launch an HAProxy instance, and wait until it is up and running by periodically calling the /ping endpoint.
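For reference, the expansion the template performs on the listen section can be sketched in plain Python (this only illustrates the rendered output; the actual rendering is done by Groovy's SimpleTemplateEngine with the bindings from the task above):

```python
# Illustrative re-rendering of the template's "listen" section in Python.
bindings = {
    "port": 80,
    "servers": [
        {"name": "rancher1", "host": "192.168.55.11", "port": 80},
        {"name": "rancher2", "host": "192.168.55.12", "port": 80},
    ],
}

lines = ["listen http-in", f"  bind *:{bindings['port']}"]
for s in bindings["servers"]:
    lines.append(f"  server {s['name']} {s['host']}:{s['port']} maxconn 32")
rendered = "\n".join(lines)
print(rendered)
# listen http-in
#   bind *:80
#   server rancher1 192.168.55.11:80 maxconn 32
#   server rancher2 192.168.55.12:80 maxconn 32
```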

Putting It All Together

Now it is time to take a look at the final script:

inlineInventory {
  node (id: 'haproxy') {
     host = '192.168.55.10'
     username = 'ubuntu'
     keyfile = keypath('haproxy')  
     tags = [role: 'haproxy']
  }
  node (id: 'rancher-1') {
     host = '192.168.55.11'
     username = 'ubuntu'
     keyfile = keypath('rancher-1')  
     tags = [role: 'rancher']
  }
  node (id: 'rancher-2') {
     host = '192.168.55.12'
     username = 'ubuntu'
     keyfile = keypath('rancher-2')  
     tags = [role: 'rancher']
  }
  node (id: 'mysql') {
     host = '192.168.55.13'
     username = 'ubuntu'
     keyfile = keypath('mysql')
     tags = [role: 'mysql']
  }
}.provision {
  task name: 'install docker on all hosts', parallel: 4, actions: {
    shell "sudo apt update"
    shell "sudo curl -sSL https://get.docker.com | sh"
    shell "sudo usermod -aG docker ubuntu"
  }

  def MYSQL_PASSWORD = input message: "Enter MySQL password for rancher user: ", secret: true

  task name: 'run a mysql server', filter: {'role:mysql'}, actions: {
    info "launching a mysql instance"
    shell "docker rm -f rancher-storage || true"
    shell ("docker run -p 3306:3306 " + \
           "-d -e MYSQL_RANDOM_ROOT_PASSWORD=yes " + \
           "-e MYSQL_PASSWORD=$MYSQL_PASSWORD " + \
           "-e MYSQL_USER=rancher " + \
           "-e MYSQL_DATABASE=rancher " + \
           "--name rancher-storage mysql:5.6.37")

    info "waiting for the mysql instance to be up and running"
    retry count: 10, delay: 2000, actions: {
      assert canConnectTo {
        port = 3306
        host = node.host
      }
    }
  }

  task name: 'run a couple of rancher servers', parallel: 2, filter: {'role:rancher'}, actions: {
    shell "docker rm -f rancher-server || true"
    shell ("docker run -d -p 80:8080 " + \
           "-p 9345:9345 --restart=always " + \
           "--name=rancher-server rancher/server:v1.6.4 " + \
           "--advertise-address ${node.host} " + \
           "--advertise-http-port 80 " + \
           "--db-host 192.168.55.13 " + \
           "--db-port 3306 " + \
           "--db-pass $MYSQL_PASSWORD " + \
           "--db-user rancher " + \
           "--db-name rancher")
  }

  task name: 'run an haproxy load balancer', filter: {'role:haproxy'}, actions: {
    info "uploading haproxy configuration"
    directory sudo: true, target: '/etc/haproxy', mode: 600
    template(mode: 600, sudo: true) {
      source = 'templates/haproxy.cfg'
      target = '/etc/haproxy/haproxy.cfg'
      bindings = [
        port: 80,
        servers: [
          [name: 'rancher1', host: '192.168.55.11', port: 80],
          [name: 'rancher2', host: '192.168.55.12', port: 80]
        ]
      ]
    }

    info "launching haproxy instance"
    shell "docker run -d -v /etc/haproxy:/usr/local/etc/haproxy:ro -p 80:80 --name haproxy haproxy:1.7"

    info "waiting for haproxy to be up and running"
    retry count: 10, delay: 5000, actions: {
      def response = httpGet url: "http://$node.host/ping"
      assert response.code == 200
    }
  }
}

def keypath(def machine) { ".vagrant/machines/$machine/virtualbox/private_key" }

Save it in the same folder where the Vagrantfile is located and name it provision.groovy. Then run it using the Infrastructor CLI:

infrastructor run -f provision.groovy

After the execution has finished, we can open http://192.168.55.10 in a web browser to see the Rancher Server UI. We can also check that it is running in HA mode.


Summary

To implement the provisioning script above, we used several important Infrastructor features:

  1. Inline inventory definition - we define the inventory in the same file where all the tasks are defined.

  2. Parallel provisioning - we set up Docker on all nodes in parallel, and do the same for both Rancher Server instances.

  3. Node tags and task filtering - we group nodes by tags (role:haproxy, role:mysql, and role:rancher) and apply different actions to each group.

  4. Secret input - the DB password is requested at runtime during provisioning.

  5. Retry function for health checks - we check TCP port connectivity and HTTP GET responses to see if a service is up and running.

The code examples can also be pulled from the GitHub repository.

For more information about Infrastructor, please visit the GitHub project page and wiki.

 
