
DevOps Intro for Developers: Part II

I explain how to build data centers with two Racks per data center, with each Rack connected by a Gateway, and each data center connected by a Router.


This article continues a previous one, but I’m going to start from scratch.

I’m going to build:

  1. Two data centers named DC1 and DC2, created as two different Vagrant VM networks.
  2. Two Racks per data center: DC1-RC1 and DC1-RC2, and DC2-RC1 and DC2-RC2.
  3. Each Rack connected by a Gateway.
  4. Each data center connected by a Router.
  5. Finally, OpenVPN to connect both data centers.

[Image: distributed system architecture — two data centers, each with two Racks, Gateways, and a Router]

All the hardware nodes and devices are provisioned via shell scripts, Ruby, and Vagrant.

To follow along, you should first understand the basics of networking, Ruby, shell scripting, and the Vagrant and Docker environments.

Before moving ahead, I need a simple utility to generate an IP address range for the given CIDR.

I wrote a basic code in Ruby that generates that:

# Generate IPs in a given range
# ip_list = Nodemanager.convertIPrange('192.168.1.2', '192.168.1.20')

module Nodemanager

  # Generates the range of IPs from first to last.
  # Assumption: we are only dealing with IPv4 addresses.
  def convertIPrange(first, last)
    first, last = [first, last].map { |s| s.split(".").inject(0) { |i, o| 256 * i + o.to_i } }
    (first..last).map do |q|
      a = []
      (q, r = q.divmod(256)) && a.unshift(r) until q.zero?
      a.join(".")
    end
  end
end
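As a quick sanity check, the helper can be exercised on its own. Here I re-declare the module inline instead of `require`'ing `modules/Nodemanager.rb`, so the snippet is self-contained; the range matches the one used later for the dc1-rc rack definition:

```ruby
# Self-contained check of the IP-range helper (module re-declared inline
# for illustration; in the project it lives in modules/Nodemanager.rb).
module Nodemanager
  def convertIPrange(first, last)
    first, last = [first, last].map { |s| s.split(".").inject(0) { |i, o| 256 * i + o.to_i } }
    (first..last).map do |q|
      a = []
      (q, r = q.divmod(256)) && a.unshift(r) until q.zero?
      a.join(".")
    end
  end
end

# Top-level include mixes the method into Object, which is why the
# Vagrantfile can call it as Nodemanager.convertIPrange(...)
include Nodemanager

puts Nodemanager.convertIPrange('192.168.1.2', '192.168.1.3').inspect
# => ["192.168.1.2", "192.168.1.3"]
```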

Now, I need to load all dependencies in my Berksfile. A Berksfile is like a dependency manager for Chef (a provisioning tool).

It can be compared with Maven and Gradle (Java), Nuget (Dotnet), Composer (PHP), Bundler (Ruby), or Grunt and Gulp (NodeJS).

name             'basedatacenter'
maintainer       'Ashwin Rayaprolu'
maintainer_email 'ashwin.rayaprolu@gmail.com'
license          'All rights reserved'
description      'Installs/Configures Distributed Workplace'
long_description 'Installs/Configures Distributed Workplace'
version          '1.0.0'


depends 'apt', '~> 2.9'
depends 'firewall', '~> 2.4'
depends 'apache2', '~> 3.2.2'
depends 'mysql', '~> 8.0'  
depends 'mysql2_chef_gem', '~> 1.0'
depends 'database', '~> 5.1'  
depends 'java', '~> 1.42.0'
depends 'users', '~> 3.0.0'
depends 'tarball'

Before moving ahead, I want to list my base environment. I have two host machines: one on CentOS 7 and the other on CentOS 6.


uname -r             == 3.10.0-327.22.2.el7.x86_64
vboxmanage --version == 5.1.2r108956
berks --version      == 4.3.5
vagrant --version    == Vagrant 1.8.5
ruby --version       == ruby 2.3.1p112 (2016-04-26 revision 54768) [x86_64-linux]
vagrant plugin list
    vagrant-berkshelf (5.0.0)
    vagrant-hostmanager (1.8.5)
    vagrant-omnibus (1.5.0)
    vagrant-share (1.1.5, system)

Now, let me write a basic Vagrant file to start my VMs:

# -*- mode: ruby -*-
# vi: set ft=ruby :

require './modules/Nodemanager.rb'

include Nodemanager

@IPAddressNodeHash = Hash.new {|h,k| h[k] = Array.new }
# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = '2'

Vagrant.require_version '>= 1.5.0'

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|


  # Create Share for us to Share some files
  config.vm.synced_folder "share/", "/usr/devenv/share/", disabled: false
  # Disable Default Vagrant Share
  config.vm.synced_folder ".", "/vagrant", disabled: true

  # Setup resource requirements
  config.vm.provider "virtualbox" do |v|
    v.memory = 2048
    v.cpus = 2
  end

  # vagrant plugin install vagrant-hostmanager
  config.hostmanager.enabled = false
  config.hostmanager.manage_host = false
  config.hostmanager.manage_guest = true
  config.hostmanager.ignore_private_ip = false
  config.hostmanager.include_offline = true

  # NOTE: You will need to install the vagrant-omnibus plugin:
  #
  #   $ vagrant plugin install vagrant-omnibus
  #
  if Vagrant.has_plugin?("vagrant-omnibus")
    config.omnibus.chef_version = '12.13.37'
  end

  config.vm.box = 'bento/ubuntu-16.04'
  config.vm.network :private_network, type: 'dhcp'
  config.berkshelf.enabled = true



  # Assumes that the Vagrantfile is in the root of our
  # Chef repository.
  root_dir = File.dirname(File.expand_path(__FILE__))

  # Assumes that the node definitions are in the nodes
  # subfolder
  nodetypes = Dir[File.join(root_dir,'nodes','*.json')]

  ipindex = 0
  # Iterate over each of the JSON files
  nodetypes.each do |file|
    puts "parsing #{file}"
    node_json = JSON.parse(File.read(file))

    # Only process the node if it has a vagrant section
    if node_json["vagrant"]
      @IPAddressNodeHash[node_json["vagrant"]["name"]] = Nodemanager.convertIPrange(node_json["vagrant"]["start_ip"], node_json["vagrant"]["end_ip"])

      1.upto(node_json["NumberOfNodes"]) do |nodeIndex|

        ipindex = ipindex + 1

        # Allow us to remove certain items from the run_list if we're
        # using vagrant. Useful for things like networking configuration
        # which may not apply.
        if exclusions = node_json["vagrant"]["exclusions"]
          exclusions.each do |exclusion|
            if node_json["run_list"].delete(exclusion)
              puts "removed #{exclusion} from the run list"
            end
          end
        end

        vagrant_name = node_json["vagrant"]["name"] + "-#{nodeIndex}"
        is_public = node_json["vagrant"]["is_public"]
        vagrant_ip = @IPAddressNodeHash[node_json["vagrant"]["name"]][nodeIndex - 1]

        config.vm.define vagrant_name, autostart: true do |vagrant|

          vagrant.vm.hostname = vagrant_name
          puts "Working with host #{vagrant_name} with IP: #{vagrant_ip}"

          # Only use private networking if we specified an IP.
          # Otherwise fall back to DHCP. IP/28 is CIDR.
          if vagrant_ip
            vagrant.vm.network :private_network, ip: vagrant_ip, :netmask => "255.255.255.240"
          end

          if is_public
            config.vm.network "public_network", type: "dhcp", bridge: "em1"
          end

          # hostmanager provisioner
          config.vm.provision :hostmanager

          vagrant.vm.provision :chef_solo do |chef|
            chef.data_bags_path = "data_bags"
            chef.json = node_json
          end

        end # End of VM config

      end # End of node iteration on count
    end # End of vagrant section
  end # End of each node-type file



end

Finally, run Vagrant. A sample output is attached below.

I’m creating two VMs for the two Racks and one VM for the Gateway. Three VMs are now up and running: two represent our virtual Racks, and the third is the Gateway. All of them run on a private IP network that is inaccessible from the outside world except through the gateway node.

Our gateway node has two different Ethernet interfaces: one connected to the private network and the other connected to the host network. Below are the specific lines that define the kind of network that gets created.

              # Only use private networking if we specified an
              # IP. Otherwise fallback to DHCP
              # IP/28 is CIDR
              if vagrant_ip
                vagrant.vm.network :private_network, ip: vagrant_ip,  :netmask => "255.255.255.240"
              end

              if is_public
                config.vm.network "public_network", type: "dhcp", bridge: "em1"
              end

Sample output on Vagrant up:

[Image: sample output of vagrant up]

I define the node configuration in JSON files to keep things simple. Below are sample node-type JSON definitions for both the Gateway node and the Rack node.

Below is the definition for a Rack. I tried to add as many comments as possible to explain each field.

If you look at the node definitions below, I’ve given the node-name prefix in the config file, along with the start and end IPs of the range. Apart from that, I define the recipes that Chef needs to load for this specific node type.

// This is JSON definition used to create Virtual Racks
{
  "NumberOfNodes":2,
  "environment":"production",
  "authorization": {
    "sudo": {
      // the deploy user specifically gets sudo rights
      // if you're using vagrant it's worth adding "vagrant"
      // to this array
      // The password for the deploy user is set in data_bags/users/deploy.json
      // and should be generated using:
      // openssl passwd -1 "plaintextpassword"
      "users": ["deploy", "vagrant"]
    }
  },
  // See http://www.talkingquickly.co.uk/2014/08/auto-generate-vagrant-machines-from-chef-node-definitions/ for more on this
  "vagrant" : {
    "exclusions" : [],
    "name" : "dc1-rc",
    "ip" : "192.168.1.2",
    "start_ip":"192.168.1.2",
    "end_ip":"192.168.1.3"
  },
  "mysql": {
      "server_root_password": "rootpass",
      "server_debian_password": "debpass",
      "server_repl_password": "replpass"
  },
  "data_bags_path":"data_bags",
  "run_list":
  [
    "recipe[basedatacenter::platform]",
    "recipe[basedatacenter::users]",
    "recipe[basedatacenter::docker]"

  ]
}

Below is the node definition for Gateway:

// Sample JSON definition to create Gateway Node.
// If you observe this one also has public IP along with private
{
  "NumberOfNodes":1,
  "environment":"production",
  "authorization": {
    "sudo": {
      // the deploy user specifically gets sudo rights
      // if you're using vagrant it's worth adding "vagrant"
      // to this array
      // The password for the deploy user is set in data_bags/users/deploy.json
      // and should be generated using:
      // openssl passwd -1 "plaintextpassword"
      "users": ["deploy", "vagrant"]
    }
  },
  // See http://www.talkingquickly.co.uk/2014/08/auto-generate-vagrant-machines-from-chef-node-definitions/ for more on this
  "vagrant" : {
    "exclusions" : [],
    "name" : "dc1-gw",
    "ip" : "192.168.1.5",
    "start_ip":"192.168.1.4",
    "end_ip":"192.168.1.4",
    "is_public":true
  },
  "mysql": {
      "server_root_password": "rootpass",
      "server_debian_password": "debpass",
      "server_repl_password": "replpass"
  },
  "data_bags_path":"data_bags",
  "run_list":
  [
    "recipe[basedatacenter::platform]"
  ]
}

Before moving on to the next step, I need to install five nodes on each Rack. This is taken care of by Docker.

When creating a distributed environment, the recommended replication factor is three. Once I add failover scenarios on top of that, my recommendation is five nodes with a replication factor of three.

If we can spread these five nodes across different power feeds and different Racks, we get one of the best uptime profiles.
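The recommendation above boils down to majority-quorum arithmetic. The sketch below is my own back-of-the-envelope check, not output from any tool in this setup: with a replication factor of three, a consistent operation needs two replicas, so the cluster survives the loss of one replica at a time.

```ruby
# Back-of-the-envelope quorum arithmetic for the layout described above.
# Standard majority-quorum formulas; added here for illustration only.
replication_factor = 3
quorum    = replication_factor / 2 + 1   # replicas needed for a consistent operation
tolerated = replication_factor - quorum  # simultaneous replica losses we can absorb

puts "quorum size:               #{quorum}"     # => 2
puts "replica losses we survive: #{tolerated}"  # => 1

# If each of the 3 replicas sits in a different Rack (5 nodes give us the
# slack to keep that true even during maintenance), losing a whole Rack or
# power feed costs at most one replica, which the quorum survives.
```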

Docker is a containerization tool that mimics a VM but is very lightweight. We are using Docker containers to mimic real-world nodes.

# The below script will be run on each Rack of each data center.
# It installs the Docker environment.
apt-get install -y curl &&
apt-get install -y apt-transport-https ca-certificates &&
apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D &&
touch /etc/apt/sources.list.d/docker.list &&
echo "deb https://apt.dockerproject.org/repo ubuntu-xenial main" >> /etc/apt/sources.list.d/docker.list &&
apt-get update &&
apt-get purge -y lxc-docker &&
apt-get install -y linux-image-extra-$(uname -r) linux-image-extra-virtual &&
apt-get update &&
apt-get install -y docker-engine &&
curl -L https://github.com/docker/machine/releases/download/v0.7.0/docker-machine-`uname -s`-`uname -m` > /usr/local/bin/docker-machine &&
chmod +x /usr/local/bin/docker-machine &&
curl -L https://github.com/docker/compose/releases/download/1.8.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose &&
chmod +x /usr/local/bin/docker-compose &&
usermod -aG docker vagrant  # let the vagrant user run docker without sudo

Once Docker is set up on all Racks, we need to install all the nodes. Below is a base version of the Dockerfile that I use. My next step is to set up containers on each Rack so that we can replicate multiple data center and multiple Rack scenarios.

I’m going to create five containers on each Rack, and each container will use Ubuntu Xenial as its base OS. I’m going to install the Oracle JDK 7 on all of them.

My use case for this distributed architecture is based on an HDFS and Cassandra setup, so I need to install Java first. The install script shown above is run by Vagrant and Chef to install Docker on each of the Racks.

# The below Dockerfile is used to set up all Docker boxes on each Rack.
# I'll update it further as I add more services.
FROM ubuntu:16.04
MAINTAINER Ashwin Rayaprolu

RUN apt-get update
RUN apt-get dist-upgrade -y

RUN DEBIAN_FRONTEND=noninteractive apt-get -y dist-upgrade
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install python-software-properties
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install software-properties-common
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install byobu curl git htop man unzip vim wget

# Install Java.
RUN \
  echo oracle-java7-installer shared/accepted-oracle-license-v1-1 select true | debconf-set-selections && \
  add-apt-repository -y ppa:webupd8team/java && \
  apt-get update && \
  apt-get install -y oracle-java7-installer && \
  rm -rf /var/lib/apt/lists/* && \
  rm -rf /var/cache/oracle-jdk7-installer


# Install InetUtils for Ping/traceroute/ifconfig
RUN apt-get update
# For Ifconfig and other commands
RUN apt-get install -y net-tools
# For ping command
RUN apt-get install -y iputils-ping 
# For Traceroute
RUN apt-get install -y inetutils-traceroute



# Define working directory.
WORKDIR /data

# Define commonly used JAVA_HOME variable
ENV JAVA_HOME /usr/lib/jvm/java-7-oracle

# Define default command.
CMD ["bash"]

Node Network on 10.18.1.2/28

We have multiple options for creating networks in Docker. I’m going to go with bridge networking and will discuss the other options later. For now, assuming a bridge network, below are the commands to create a network and attach it to a container.

We need to make sure we have a different range of networks on each rack and each data center so that we don’t overlap IPs between different Racks and data centers.
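The carve-up of 10.18.1.0/24 into one /28 block (16 addresses) per rack can be sketched in Ruby. This is an illustration I added, not part of the original tooling; the rack names simply mirror the docker network commands below:

```ruby
# Sketch: allocate one non-overlapping /28 block (16 addresses) per rack,
# matching the ranges used by the docker network commands in this article.
require 'ipaddr'
require 'socket'

racks = %w[dc1-rack1 dc1-rack2 dc2-rack1 dc2-rack2]
blocks = racks.each_with_index.map do |rack, i|
  base = IPAddr.new("10.18.1.#{i * 16}")                 # first address of the block
  last = IPAddr.new(base.to_i + 15, Socket::AF_INET)     # last address of the block
  [rack, "#{base}/28", "#{base}-#{last}"]
end

blocks.each { |rack, cidr, range| puts "#{rack}: #{cidr}  (#{range})" }
# dc1-rack1: 10.18.1.0/28  (10.18.1.0-10.18.1.15)
# dc1-rack2: 10.18.1.16/28  (10.18.1.16-10.18.1.31)
# dc2-rack1: 10.18.1.32/28  (10.18.1.32-10.18.1.47)
# dc2-rack2: 10.18.1.48/28  (10.18.1.48-10.18.1.63)
```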

# The below command creates a network in our desired range (dc1-rack1):
# 10.18.1.0 to 10.18.1.15
docker network create -d bridge \
  --subnet=10.18.0.0/16 \
  --gateway=10.18.1.1 \
  --ip-range=10.18.1.2/28 \
  my-multihost-network

# The below command creates a network in our desired range (dc1-rack2):
# 10.18.1.16 to 10.18.1.31
docker network create -d bridge \
  --subnet=10.18.0.0/16 \
  --gateway=10.18.1.1 \
  --ip-range=10.18.1.19/28 \
  my-multihost-network

# The below command creates a network in our desired range (dc2-rack1):
# 10.18.1.32 to 10.18.1.47
docker network create -d bridge \
  --subnet=10.18.0.0/16 \
  --gateway=10.18.1.1 \
  --ip-range=10.18.1.36/28 \
  my-multihost-network

# The below command creates a network in our desired range (dc2-rack2):
# 10.18.1.48 to 10.18.1.63
docker network create -d bridge \
  --subnet=10.18.0.0/16 \
  --gateway=10.18.1.1 \
  --ip-range=10.18.1.55/28 \
  my-multihost-network

# -d runs the container in the background; -it allocates an interactive TTY
docker run -itd multinode_node1

# Connect the newly created network to each container by name
docker network connect my-multihost-network docker_node_name

I will write code to automate all of the above tasks in subsequent articles. I’m going to use Docker Compose to build the individual nodes in each Rack.

A very basic version would look like this:

version: '2'
services:
  node1:
    build: node1/
  node2:
    build: node2/
  node3:
    build: node3/


The source code is hosted at https://github.com/ashwinrayaprolu1984/distributed-workplace.




Topics:
devops, docker, docker compose, vagrant, chef, linux

Published at DZone with permission of Ashwin Rayaprolu, DZone MVB. See the original article here.

Opinions expressed by DZone contributors are their own.
