Caylent's 12-Step Guide to Managing Docker Swarm Using Ansible

This tutorial walks you through an Ansible playbook establishing and managing a high-availability cluster in Docker Swarm with the help of Ansible.

By Julia Pearson · Updated Feb. 07, 18 · Tutorial

Want to standardize and automate your whole Docker Swarm management? Feeling inspired by our article Creating a High-Availability Docker Swarm on AWS and wishing to reduce the repetition every time you need a swarm? Well, we’ve got you covered with the help of Ansible. In the following article, we’ll lead you through an Ansible playbook that installs a fresh version of Docker, establishes a node as your Docker Swarm manager, and adds additional managers and workers to your swarm until you have a high-availability cluster. Furthermore, the process creates a default Docker network to enable the nodes to communicate properly.

Caylent's Ansible/Docker Swarm Playbook Guide

1. Setup Prerequisites

Here’s a link to the hosts file and playbook on GitHub, which you’ll need for Ansible to build out a Docker Swarm: Caylent/Ansible-Docker-Swarm

On top of this, for the sake of this article, you’ll need to have a few other things already in place:

  • A network of connected machines (i.e. a virtual private cloud (VPC))
  • At least 2 public subnets
  • 5 EC2 instances on AWS with an elastic load balancer (ELB)

Set these ingress rules on your EC2 security groups (one way to script them with the AWS CLI is sketched after the list):

  • HTTP port 80 from 0.0.0.0/0
  • SSH from 0.0.0.0/0 (for increased security, replace this with your own IP address)
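
If you manage the security group with the AWS CLI, the two rules can be added roughly as follows. This is only a sketch; sg-0123456789abcdef0 is a placeholder for your own security group ID.

$ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 80 --cidr 0.0.0.0/0   # HTTP from anywhere
$ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr 0.0.0.0/0   # SSH; narrow the CIDR to your own IP for better security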

Once your machines are configured correctly, we can begin. We’re using Ubuntu 14.04 LTS, though the process will work similarly on other Linux-based operating systems too.

2. Assign Nodes

Before diving into the rest of the tasks that will install Docker and start up the swarm, it’s necessary to detail the specifications of the nodes outlined on AWS. The hosts file below specifies the nodes needed to create managers and workers, along with the role each node will undertake.

Fill in the IP addresses accordingly to assign your nodes, replacing {{ manager->ip }} with your intended manager node IPs and {{ worker->ip }} with your intended worker ones. If you prefer to have 5 managers, continue the format shown for the first 3, and do the same for any additional workers. As we’ve mentioned before, it’s important to always create an odd number of managers in your swarm, as a ‘majority’ vote is needed between managers to elect the lead instance; this works in accordance with Docker’s Raft consensus algorithm.

[docker-manager-first]
manager1 ansible_host="{{manager1->ip}}"

[docker-managers]
manager2 ansible_host="{{manager2->ip}}"
manager3 ansible_host="{{manager3->ip}}"

[docker-workers]
worker1 ansible_host="{{worker1->ip}}"
worker2 ansible_host="{{worker2->ip}}"

[docker-api]
manager1
manager2
manager3

[docker-cloud]
manager1
manager2
manager3
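
As a concrete illustration, a filled-in inventory with hypothetical private IPs (substitute your own) might look like this; the docker-api and docker-cloud groups stay as plain lists of the manager hostnames:

[docker-manager-first]
manager1 ansible_host="10.0.1.11"

[docker-managers]
manager2 ansible_host="10.0.1.12"
manager3 ansible_host="10.0.2.13"

[docker-workers]
worker1 ansible_host="10.0.1.21"
worker2 ansible_host="10.0.2.22"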

3. Customize Variables

You will also need to customize the group variables in group_vars/all.yml to reflect your own SSH username and the path to your private key.

ansible_ssh_user: {{ssh-username}}
ansible_ssh_private_key_file: "{{~/path/to/your/ssh_private_key}}"
ansible_host_key_checking: false
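
For example, on stock Ubuntu AMIs the values might end up looking like the following; the key path here is just a placeholder for wherever your private key actually lives:

ansible_ssh_user: ubuntu
ansible_ssh_private_key_file: "~/.ssh/my-swarm-key.pem"
ansible_host_key_checking: false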

Now that you’ve supplied Ansible with the necessary information to access your nodes, you can run the playbook. Each set of commands will automatically loop until completed, meaning very little input is required on your part, with the exception of some copying and pasting.

To run the playbook, enter the following command in the root folder of the cloned repo:

$ ansible-playbook docker-ce.yaml -i hosts.ini
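
Before kicking off the full run, it can help to confirm that Ansible can actually reach every node in the inventory; a quick check with Ansible’s built-in ping module, run from the same directory, is:

$ ansible all -i hosts.ini -m ping

Each host should report "pong"; if any host fails here, fix the SSH details in hosts.ini and group_vars/all.yml before continuing.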

4. Install Ubuntu Prerequisites and Docker

The following code operates on all hosts, both managers and workers.

- hosts: all
  remote_user: root
  become: yes
  become_method: sudo
  tasks:

The following tasks add the official Docker repository and ensure that no previous installs of Docker remain on your nodes.

name: "add docker repository" apt_repository: repo='deb [arch=amd64] https://download.docker.com/linux/ubuntu {{ ansible_distribution_release }} stable' state=present when: ansible_distribution == "Ubuntu" name: "ensure old versions of docker are purged 1" apt: name=lxc-docker state=absent purge=yes when: ansible_distribution == "Ubuntu" name: "ensure old versions of docker are purged 2" apt: name=docker state=absent purge=yes when: ansible_distribution == "Ubuntu" name: "ensure old versions of docker are purged 3" apt: name=docker-engine state=absent purge=yes when: ansible_distribution == "Ubuntu" name: "ensure old versions of docker are purged 4" apt: name=docker.io state=absent purge=yes when: ansible_distribution == "Ubuntu"

The following tasks check the current kernel version and then install the dependencies required on Ubuntu 14.04.

name: "get kernel version" shell: uname -r register: kernel name: "install 14.04 pre-req 1" apt: name: linux-image-extra-{{ kernel.stdout }} state: present update_cache: yes install_recommends: yes when: ansible_distribution == "Ubuntu" retries: 3 delay: 20 name: "install 14.04 pre-req 2" apt: name=linux-image-extra-virtual state=present update_cache=yes install_recommends=yes when: ansible_distribution == "Ubuntu" retries: 3 delay: 20

The following task installs your preferred Docker version. Our example input is '17.06.2*'.

name: "install docker" apt: name=docker-ce=17.06.2* state=present update_cache=yes install_recommends=yes allow_unauthenticated=yes when: ansible_distribution == "Ubuntu" retries: 3 delay: 20

5. Create Docker Group

The following tasks create a docker group, add the ubuntu user to it, and ensure the Docker service is running, so that we don’t need to use sudo every time we run a Docker command.

name: "add docker group" group: name=docker state=present name: "add ubuntu to docker group" user: name=ubuntu groups=docker append=yes name: "restart docker service" service: name=docker state=started name: "get docker info" shell: docker info register: docker_info changed_when: false

6. Initiate the Swarm

The following tasks will run on the first manager as specified above in the hosts file.

- hosts: docker-manager-first
  remote_user: root
  become: yes
  become_method: sudo
  tasks:

The following tasks initiate a Docker Swarm and then save the manager and worker join tokens so we can add more hosts to the cluster.

name: "create primary swarm manager" shell: docker swarm init --advertise-addr {{ ansible_eth0['ipv4']['address'] }} when: "docker_info.stdout.find('Swarm: inactive') != -1" name: "get docker swarm manager token" shell: docker swarm join-token -q manager register: manager_token name: "get docker swarm worker token" shell: docker swarm join-token -q worker register: worker_token

7. Add Managers to the Swarm

The following tasks run on all nodes designated as ‘docker-managers’ in the hosts file, adding each to the swarm as managers.

- hosts: docker-managers
  remote_user: root
  become: yes
  become_method: sudo
  tasks:
    - name: "join as a manager"
      shell: "docker swarm join --token {{ hostvars['manager1']['manager_token']['stdout'] }} {{ hostvars['manager1']['ansible_eth0']['ipv4']['address'] }}:2377"
      when: docker_info.stdout.find("Swarm{{':'}} inactive") != -1
      retries: 3
      delay: 20

8. Add Workers to the Swarm

The following tasks add all nodes designated as ‘docker-workers’ in the hosts file to your swarm as workers.

- hosts: docker-workers
  remote_user: root
  become: yes
  become_method: sudo
  tasks:
    - name: "join as a worker"
      shell: "docker swarm join --token {{ hostvars['manager1']['worker_token']['stdout'] }} {{ hostvars['manager1']['ansible_eth0']['ipv4']['address'] }}:2377"
      when: "docker_info.stdout.find('Swarm: inactive') != -1"
      retries: 3
      delay: 20
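
With every node joined, a quick sanity check (not something the playbook does for you) is to SSH to any manager and list the nodes:

$ docker node ls

You should see all five hosts with a STATUS of Ready, one manager marked Leader and the other two marked Reachable under MANAGER STATUS.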

9. Expose Docker API

The following tasks run on all manager nodes that were designated in the hosts file under ‘docker-api’. They confirm that the Docker API is running and exposed on these nodes; if not, the tasks stop Docker, expose the API, and restart the service.

- hosts: docker-api
  remote_user: root
  become: yes
  become_method: sudo
  tasks:
    - name: "confirm service exists"
      stat: path=/etc/init.d/docker
      register: service_wrapper

    - name: "check whether api already exposed"
      command: "grep 'DOCKER_OPTS=\"-D -H tcp://0.0.0.0:4243 -H unix:///var/run/docker.sock\"' /etc/default/docker"
      register: check_has_api
      always_run: True
      ignore_errors: True
      changed_when: False

    - name: "stop docker"
      service: name=docker state=stopped
      when: service_wrapper.stat.exists and check_has_api.stdout == ""
      register: service_stopped

    - name: "expose docker api"
      lineinfile: "dest=/etc/default/docker state=present regexp='#DOCKER_OPTS=' line='DOCKER_OPTS=\"-H tcp://0.0.0.0:4243 -H unix:///var/run/docker.sock\"'"
      when: service_stopped and check_has_api.stdout == ""

    - name: "restart docker service"
      service: name=docker state=started
      when: service_wrapper.stat.exists and check_has_api.stdout == ""
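
After this play completes, you can confirm the API answers on port 4243 from a machine your security group allows; substitute one of your manager IPs for the placeholder:

$ curl http://{{manager1->ip}}:4243/version

A JSON response describing the Docker engine version indicates the API is exposed as intended.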

10. Create Daemon File

The following tasks run on all the manager nodes designated in the hosts file under ‘docker-cloud’ and confirm that a daemon file exists on each. If not, they stop Docker, create the daemon file, and restart the service.

- hosts: docker-cloud
  remote_user: root
  become: yes
  become_method: sudo
  tasks:
    - name: "confirm service exists"
      stat: path=/etc/init.d/docker
      register: service_wrapper

    - name: "check for daemon file"
      stat: path=/etc/docker/daemon.json
      register: daemon_file

    - name: "stop docker"
      service: name=docker state=stopped
      when: service_wrapper.stat.exists and not daemon_file.stat.exists
      register: service_stopped

    - name: "create daemon file"
      template: src=templates/daemon.j2 dest=/etc/docker/daemon.json
      when: not daemon_file.stat.exists

    - name: "restart docker service"
      service: name=docker state=started
      when: service_wrapper.stat.exists

11. List Networks

The following task lists the networks on your Docker manager. This output will determine if the default network still needs to be created.

- hosts: docker-manager-first
  remote_user: root
  become: yes
  become_method: sudo
  tasks:
    - name: list networks
      shell: docker network ls
      register: docker_networks

12. Expand Networks

After checking whether the default network exists, Ansible works through the list of networks provided and creates each one with the specified name, subnet, and gateway. This overlay network allows containers scheduled across the swarm to communicate properly.

- name: create network when not there
  shell: docker network create --driver overlay --subnet {{ item.subnet }} --gateway {{ item.gateway }} {{ item.name }}
  with_items:
    - { name: 'caylent-default', subnet: '17.0.0.0/16', gateway: '17.0.0.1' }
  when: docker_networks.stdout.find( item.name ) == -1
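
Once the overlay network exists, services you create can be attached to it so their containers can talk to each other across nodes. An illustrative example (not part of the playbook) that runs three nginx replicas on the network and publishes port 80 through the swarm’s routing mesh:

$ docker service create --name web --replicas 3 --network caylent-default -p 80:80 nginx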

Congratulations, you’re all finished and the entire process is now automated! As always, we'd love your feedback and suggestions for future articles.


Published at DZone with permission of Julia Pearson. See the original article here.

