Setting Up Ansible/SSH for Cassandra Clusters (Part 1)
Want to bring DevOps to the database? Consider Ansible and its variety of playbooks. This guide walks you through getting Ansible set up for a Cassandra Cluster.
Ansible and ssh are essential DevOps/DBA tools for common DBA/DevOps tasks like managing backups, rolling upgrades to the Cassandra cluster in AWS/EC2, and much more. An excellent aspect of Ansible is that it uses ssh, so you do not have to install an agent to use Ansible.
This article series centers on DevOps/DBA tasks with Cassandra. However, the use of Ansible for DevOps/DBA transcends its use with the Cassandra database, so this article is good for any DevOps specialists, DBAs, or developers that need to manage groups of instances, boxes, or hosts, whether they be on-prem bare-metal, dev boxes, or in the AWS cloud. You don’t need to set up Cassandra to get use from this article.
In this part of the Cassandra cluster tutorial, we will set up Ansible for our Cassandra cluster to automate common DevOps/DBA tasks. As part of this setup, we will create an ssh key and then set up our instances with this key so we can use ssh, scp, and ansible. For now, we will set up a bastion server with Vagrant (following tutorials do this with AWS CloudFormation). To add the keys to the Cassandra nodes, we will create and use an ansible playbook (ssh-addkey.yml) from Vagrant.
Cassandra Tutorials: Tutorial Series on DevOps/DBA Cassandra Database
The first article in this Cassandra cluster tutorial series was about setting up a Cassandra cluster with Vagrant (it also appeared on DZone with some additional content as Cassandra tutorial: DZone Setting up a Cassandra Cluster with Vagrant). The second article in this series was about setting up SSL for a Cassandra cluster using Vagrant (which also appeared with more content as Cassandra tutorial: DZone Setting up a Cassandra Cluster with SSL). You don’t need those tutorials to follow along, but they provide a lot of useful context. You can find the source for the first and second articles in our Cloudurable Cassandra Image for Docker, AWS, and Vagrant. In later articles, we will use Ansible to create more complicated playbooks, like doing a rolling Cassandra upgrade, and we will cover using Ansible/ssh with AWS EC2.
Source Code for Vagrant and Ansible to Create Cassandra Clusters
We continue to evolve the cassandra-image GitHub project. So that the code matches the listings in this article, we created a branch that captures the code as it was when this article was written (more or less): Cassandra Tutorial 3: Ansible Cassandra Vagrant source code.
Where Do You Go if You Have a Problem or Get Stuck?
We set up a Google group for this project and set of articles. If you just can’t get something to work or you are getting an error message, please report it here. Between the mailing list and the GitHub issues, we can support you through quite a few questions and challenges. You can also find new articles in this series by following Cloudurable at our LinkedIn page, Facebook page, Google plus, or Twitter.
Let’s get to it. Let’s start by creating an ssh identity key for our DevOps/DBA test Cassandra cluster.
Create a Key for Test Cluster
To use Ansible, we will need to set up ssh keys, as Ansible uses ssh instead of running an agent on each server like Chef and Puppet do.
The ssh-keygen utility manages authentication keys for ssh (secure shell). It generates RSA or DSA keys for the SSH protocol, and you specify the key type with the -t option.
Setup key script bin/setupkeys-cassandra-security.sh:
CLUSTER_NAME=test
...
ssh-keygen -t rsa -N "" -C "setup for cloud" \
    -f "$PWD/resources/server/certs/${CLUSTER_NAME}_rsa"
chmod 400 "$PWD/resources/server/certs/"*
cp "$PWD/resources/server/certs/"* ~/.ssh
...
Let’s break that down.
We use ssh-keygen to create a private key that we will use to log into our boxes.
In this tutorial, those boxes are Vagrant boxes (VirtualBox), but in the next article, we will use the same key for managing EC2 instances.
Use ssh-keygen to create a private key for ssh:
ssh-keygen -t rsa -N "" -C "setup for cloud" \
    -f "$PWD/resources/server/certs/${CLUSTER_NAME}_rsa"
Then we restrict access to the key file. Otherwise, Ansible, ssh, and scp (secure copy) will not let us use it.
Change the access of the ssh key:
chmod 400 "$PWD/resources/server/certs/"*
The above chmod 400 changes the key files so that only the owner can read them. This makes sense: a private key should be readable only by its owner, and that is exactly what mode 400 enforces.
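If you want to double-check that the permission change took effect, list the key files; each one should show -r-------- (read-only for the owner):
ls -l "$PWD/resources/server/certs/"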
Copy the keys to the area where they will be picked up by provisioning:
cp "$PWD/resources/server/certs/"* ~/.ssh
The above just puts the files where our provisioners (Packer and Vagrant) can pick them up and deploy them with the image.
Locally, we are using Vagrant to launch a cluster to do some tests on our laptop.
We also use Packer and AWS command line tools to create EC2 AMIs (and Docker images), but we don’t cover AWS in this article (it is in the very next tutorial, which is right after the second part of this one).
Create a Bastion Server
Eventually, we would like to use a bastion server that is on a public subnet to send commands to our Cassandra database nodes that are in a private subnet in EC2 (we cover this in a later tutorial with Cassandra and AWS). For local testing, we set up a bastion server, which is well explained in this guide to Vagrant and Ansible.
We used Learning Ansible with Vagrant (Part 2/4) as a guide for some of the setup performed in this tutorial. SysAdminCasts is a reliable source of Ansible and Vagrant knowledge for DevOps/DBA. Their mgmt node corresponds to what we call a bastion server in this tutorial. A notable difference is that we are using CentOS 7, not Ubuntu, and we made some slight syntax updates to some of the Ansible commands (we use a later version of Ansible).
We added a bastion server to our Vagrant config as follows:
Vagrantfile to set up the bastion host for our Cassandra Cluster:
# Define Bastion Node
config.vm.define "bastion" do |node|
  node.vm.network "private_network", ip: "192.168.50.20"
  node.vm.provider "virtualbox" do |vb|
    vb.memory = "256"
    vb.cpus = 1
  end
  node.vm.provision "shell", inline: <<-SHELL
    yum install -y epel-release
    yum update -y
    yum install -y ansible
    mkdir /home/vagrant/resources
    cp -r /vagrant/resources/* /home/vagrant/resources/
    mkdir -p ~/resources
    cp -r /vagrant/resources/* ~/resources/
    mkdir -p /home/vagrant/.ssh/
    cp /vagrant/resources/server/certs/* /home/vagrant/.ssh/
    sudo /vagrant/scripts/002-hosts.sh
    ssh-keyscan node0 node1 node2 >> /home/vagrant/.ssh/known_hosts
    mkdir ~/playbooks
    cp -r /vagrant/playbooks/* ~/playbooks/
    sudo cp /vagrant/resources/home/inventory.ini /etc/ansible/hosts
    chown -R vagrant:vagrant /home/vagrant
  SHELL
end
The bastion server, which could sit on a public subnet of an AWS VPC, uses ssh-keyscan to add the nodes that we set up in the hosts file to /home/vagrant/.ssh/known_hosts (we cover this very example later with AWS/Cassandra).
Running ssh-keyscan to add Cassandra servers to known_hosts for ssh:
ssh-keyscan node0 node1 node2 >> /home/vagrant/.ssh/known_hosts
This step gets around the problem of having to verify each node by hand and seeing the prompt The authenticity of host ... can't be established. ... Are you sure you want to continue connecting (yes/no)? when we try to run the ansible command line tools.
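As an aside, Ansible can also be told to skip host key checking altogether in its configuration. We prefer ssh-keyscan here because it records each host’s real key, but for completeness, the setting (shown only for illustration) looks like this in ansible.cfg:
[defaults]
host_key_checking = False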
Modify the Vagrant Provision Script for Cassandra Nodes
Since we use provision files to create different types of images (Docker, EC2 AMI, Vagrant/VirtualBox), we use a provisioning script that is unique to Vagrant. In this Vagrant provision script, we call another provision script to set up a hosts file.
000-vagrant-provision.sh:
mkdir -p /home/vagrant/.ssh/
cp /vagrant/resources/server/certs/* /home/vagrant/.ssh/
...
scripts/002-hosts.sh
echo RUNNING TUNE OS
Setting Up sshd on our Cassandra Database Nodes
The provision script 002-hosts.sh configures /etc/ssh/sshd_config to allow public key authentication. Then it restarts sshd, the ssh daemon, so the change takes effect. (The other provisioning scripts it invokes were covered in the first two articles.)
Let’s look at the 002-hosts.sh provision script. You can see some remnants from the last article, where we set up cqlsh, and then it gets down to business setting up sshd (the secure shell daemon).
scripts/002-hosts.sh — sets up sshd and hosts file so we can manage Cassandra cluster with ansible:
#!/bin/bash
set -e
## Copy the cqlshrc file that controls cqlsh connections to ~/.cassandra/cqlshrc.
mkdir ~/.cassandra
cp ~/resources/home/.cassandra/cqlshrc ~/.cassandra/cqlshrc
## Allow public key login via ssh.
sed -i 's/#PubkeyAuthentication no/PubkeyAuthentication yes/g' /etc/ssh/sshd_config
## Restart the sshd daemon via systemctl so the sshd_config change takes effect.
systemctl restart sshd
# Create host file so it is easier to ssh from box to box
cat >> /etc/hosts <<EOL
192.168.50.20 bastion
192.168.50.4 node0
192.168.50.5 node1
192.168.50.6 node2
192.168.50.7 node3
192.168.50.8 node4
192.168.50.9 node5
EOL
This setup is somewhat particular to our Vagrant setup at this point. To simplify access to the servers that host the different Cassandra Database nodes, 002-hosts.sh appends entries to the /etc/hosts file on the bastion server.
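A quick manual spot check from the bastion server (a hypothetical check, not part of the provisioning scripts) is to ssh to one of the nodes by name with the test cluster key:
ssh -i ~/.ssh/test_rsa vagrant@node0 hostname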
With our keys added to the sshd config and our hosts configured (and our inventory.ini file shipped), we can start using ansible from our bastion server. Which reminds me: we have not yet talked about the ansible inventory.ini file (see the Ansible inventory documentation).
Ansible Config on the Bastion for the Cassandra Database Cluster
Ansible has an ansible.cfg file and an inventory.ini file. When you run ansible, it checks for ansible.cfg in the current working directory, then in your home directory, and then falls back to the master config file (/etc/ansible/ansible.cfg). We created an inventory.ini file that lives under ~/github/cassandra-image/resources/home, which gets mapped to /vagrant/resources/home on the virtual machines (bastion, node0, node1, and node2). A provision script moves the inventory.ini file to its proper location (sudo cp /vagrant/resources/home/inventory.ini /etc/ansible/hosts).
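We rely on Ansible’s defaults in this tutorial, but for illustration, a minimal ansible.cfg on the bastion that points at the same inventory and key might look like this (these settings are an assumption based on our setup, not a file shipped with the project):
[defaults]
inventory = /etc/ansible/hosts
remote_user = vagrant
private_key_file = ~/.ssh/test_rsa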
The inventory.ini file lists the servers that you want to manage with Ansible. A couple of things are going on here: we have a bastion group for our bastion server, and we have a nodes group, which is made up of node0, node1, and node2.
Let’s see what the inventory.ini file looks like.
Inventory.ini that is copied to Ansible master list on Bastion:
[bastion]
bastion
[nodes]
node0
node1
node2
Once we provision our cluster, we can log into bastion and start executing ansible commands.
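For example, after vagrant ssh bastion, a quick connectivity check against the nodes group might look like this (we dig into ansible ping in Part 2):
ansible nodes -u vagrant --private-key ~/.ssh/test_rsa -m ping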
Installing Cert Keys to Test on All Nodes Using an Ansible Playbook
To make this happen, we had to tell the other servers about our public key.
We did this with an Ansible playbook as follows:
Vagrantfile snippet that invokes the Ansible playbook on each new Cassandra Database node:
Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"

  # Define Cassandra Nodes
  (0..numCassandraNodes-1).each do |i|
    port_number = i + 4
    ip_address = "192.168.50.#{port_number}"
    seed_addresses = "192.168.50.4,192.168.50.5,192.168.50.6"

    config.vm.define "node#{i}" do |node|
      node.vm.network "private_network", ip: ip_address
      node.vm.provider "virtualbox" do |vb|
        vb.memory = "2048"
        vb.cpus = 4
      end
      ...
      node.vm.provision "ansible" do |ansible|
        ansible.playbook = "playbooks/ssh-addkey.yml"
      end
    end
  end
end
Notice the line node.vm.provision "ansible" do |ansible| as well as ansible.playbook = "playbooks/ssh-addkey.yml".
If you are new to Vagrant and the above is not making sense, please watch this Vagrant Crash Course. It is by the same folks who created the Ansible series we mentioned earlier.
Ansible playbooks are configuration scripts for your infrastructure. You can perform tons of operations that are essential for DevOps with them (yum-installing software, Cassandra-specific tasks, and so on); a small example follows the quote below. We will use Ansible playbooks throughout the Cassandra Cluster tutorial series.
Playbooks are Ansible’s configuration, deployment, and orchestration language. They can describe a policy you want your remote systems to enforce or a set of steps in a general IT process. –Ansible Playbook documentation.
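As a tiny illustration (this is not one of the tutorial’s playbooks, just a sketch), a playbook task that yum-installs a package looks like this:
---
- hosts: nodes
  become: true
  tasks:
    - name: install ntp via yum
      yum: name=ntp state=present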
Here is the Ansible playbook that adds the RSA public key to the Cassandra nodes.
Ansible playbook ssh-addkey.yml to add test_rsa.pub to all Cassandra Database node servers:
---
- hosts: all
  become: true
  gather_facts: no
  remote_user: vagrant
  tasks:
    - name: install ssh key
      authorized_key: user=vagrant
                      key="{{ lookup('file', '../resources/server/certs/test_rsa.pub') }}"
                      state=present
The trick here is that Vagrant supports running Ansible playbooks as well.
The Vagrant Ansible provisioner allows you to provision the guest using Ansible playbooks by executing ansible-playbook from the Vagrant host. –Vagrant Ansible documentation
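Under the hood, Vagrant invokes ansible-playbook on the host machine against an inventory file that Vagrant generates itself. The effect is roughly equivalent to the following command (the flags and generated inventory path shown here are illustrative):
ansible-playbook -i .vagrant/provisioners/ansible/inventory \
  --user=vagrant --limit=node0 playbooks/ssh-addkey.yml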
Part 1: Conclusion
We set up Ansible for our Cassandra cluster to automate common DevOps/DBA tasks. We created an ssh key and then set up our instances with this key so we could use ssh, scp, and ansible. We set up a bastion server with Vagrant. We used an ansible playbook (ssh-addkey.yml) from Vagrant to install our test cluster key on each server.
Next Up in Part 2
In Part 2, we run ansible ping against a single server and then against many servers (nodes). We set up our local dev machine with ansible.cfg and inventory.ini so we can run ansible commands directly against node0 and the nodes group.
We run nodetool describecluster against all of the nodes from our dev machine. Lastly, we create a very simple playbook that can run nodetool describecluster. Ansible is a very powerful tool that can help you manage a cluster of Cassandra instances. In later articles, we will use Ansible to create more complex playbooks, like backing up Cassandra nodes to S3.
In the Next Cassandra Cluster Tutorial, We Cover AWS Cassandra
The next tutorial picks up where this one leaves off. It covers AWS Cassandra, cloud DevOps, and using Packer, Ansible/SSH, and the AWS command line tools to create and manage EC2 Cassandra instances in AWS. It is useful for developers and DevOps/DBA staff who want to create AWS AMI images and manage those EC2 instances with Ansible.
The AWS Cassandra tutorial covers:
- Creating images (EC2 AMIs) with Packer
- Using Packer from Ansible to provision an image (AWS AMI)
- Installing systemd services that depend on other services and will auto-restart on failure
- AWS command line tools to launch an EC2 instance
- Setting up ansible to manage our EC2 instance (ansible uses ssh)
- Setting up ssh-agent and adding ssh identities (ssh-add)
- Setting up ssh using ~/.ssh/config so we don’t have to pass credentials around
- Using ansible dynamic inventory with EC2
- AWS command line tools to manage DNS entries with Route 53
If you are doing DevOps with AWS, Ansible dynamic inventory management with EC2 is awesome. Also, mastering ssh config is a must, and you should master the AWS command line tools to automate common tasks. The next article explores all of those topics.
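As a preview of the ssh config piece, an entry in ~/.ssh/config (the host name, IP, and key path here mirror this tutorial’s local setup and are illustrative) saves you from passing the user and key on every command:
Host bastion
    HostName 192.168.50.20
    User vagrant
    IdentityFile ~/.ssh/test_rsa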
Helpful Links
- Cool lists of things you can do with Cassandra and Ansible
- Learning Ansible with Vagrant Training Video
- Source code for this article
- The first tutorial in this Cassandra tutorial series was about setting up a Cassandra cluster with Vagrant
- First tutorial on DZone with some additional content DZone Cassandra Cluster Tutorial 1: DevOps/DBA Setting up a Cassandra Database Cluster with Vagrant
- The second tutorial in this series: setting up SSL for a Cassandra cluster using Vagrant
- Second tutorial on DZone with additional content: DZone Cassandra Cluster Tutorial 2: DevOps/DBA Setting up a Cassandra Database Cluster with SSL
- Github project: Cloudurable Cassandra Database Image for Docker, AWS and Vagrant
- Cassandra Course and Cassandra Consulting
- AWS Cassandra Database DevOps/DBA Support
About Cloudurable Cassandra Support
Cloudurable provides Cassandra support, Cassandra consulting, and Cassandra training, as well as Cassandra examples like AWS CloudFormation templates, Packer, and Ansible scripts for common Cassandra DBA and DevOps tasks. We also provide monitoring tools and images (AMI/Docker) to support Cassandra running in production in EC2. Our advanced Cassandra courses teach how to develop, support, and deploy Cassandra to production in AWS EC2, and they are geared towards DevOps engineers, architects, and DBAs.