For running ad-hoc commands across a small number of servers you really can’t beat Fabric. It requires nothing more than SSH on the servers, is generally a one-line install, and puts next to no syntactic fluff between you and the commands you want to run. It’s much more of a Swiss Army knife to Capistrano’s bread knife.
I’ve found myself doing more and more EC2 work of late, and I’ve finally gotten around to making my life easier when using Fabric with Amazon instances. The result of a bit of hacking is Cloth (also available on PyPI). It contains some utility functions and a few handy tasks for loading host details from the EC2 API and using them in your Fabric tasks. No more static lists of host names that constantly need updating in your fabfile.
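The underlying idea is simple: ask the EC2 API for your running instances and turn them into host strings Fabric can connect to. Here's a minimal sketch of that mapping step; the helper name, the `user` default, and the stand-in instance list are mine for illustration and not Cloth's actual internals (in practice the data would come from the EC2 API via boto):

```python
# Sketch: turn EC2 instance metadata into Fabric-style host strings.
# In Cloth the instance data comes from the EC2 API; here a static
# list of (name, public_ip) pairs stands in for the API response.

def instances_to_hosts(instances, user="ubuntu"):
    """Build user@host strings from (name, public_ip) pairs,
    skipping instances without a public address."""
    return ["%s@%s" % (user, ip) for name, ip in instances if ip]

# Stand-in for what the EC2 API would return:
instances = [
    ("instance-name-1", "203.0.113.10"),
    ("instance-name-2", "203.0.113.11"),
    ("instance-name-3", None),  # stopped instance, no public IP
]

hosts = instances_to_hosts(instances)
# hosts is now ["ubuntu@203.0.113.10", "ubuntu@203.0.113.11"],
# ready to be assigned to Fabric's env.hosts
```

The nice part is that once the host list is built from live API data, stale hostnames simply disappear from your fabfile.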
Specifically, with a fabfile that looks like:
#!/usr/bin/env python
from cloth.tasks import *
You can run
fab all list
and get something like:
instance-name-1 (xx.xxx.xxx.xx, xxx.xx.xx.xx)
instance-name-2 (xx.xxx.xxx.xx, xxx.xx.xx.xx)
instance-name-3 (xx.xxx.xxx.xx, xxx.xx.xx.xx)
instance-name-4 (xx.xxx.xxx.xx, xxx.xx.xx.xx)
instance-name-5 (xx.xxx.xxx.xx, xxx.xx.xx.xx)
instance-name-6 (xx.xxx.xxx.xx, xxx.xx.xx.xx)
instance-name-7 (xx.xxx.xxx.xx, xxx.xx.xx.xx)
instance-name-8 (xx.xxx.xxx.xx, xxx.xx.xx.xx)
...
And then you could run
fab -P all uptime
And, with -P running the task in parallel across hosts, you’d happily get the uptime and load averages for all your EC2 instances at once.
A few more tricks are documented in the GitHub README, including filtering the instance list by a regex and some convention-based mapping to Fabric roles. I’ll hopefully add a few more features as I need them and generally tidy a few things up, but I’m pretty happy with it so far.