A CI/CD Implementation for the Cloud Age
Drone, Packer, Ansible, Docker... we associate a litany of names with continuous integration and continuous deployment. But when it comes to building a toolchain that seamlessly transitions our applications from a developer's editor to a running server, we often have to rely on our wits.
My team has invested heavily in building an awesome CI/CD system based entirely on top-shelf open source tools. Here's what our solution looks like.
In the rest of this post, I'll explain how the model above fits our needs and, at a conceptual level, how it works.
For our cloud services, we needed a general system that could take a Git commit and move it through the process of compiling, testing, packaging, and releasing. Following a more traditional model, we deploy continuously to a dev server, but our production releases are carefully timed with the other components of our total system (e.g., we update the cloud portion a few days before the mobile apps go live).
Our development and production environments dictated some key facets of our deployment methodology:
- We store our code in Bitbucket.
- Our production servers are all hosted EC2 instances in AWS (with load balancers and autoscaling).
- Because our target release platform is Linux, but many developers work on OS X, it is imperative that we run pre-release integration tests in an environment that closely resembles production.
- Releases should be zero-downtime, and we should be able to roll back to previous versions at any time.
With these initial requirements in hand, we began building a deployment environment.
A Few False Starts
We tried out a few methods that did not work out well for us. Here are a few of the notable ones:
- Elastic Beanstalk: Amazon's Elastic Beanstalk service is really cool, and we have used it for rapidly prototyping tools as well as launching internal services. But the finer points of deployment made EB a poor fit for our production services.
- Jenkins: We dove deeply into Jenkins, but it was just too cumbersome for our needs. We needed better resource isolation, simpler configuration of basic tasks (like Debian package installation), and a better place to store our valuable configuration data than in textareas on a web form.
- Fabric: Ultimately, we did decide to use Python Fabric for some of the steps in our process, but our initial attempt to build an entire CI/CD tool from Fabric scripts just didn't work.
As you can see from the diagram above, we finally decided on a system that used the following components:
- Drone (a Docker-based CI server)
- Docker (containers for builds and tests)
- Packer (a machine image builder)
- Ansible (configuration management)
- Fabric (a Python task runner)
- boto (a Python AWS library)
- goose (a database migration manager)
At first, the list looks daunting, but most of these we just used as they are. For any given project, there's a .drone.yml file to edit, some Packer/Ansible work, and possibly a fabfile (Fabric's equivalent of a Makefile) to tie it all together.
We broke things down like this:
- Each project has its own .drone.yml file and fabfile. The .drone.yml file tells Drone what to do with the project, and the Fabric fabfile serves the dual purpose of providing local builds and telling Drone how to execute a remote deployment.
- Projects that use an RDB also provide their own goose migration scripts (which are just SQL DDL files).
- A central DevOps codebase houses all of the rest of the code. Notably, our Ansible playbooks and Packer configuration go there.
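The fabfile's dual purpose (driving local builds and telling the CI server how to deploy) can be sketched without Fabric itself; in a real fabfile these would be Fabric @task functions run via fab. All project, package, and command names below are hypothetical, not taken from the article.

```python
import subprocess

def build_commands(project):
    """Local build steps: run the tests, then package a .deb."""
    return [
        "make test",
        "make deb PROJECT={0}".format(project),
    ]

def deploy_commands(package):
    """Remote deployment steps: install the freshly published package via apt."""
    return [
        "sudo apt-get update",
        "sudo apt-get install -y {0}".format(package),
    ]

def run_all(commands, dry_run=False):
    """Execute each shell command in order; stop on the first failure."""
    for cmd in commands:
        if dry_run:
            print(cmd)
        else:
            subprocess.check_call(cmd, shell=True)
```

A developer runs the build steps locally, while the CI server runs the same build steps followed by the deploy steps, so the two paths never drift apart.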
The big diagram above shows how code progresses through the system.
- First, a developer pushes to the Bitbucket Git repository.
- Drone listens to each repo for commits. When it receives one, it creates a new Docker container (according to our .drone.yml config) and fires off the build steps defined in that file.
- We often have Drone create helper containers as well, like a PostgreSQL database, so that we can run real integration tests, including goose database migrations and real tests of transactions.
- Once the tests have passed, Drone builds Debian packages (.deb files), versions them, and deploys them to our apt repository. While it was a pain to set up the first time, we love having every version of our code packaged and easily installable on any Ubuntu Linux system.
- The last thing Drone does with this special Docker container is kick off a Packer build. Packer's job is creating an EC2 AMI (machine image) totally pre-configured with our application.
- Packer spins up a builder instance running a fresh copy of Ubuntu Linux.
- Packer then runs Ansible (through Packer's built-in provisioner support) to provision our environment. Ansible installs all of the security updates, creates some configuration files for us, and tunes the environment to handle our load. Finally, Ansible uses apt-get to install the project's Debian package.
- Once the machine image is ready, we have a simple boto script that creates several EC2 instances based on our shiny new AMI. Once these are up and running and passing health checks, the boto script does three things:
- Add the new instances to the ELB load balancer.
- Configure autoscaling to use our new AMI.
- Once all health checks are passing, remove the older instances and keep just the new ones.
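The three rotation steps above can be sketched with the classic boto (v2) API. The region, load balancer name, autoscaling group name, instance type, and naming scheme are all hypothetical, and the boto imports are deferred into the function so the pure helper at the top loads even without boto installed.

```python
def instances_to_retire(registered_ids, new_ids):
    """Pure helper: every instance currently on the ELB that is not new."""
    return sorted(set(registered_ids) - set(new_ids))

def rotate(new_instance_ids, new_ami_id, region="us-east-1"):
    # Imported lazily so instances_to_retire() works without boto installed.
    import boto.ec2.elb
    import boto.ec2.autoscale
    from boto.ec2.autoscale import LaunchConfiguration

    elb_conn = boto.ec2.elb.connect_to_region(region)
    lb = elb_conn.get_all_load_balancers(["my-service-elb"])[0]
    old_ids = instances_to_retire([i.id for i in lb.instances], new_instance_ids)

    # 1. Add the new instances to the ELB load balancer.
    lb.register_instances(new_instance_ids)

    # 2. Point autoscaling at the new AMI via a fresh launch configuration.
    asg_conn = boto.ec2.autoscale.connect_to_region(region)
    lc = LaunchConfiguration(name="my-service-" + new_ami_id,
                             image_id=new_ami_id,
                             instance_type="m3.medium")
    asg_conn.create_launch_configuration(lc)
    group = asg_conn.get_all_groups(names=["my-service-asg"])[0]
    group.launch_config_name = lc.name
    group.update()

    # 3. Once health checks pass, drop the old instances.
    if old_ids:
        lb.deregister_instances(old_ids)
```

In practice the script would poll the ELB's instance health between steps and only retire the old instances once every new one reports InService.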
By that point, we have a running dev instance.
So how do we get this stuff to production? Actually, it's easy: we just run the last three steps of the process above, but against our production environment. After all, that process is tested each and every time we do a git push, so we have a high degree of confidence that if it works for dev, it will work for prod. We have a straightforward Fabric script for launching these deployments.
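One way to make "the same steps, but for prod" concrete is to keep the per-environment differences in a small lookup that the deployment script resolves at launch time; the environment and resource names here are hypothetical.

```python
# Per-environment deployment targets (all names hypothetical).
ENVIRONMENTS = {
    "dev":  {"elb": "my-service-dev-elb",  "asg": "my-service-dev-asg"},
    "prod": {"elb": "my-service-prod-elb", "asg": "my-service-prod-asg"},
}

def deploy_target(environment):
    """Resolve the ELB and autoscaling group names for an environment."""
    try:
        return ENVIRONMENTS[environment]
    except KeyError:
        raise ValueError("unknown environment: {0}".format(environment))
```

Because only this table differs between dev and prod, the rotation code itself is exercised on every dev deploy before it ever touches production.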
Published at DZone with permission of Matt Butcher, DZone MVB.