A CI/CD Implementation for the Cloud Age

By Matt Butcher · Oct. 23, 2014

Drone, Packer, Ansible, Docker... we associate a litany of names with continuous integration and continuous deployment. But when it comes to building a toolchain that seamlessly transitions our applications from a developer's editor to a running server, we often have to rely on our wits.

My team has invested heavily in building an awesome CI/CD system based entirely on top-shelf open source tools. Here's what our solution looks like.

In the rest of this post, I'll explain how the model above fits our needs, and (at a conceptual level) how it works.

Our Needs

For our cloud services, we needed a general system that could take a Git commit and move it through the process of compiling, testing, packaging, and releasing. Following a more traditional model, we deploy continuously to a dev server, but our production releases are carefully timed with the other components of our total system (e.g., we update the cloud portion a few days before the mobile apps go live).

Our development and production environments dictated some key facets of our deployment methodology:

  • We store our code in Bitbucket.
  • Our production servers are all hosted EC2 instances in AWS (with load balancers and autoscaling).
  • Because our target release platform is Linux, but many developers work on OS X, it is imperative for us that we run pre-release integration tests in an environment that closely resembles production.
  • Releases should be zero-downtime, and we should be able to roll back to previous versions at any time.

From these initial requirements, we began building a deployment environment.

A Few False Starts

We tried out a few methods that did not work well for us. Here are the most notable ones:

  • Elastic Beanstalk: Amazon's Elastic Beanstalk service is really cool, and we have used it for rapidly prototyping tools as well as launching internal services. But the finer points of deployment made EB a poor fit for our production services.
  • Jenkins: We dove deeply into Jenkins, but it was just too cumbersome for our needs. We needed better resource isolation, simpler configuration of basic tasks (like Debian package installation), and a better place to store our valuable configuration data than in text areas on a web form.
  • Fabric: Ultimately, we did decide to use Python's Fabric for some of the steps in our process. But our initial attempt to build an entire CI/CD tool from Fabric scripts just didn't work.

Our Solution

As you can see from the diagram above, we finally decided on a system that used the following components:

  • Bitbucket
  • Drone
  • Docker
  • Packer
  • Ansible
  • Fabric
  • Boto (a Python AWS library)
  • Goose (a database migration manager)

At first, the list looks daunting, but most of these we just use as they are. For any given project, there's a .drone.yml file to edit, some Packer/Ansible work, and possibly a fabfile (Fabric's equivalent of a Makefile) to tie it all together.

We broke things down like this:

  • Each project has its own .drone.yml and fabfile. The Drone YAML file tells Drone what to do with the project, and the Fabric fabfile serves the dual purpose of providing local builds and telling Drone how to execute a remote deployment (a minimal .drone.yml sketch follows this list).
  • Projects that use a relational database also provide their own Goose migration scripts (which are just SQL DDL files).
  • A central DevOps codebase handles all of the rest of the code. Notably, our Ansible playbooks and Packer configuration go there.
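
To make the per-project configuration concrete, here is a minimal sketch of a .drone.yml in the 0.2-era format the open-source Drone used at the time, assuming (purely for illustration) a Go project. The base image, the make deb target, and the fab task are hypothetical placeholders, not a real project's settings:

    # Illustrative .drone.yml sketch; image, targets, and task names
    # are hypothetical placeholders.
    image: bradrydzewski/go:1.2
    services:
      - postgres            # helper container for integration tests
    script:
      - goose up            # apply SQL migrations to the helper database
      - go build ./...
      - go test ./...
      - make deb            # hypothetical target that builds the .deb package
      - fab deploy          # hypothetical task that kicks off Packer and Boto

And since Goose migration scripts are just SQL DDL files with Up/Down annotations, a trivial one might look like this (the table is, of course, illustrative):

    -- +goose Up
    CREATE TABLE accounts (
        id    SERIAL PRIMARY KEY,
        email TEXT NOT NULL
    );

    -- +goose Down
    DROP TABLE accounts;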

The big diagram above shows how code progresses through the system.

  1. First, a developer pushes to the Bitbucket Git repository.
  2. Drone listens to each repo for commits. When it receives a commit, it creates a new Docker container (according to our .drone.yml config) and fires off the build steps defined in .drone.yml.
  3. We often have Drone also create helper containers, like a PostgreSQL database, so that we can run real integration tests, including Goose database migrations and real tests of transactions.
  4. Once tests have passed, Drone builds Debian packages (.deb files), versions them, and deploys them to our APT repository. While it was a pain to set up the first time, we love having every version of our code packaged and easily installable on any Ubuntu Linux system.
  5. The last thing Drone does with this special Docker container is kick off a Packer build. Packer's job is to create an EC2 AMI (machine image) totally pre-configured with our application.
  6. Packer spins up a builder running a fresh copy of Ubuntu Linux.
  7. Packer then runs its built-in Ansible support to provision our environment. It installs all of the security updates, creates some configuration files for us, and tunes the environment to handle our load. Finally, Ansible uses apt-get to install the project's Debian package (a sketch of such a Packer template follows this list).
  8. Once the machine image has been created, we have a simple Boto script that creates several EC2 instances based on our shiny new AMI (sketched below). Once these are up and running and passing health checks, the Boto script does three things:
    • Adds the new instances to the ELB load balancer
    • Configures autoscaling to use our new AMI
    • Once all health checks are passing, removes the older instances and keeps just the new ones
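
For steps 5 through 7, a Packer template along these lines might look roughly like the following. This is an illustrative sketch rather than our actual template: the source AMI, instance type, AMI name, and playbook path are hypothetical placeholders. It uses Packer's amazon-ebs builder and the ansible-local provisioner, which runs the playbook on the builder instance itself:

    {
      "builders": [{
        "type": "amazon-ebs",
        "region": "us-east-1",
        "source_ami": "ami-xxxxxxxx",
        "instance_type": "m3.medium",
        "ssh_username": "ubuntu",
        "ami_name": "myapp-{{timestamp}}"
      }],
      "provisioners": [{
        "type": "ansible-local",
        "playbook_file": "playbooks/myapp.yml"
      }]
    }

Packer boots a throwaway builder from the source Ubuntu image, lets Ansible provision it (security updates, config files, apt-get install of the package), and snapshots the result as a new, fully configured AMI.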

By that point, we have a running dev instance.
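
Concretely, the rollout in step 8 might look roughly like this with Boto 2 (the library's major version at the time). The region, AMI ID, ELB name, autoscaling group name, and instance counts are hypothetical placeholders, and health-check polling is omitted for brevity:

    # Illustrative Boto 2 rollout sketch; names and IDs are hypothetical.
    import time

    import boto.ec2
    import boto.ec2.elb
    import boto.ec2.autoscale
    from boto.ec2.autoscale import LaunchConfiguration

    REGION = 'us-east-1'
    AMI_ID = 'ami-xxxxxxxx'      # the AMI Packer just produced
    LB_NAME = 'myapp-elb'
    ASG_NAME = 'myapp-asg'

    ec2 = boto.ec2.connect_to_region(REGION)
    elb = boto.ec2.elb.connect_to_region(REGION)
    asg = boto.ec2.autoscale.connect_to_region(REGION)

    # Launch replacement instances from the freshly baked AMI.
    reservation = ec2.run_instances(AMI_ID, min_count=2, max_count=2,
                                    instance_type='m3.medium')
    new_ids = [i.id for i in reservation.instances]
    while any(i.update() != 'running' for i in reservation.instances):
        time.sleep(10)

    # 1. Add the new instances to the ELB load balancer.
    elb.register_instances(LB_NAME, new_ids)

    # 2. Point autoscaling at the new AMI via a fresh launch configuration.
    lc = LaunchConfiguration(name='myapp-lc-%d' % int(time.time()),
                             image_id=AMI_ID, instance_type='m3.medium')
    asg.create_launch_configuration(lc)
    group = asg.get_all_groups(names=[ASG_NAME])[0]
    group.launch_config_name = lc.name
    group.update()

    # 3. Once all ELB health checks pass, deregister and terminate the
    #    old instances (polling and old-instance discovery omitted).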

Production Deployment

So how do we get this stuff to production? Actually, it's easy: we just run the last three steps above, but against our production environment. After all, that process is exercised each and every time we do a git push, so we have a high degree of confidence that if it works for dev, it will work for prod. We have a straightforward Fabric script for launching these deployments; a sketch of what such a fabfile might look like follows.
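
For illustration, a minimal Fabric 1.x fabfile along these lines might look like the following. The task names and the rollout helper script are hypothetical placeholders, not our actual code:

    # Illustrative fabfile sketch (Fabric 1.x); task names and the
    # rollout helper are hypothetical placeholders.
    from fabric.api import local, task

    @task
    def deploy(env='dev'):
        """Bake an AMI with Packer, then roll it out with the Boto script."""
        local('packer build -var env=%s packer/template.json' % env)
        local('python rollout.py --env %s' % env)

    @task
    def deploy_prod():
        """Run the final rollout steps against production."""
        deploy(env='prod')

Running fab deploy_prod then repeats the bake-and-roll-out portion of the pipeline against the production environment.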


Published at DZone with permission of Matt Butcher, DZone MVB.

Opinions expressed by DZone contributors are their own.
