Last year, team Electric Cloud participated in the first annual DockerCon Hackathon and won as one of the top-three submissions. This year, Nikhil and I returned to a bigger and badder hackathon – evidence of Docker’s massive growth.
How it Works
40+ teams of 1-10 hackers spent 24 hours working on a project from scratch. Categories for submission:
- Build, Ship and Run Cool Apps with Docker
- Management & Operations: Logging, Monitoring, UI / Kitematic, Developer tools, Deployment, CI / CD, Stats, etc
- Orchestration: Composition, Scheduling, Clustering, Service Discovery, High Availability, Load Balancing, etc
- Security, Compliance, & Governance: Authorization, Provenance, Distribution, etc
- Resources: Networking, Storage API, etc
Everyone submitted a 2-minute video, and 10 teams were selected to present. Of those presenting, the judges selected the top 3 as winners.
Electric Cloud exists to help people deliver better software faster. We wanted to show how Docker fits in with other tools in the software delivery ecosystem. Being experts in our own software, we decided to use Electric Cloud products to tie everything together – accelerating end-to-end Continuous Delivery, using:
- ElectricFlow – an orchestration tool that acts as the single pane of glass from commit through production
- ElectricAccelerator – an acceleration tool that dramatically speeds up builds and tests by distributing them across a cluster of CPUs
Last year’s entry focused on the Build stage of a continuous delivery pipeline. This year, we focused on the Integration stage.
We built a deployment process that:
- Dynamically spins up a VM on either EC2 or OpenStack
- Runs Docker Bench for security tests
- Retrieves artifacts from Bintray and Docker Hub
- Stands up linked MySQL and Wildfly containers running the application
- Runs Selenium tests distributed across a cluster
- Pushes some statistics to a Dashing dashboard
- Automatically tears down the VM if the tests are successful
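The container step above boils down to two `docker run` commands issued in the right order. Here is a minimal sketch of how those commands could be assembled – the image tags, container names, ports, and WAR directory are illustrative assumptions, not the exact values from our ElectricFlow component processes:

```python
# Sketch: assemble the docker argv lists for the two application tiers.
# Image tags, names, ports, and paths are illustrative assumptions.

def container_commands(db_password="secret", war_host_dir="/opt/heatclinic"):
    """Return the docker run commands for the DB and app tiers, in order."""
    mysql = [
        "docker", "run", "-d", "--name", "heatclinic-db",
        "-e", f"MYSQL_ROOT_PASSWORD={db_password}",
        "mysql:5.6",
    ]
    wildfly = [
        "docker", "run", "-d", "--name", "heatclinic-app",
        "--link", "heatclinic-db:db",  # legacy link: the app resolves the DB as "db"
        "-p", "8080:8080",
        "-v", f"{war_host_dir}:/opt/jboss/wildfly/standalone/deployments",
        "jboss/wildfly",
    ]
    return [mysql, wildfly]  # database first, so the link target already exists

if __name__ == "__main__":
    for cmd in container_commands():
        print(" ".join(cmd))
```

The ordering matters: `--link` fails unless the `heatclinic-db` container is already running, which is exactly why the deployment process stands the tiers up in sequence.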
The deployment process and the various technologies involved
A LOT to accomplish in 24 hours! But we were up for the task – and with a less-than-pretty version of this diagram chicken-scratched on a piece of paper, we got to work!
What We Built
We chose a sample web application called The Heat Clinic because it has a couple of moving parts (application server and database) making it a somewhat realistic example. We started out by building the Continuous Delivery pipeline.
The continuous delivery pipeline defined in ElectricFlow
For this hackathon, we focused on the Integration stage. Still, it was important to map out the whole pipeline first: to make the automation pieces reusable, you have to know how they’ll be reused. With that in mind, everything we built can be plugged into Production (or any other stage) with minimal effort.
The next step was modeling the application. The Heat Clinic application has two tiers, one for the web application and one for the database. Each of those tiers has a few different components (artifacts) – the Wildfly/MySQL containers from Docker Hub, the WAR file for the web application, configuration files, SQL initialization scripts, etc. We defined the tiers, the components, and the processes to deploy or undeploy each of those components.
The application model defined in ElectricFlow
Next, we defined the deployment process that coordinates everything. This process is closely aligned with the diagram shown earlier: spin up the dynamic environment, run the security tests, retrieve all the artifacts, stand up the containers (in the right order), run the Selenium tests, and tear down the environment if everything is successful.
The deployment process defined in ElectricFlow
The Selenium suite we put together took a long time to run, and we realized this is not uncommon for Selenium. So we sped up the Selenium test suite by using ElectricAccelerator. By distributing the 101 tests across just two 4-core VMs, Accelerator used its patented secret sauce to parallelize and run the tests on the individual cores, bringing the overall time down from >27 minutes to <4 minutes. That’s 7 times faster with just 2 machines! If we were to add more VMs to our cluster, we could bring that time down to <30 seconds. That’s a whopping 60 times faster!
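A quick back-of-the-envelope check shows why distributing the suite helps so much. Assuming (for simplicity) that the 101 tests take roughly equal time, the ideal makespan on eight cores lines up with the under-four-minutes result we saw:

```python
import math

# Back-of-the-envelope check of the distributed-test numbers.
# Assumes uniform test times; real tests vary, which is why a
# scheduler like Accelerator's matters in practice.
TESTS = 101
SERIAL_MIN = 27                       # observed serial suite time (>27 minutes)
PER_TEST_S = SERIAL_MIN * 60 / TESTS  # ~16 seconds per test on average

def ideal_minutes(cores):
    """Ideal makespan: the busiest core runs ceil(TESTS / cores) tests."""
    return math.ceil(TESTS / cores) * PER_TEST_S / 60

print(f"2 x 4-core VMs: {ideal_minutes(8):.1f} min")  # ~3.5 min, i.e. "<4 minutes"
```

This is only the ideal lower bound – scheduling overhead and uneven test durations eat into it, which is why the observed speedup was about 7x rather than a clean 8x.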
Visualizing how ElectricAccelerator distributed the Selenium tests across a cluster
Finally, we put a pretty face on our work by pushing some key stats to Dashing – typically displayed on a TV screen so everyone has an “at a glance” view of the health of the system.
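Dashing widgets accept updates over a simple HTTP API: you POST JSON containing the dashboard’s auth token plus the widget data. A minimal sketch of that push, where the host, widget id, and token are placeholders for whatever your Dashing instance uses:

```python
import json
from urllib import request

# Sketch of pushing a stat to a Dashing widget over its HTTP API.
# The URL, widget id, and auth token below are placeholders.
DASHING_URL = "http://localhost:3030/widgets/{widget}"
AUTH_TOKEN = "YOUR_AUTH_TOKEN"

def dashing_payload(auth_token, **stats):
    """Dashing expects the auth token alongside the widget data."""
    return dict(auth_token=auth_token, **stats)

def push_stat(widget, **stats):
    body = json.dumps(dashing_payload(AUTH_TOKEN, **stats)).encode()
    req = request.Request(
        DASHING_URL.format(widget=widget),
        data=body,
        headers={"Content-Type": "application/json"},
    )
    request.urlopen(req)  # Dashing broadcasts the update to open dashboards

# e.g. push_stat("selenium_time", current=4, suffix=" min")
```

Because the dashboard re-renders whenever a widget receives a POST, each stage of the deployment process can push its own stat as it completes.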
Dashing dashboard showing key statistics
While we did not win this time around, we did come out with a very cool story and a working set of integrations highlighting Docker in the context of Continuous Delivery. Here are the pain points we looked to address:
- You’re looking at Docker but need to tie it together with a bunch of existing tools
- You’re looking to increase your velocity by implementing Continuous Delivery & Continuous Testing
- You need to gather and surface critical stats for your applications
- You want to make sure you’re auditing for security at the earliest possible stage
- You want to run your long-running integration tests early and often
Check out the entire flow in this short 3-minute video we included in our submission:
We’re already looking forward to the DockerCon Hackathon next year. It will be interesting to see what the rapidly changing Docker landscape looks like by then!
How to Integrate Docker as Part of Your CD Pipeline?
Container technology like Docker promises to provide versionable, environment-independent application services in a snap. However, the tasks and tools involved in creating, validating, promoting, and delivering Docker containers into production environments are many, complex, and time-consuming.
To learn more about how to successfully incorporate Docker into your end-to-end Continuous Delivery pipeline, I invite you to join my colleague Nikhil Vaze and me for an upcoming webinar, where we’ll be discussing: