Continuous Integration in the Cloud
This article is the third part of the Cloud Delivery Blueprints series. It discusses how to go from no cloud infrastructure and no continuous integration setup to having a functioning Deployment Pipeline in Amazon Web Services. It covers high-level topics while also providing a reference implementation, so it’s easy to follow along. You can read part one here and part two here.
What we’re going to do today:
• Identify your build steps and how you unit test
• Write build.sh to automate build and unit test process
• Commit to your source repo
• Make Commit Stage Go Green
So, in the last post, we got your source into version control and your Jenkins server up and running. Now what? It’s time to start building the blocks of our pipeline. First up: setting up the Commit Stage of the pipeline.
What is the Commit Stage?
The commit stage is the first real stage of the pipeline. Its goal is to build artifacts for later stages and to run a fast suite of tests that confirms the code is ready to move on to those stages. The key word is ‘fast’: your commit stage shouldn’t take longer than it takes your developers to go get a cup of coffee. Less than five minutes is ideal; over ten and you’re taking too long.
The different tasks that get done during the commit stage are:
• Compile or syntax check your code
• Run commit stage tests
• Static analysis of the code
• Package code into a distributable binary (if necessary)
• Preparation of any other artifacts needed by later stages of the pipeline (like test databases, or other quickly producible artifacts)
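As a sketch, the tasks above can be laid out as a skeleton build.sh. The function names and echo messages here are placeholders, not real commands; substitute your project’s actual compile, test, and analysis invocations:

```shell
#!/bin/sh
# Skeleton commit-stage script. Every function body below is a placeholder;
# swap in your project's real commands.
set -e  # fail the whole stage as soon as any step fails

compile()           { echo "compile / syntax check"; }
run_unit_tests()    { echo "run commit stage tests"; }
static_analysis()   { echo "run static analysis"; }
package()           { echo "package distributable binary"; }
prepare_artifacts() { echo "prepare artifacts for later stages"; }

compile
run_unit_tests
static_analysis
package
prepare_artifacts
echo "commit stage OK"
```

With `set -e`, the first failing step stops the script with a non-zero exit code, which is what tells Jenkins to mark the build as failed.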
“Commit stage tests” is a bit of a nebulous term, so let’s be specific: these are the fast-running unit tests that cover a decent amount of your code base. Those two goals seem to work against each other: if the tests are fast, how can they be comprehensive? At the commit stage, we’re looking for unit-test-style verification; these tests should focus on very specific units of code, and may rely heavily on mocking out other parts of the codebase. If you can avoid interacting with outside resources, like the file system or a database, that’s ideal. It’s not strictly necessary, though, as long as your tests run quickly.
The other set of tests is static analysis. Static analysis examines the source code, looking for patterns that suggest faulty code, poorly performing code, and so on. Most languages have some sort of static analysis tooling, and some tools specialize in looking for certain things. For example, for Ruby apps, Reek looks for code smells (like bad variable names or a lack of comments), while Brakeman looks for specific Ruby on Rails security holes. Which static analysis tools make sense for your project is left as an exercise for the reader, but a quick Google search for “[language name] static analysis” will most likely give you more results than you care to dig through.
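As a sketch, a static-analysis step for a Rails app might call the two tools just mentioned. The `app/` path assumes a standard Rails layout, and the guard lets the script run even where a tool is not installed; findings here are reported rather than failing the build:

```shell
# Static-analysis step for a Rails project (sketch). The guard skips each
# tool when it is not installed, so the script still runs everywhere.
run_if_present() {
  tool="$1"; shift
  if command -v "$tool" >/dev/null 2>&1; then
    "$tool" "$@" || echo "$tool reported issues"
  else
    echo "$tool not installed; skipping"
  fi
}

run_if_present reek app/    # code smells: bad names, long methods, etc.
run_if_present brakeman -q  # Rails-specific security scanning
```

If you want the analysis to gate the build, drop the `|| echo ...` fallback so a non-zero exit propagates.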
Not all languages require the code to be packaged, but for the ones that do, this is the step to do it in. For example, if you have a Java project, you’ll want to package your jars/wars/ears in this step and make those artifacts available to later stages in the pipeline.
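As a language-neutral sketch of the packaging step, a build.sh fragment might bundle the build output into a versioned archive for later stages to pick up. The directory names and version scheme here are assumptions for illustration:

```shell
# Packaging sketch: bundle build output into a versioned archive that later
# pipeline stages can fetch. Directory names and the version scheme are
# illustrative assumptions.
set -e
mkdir -p build/output artifacts
echo "compiled code would live here" > build/output/app.bin

# Jenkins exports BUILD_NUMBER for each build; default to 0 for local runs
VERSION="1.0.${BUILD_NUMBER:-0}"
tar czf "artifacts/app-${VERSION}.tar.gz" -C build/output .
echo "packaged artifacts/app-${VERSION}.tar.gz"
```

Versioning the artifact with the build number means later stages can always trace a deployed binary back to the commit-stage run that produced it.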
The last thing to do in the commit stage is to set up any other resources that future stages will need. These could be test databases that contain a subset of what a production database would look like, an archive of all the images used on the site, docker containers, etc.
The Reference Implementation’s Commit Stage
The reference implementation we’ve developed as a companion to this series comes with a commit stage build job, prepopulated with the configuration to pull down the code from a GitHub repository and run the build script. If you’ve spun up a Jenkins server (directions for that are in our last entry) and entered a custom GitHub repo, you’ll see it in the configuration. If you didn’t, it’ll point to Stelligent’s CanaryBoard project.
If you look at the configuration of the commit stage job, there are a couple of important bits to check out:
• The source control repo: this defines the repository where your source code lives. It was defined as a parameter when the Jenkins OpsWorks stack was created; by default, it points to Stelligent’s CanaryBoard repo.
• The build step: the build step is pretty simple, and calls the build.sh that exists in the source control repo.
If you pointed the Jenkins server at your own source control repository, you’ll need a build.sh (or equivalent) for the commit stage to call. Alternatively, you could script everything out in the build step itself, but this is a bad practice for a couple of reasons: it’s more difficult to get the build logic into source control, and you can’t build and test that logic locally. When the build logic is a simple script that is part of the source repository, developers can run the build steps on their own machines before committing.
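That local workflow can be sketched as a small wrapper around the same build.sh the commit stage runs; it assumes build.sh sits at the repository root:

```shell
# Run the same commit-stage script locally before committing (sketch).
# Assumes build.sh is at the repository root.
precommit_check() {
  if ./build.sh; then
    echo "build passed; safe to commit"
  else
    echo "build failed; fix before committing" >&2
    return 1
  fi
}
```

Running `precommit_check` before `git commit` gives a developer the same red/green signal the pipeline will produce, without waiting for Jenkins.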
Writing your own build.sh
Writing a build script is pretty easy. In fact, build scripts usually get written pretty early in a project, so you likely already have a build script for your project. However, your build script may not be called build.sh, and it may not even be an actual shell script.
Most languages have build tools that will do all the work that a commit stage is supposed to do. A good number of Java projects will use Maven to handle most of this work. A lot of Ruby projects use Rake and Bundler to handle most of these steps. (CanaryBoard uses these tools.) .NET projects use Visual Studio to build the code, and there are many third party tools to handle other commit stage activities.
Depending on how your project is set up, you’ll need to write your build.sh differently. You’ll need to figure out the different parts of your project’s build cycle and then script them out. If you’re using build tools, you’ll need to figure out how to call them and what each one does.
In most cases, this information is already at hand, even if you aren’t practicing continuous delivery yet. You’re delivering the software at some point, so someone should know how to build and package it, and they should be able to provide you with the necessary build steps. Once you have those, it’s a matter of scripting them out and committing them to your repo.
Note: your build script doesn’t need to be written as a shell script. You can write your script in any language you like, and then either call it from a file named build.sh, or edit the commit stage configuration to call your script.
Regardless of how you write your build script, you’ll want to make sure you hit three main areas:
• Compile or syntax check the code
• Run your unit-level tests
• Conduct any static analysis of your code
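Here’s a minimal, runnable build.sh touching all three areas, using only standard tools as stand-ins: `sh -n` for the syntax check, a shell function as the unit under test, and a grep for leftover FIXME markers as a toy static-analysis pass. Replace each step with your language’s real tools:

```shell
#!/bin/sh
set -e

# Set up a tiny "project" so the sketch is self-contained
mkdir -p src
printf 'greet() { echo "hello"; }\n' > src/lib.sh

# 1. Compile / syntax check
sh -n src/lib.sh

# 2. Unit-level tests
. ./src/lib.sh
[ "$(greet)" = "hello" ] || { echo "unit test failed"; exit 1; }

# 3. Static analysis (toy): fail the build on leftover FIXME markers
if grep -rn "FIXME" src/; then
  echo "static analysis failed"; exit 1
fi

echo "commit stage checks passed"
```

Each step exits non-zero on failure, so the script as a whole works as the pass/fail gate the commit stage needs.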
Once your build.sh is complete and you’ve completed your local testing, commit it to the repository. The pipeline should detect the change and automatically kick off a build, automatically calling your build.sh file.
The commit stage is the first step of a deployment pipeline and provides quick feedback to developers that their code is working as intended. It builds, packages, and tests the code, acting as a gate to later stages and preparing items for them. The key to a good commit stage is to provide as much valuable feedback as possible in a short amount of time; less than five minutes is ideal, and more than ten is too much.
Our Jenkins server looks for a file named build.sh that handles all of these items. Because it is part of the application’s source repo, developers can also run it before they commit, to make sure they won’t be the ones breaking the build.
The commit stage is fast, but its testing is not comprehensive. In the next post we’ll cover the Acceptance stage and talk about integration testing. We’ll also cover automated infrastructure provisioning, a key part of the Acceptance stage as well as all the stages that follow.