
Continuous Integration for a MEAN Application With Docker and Codeship

A detailed tutorial on automating your production servers to automatically update while working with the MEAN stack and Docker.


In my previous post, I talked about deploying a MEAN web application in Docker containers on AWS. While the blog covered the steps to first set up and run a MEAN application, we need an automated process in place to update our production server(s) with the latest code as developers continue to work on the application.

Codeship provides a repository-driven infrastructure that can be used to trigger deployments based on code check-ins. We can set up our project to run tests or deploy code based on the branch or tag name of the check-in.

In this post, I’ll spell out all the steps I took to set up a continuous integration system for automatic updates to my MEAN application, using Codeship’s Jet platform for Docker. Let’s start with setting up our project on Codeship.

Codeship Jet Setup

  1. Request a free trial of the Codeship Jet tool. Throughout the trial period, their team helped me successfully configure my project.
  2. Once your trial is set up, log in to Codeship to create a new project. If you don’t have any existing projects, you will see a button to create a new one. Otherwise, you can use the top left menu to select or create new projects.
  3. Connect your project with your source code repository. I have my code on GitHub, so I clicked on the GitHub icon.
  4. Enter the URL of your repository and click Connect.
  5. You should see a screen that lets you choose a Docker-based infrastructure. For existing Codeship users, you will only see this option if you have requested a Docker Jet trial. Select this option. That’s all we need to do to set up our project successfully through the Codeship UI.

Create the Docker Images for Your Workflow

Our next step is to create the necessary Docker images that we’ll need in the workflow. Codeship already has images that can be used for AWS deployment with support for the AWS CLI, but we’re free to use any Docker image we need. I set up the following two images on Docker Hub for my workflow, and I’ll elaborate on them in the next sections:

  • dreamerkumar/meanjs:latest
  • dreamerkumar/docker-wordpress-nginx-ssh:latest


Set Up Your CI Workflow

Now that my repository is linked to my Codeship project, I can manage my workflow from my repository itself. All the information will be contained in two yaml files in the root folder of my repository:

  • codeship-services.yml
  • codeship-steps.yml


codeship-services.yml

This file contains information about the Docker containers I need to spin up for my workflow. The format of this file is very similar to a docker-compose file. Each of the containers that I need to instantiate can be referenced as a service. Here’s what my file looks like:

web:
  image: dreamerkumar/meanjs:latest

deploy:
  image: dreamerkumar/docker-wordpress-nginx-ssh:latest

I essentially have two services with corresponding image files:

  • web: For my workflow, I first need to run my tests. For this, I need a Docker image, which I have set up on Docker Hub, that has everything needed to run my application. More information on this setup can be found in my previous post.
  • deploy: After I run my tests, I need another container that I will use to deploy the latest code to AWS. Since I will SSH into my AWS instance to update it, I picked an image that has SSH installed.
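Before wiring these images into CI, it can help to exercise them locally. The wrapper below is my own convenience sketch, not part of the article's workflow: it runs the same command, in the same image, that the Codeship web step will run, so a broken test suite is caught before a push.

```shell
# Local smoke test (a convenience wrapper; the image name and test
# command are the ones used by the CI web step described below).
run_web_step() {
  docker run --rm dreamerkumar/meanjs:latest bash -c \
    "mongod --fork --logpath /var/log/mongodb.log && cd Development/meanjs && git pull && NODE_ENV=test grunt test"
}
```

Calling `run_web_step` locally mirrors the CI web step exactly, so a green run here should mean a green run on Codeship.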


codeship-steps.yml

This file contains the steps of the continuous delivery workflow. Each step contains a name and a service parameter. The command parameter specifies the startup command to run once the container is up and running.

- name: web
  service: web
  command: bash -c "mongod --fork --logpath /var/log/mongodb.log && cd Development/meanjs && git pull && NODE_ENV=test grunt test"
- name: deploy
  service: deploy
  command: ./update_meanjs_container_on_aws.sh

First, I spin up the web service containing the meanjs container. In the startup command, I start a bash session with bash -c.

We can specify multiple bash commands using && as the separator. I first run the mongod service to get a local MongoDB instance running on the default port, 27017. My application will connect to this database. Next, I go to the source code directory, pull the latest code from GitHub, and then fire the grunt command to run the tests.
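The && chaining matters for CI semantics: each command runs only if the one before it succeeded, so a failing test run aborts the chain and the step exits non-zero instead of carrying on. A minimal sketch of that behavior:

```shell
# Simulate a chained CI step: "deploy" runs only if the preceding
# command (the simulated test run) exits with status 0.
simulate_step() {
  ( exit "$1" ) && echo "deploy" || echo "skipped"
}
```

`simulate_step 0` prints "deploy"; `simulate_step 1` prints "skipped", which is exactly why a failed `grunt test` keeps the later commands from running.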

Next, I spin up the deploy service. Its image already has SSH installed, and I have the .pem key for authenticating against my AWS instance saved in the root directory of the image. I also created a small script file, update_meanjs_container_on_aws.sh, that runs on startup of this container. It has the following contents:

ssh -i "meanjs_key.pem" ec2-user@ec2-52-207-226-163.compute-1.amazonaws.com ./run_updated_meanjs_container.sh

It’s just a one-line command to SSH into the instance and run the script in the file run_updated_meanjs_container.sh, which is saved in the root directory of my AWS instance. Its contents are as follows:
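One practical wrinkle: in a non-interactive CI container there is no TTY to answer SSH's host-key prompt, so the connection can hang on first connect. The options below are my assumption for a CI context, not part of the original script: BatchMode fails fast instead of prompting, and accepting the host key non-interactively skips the confirmation question.

```shell
# Hedged variant of the deploy command for non-interactive CI runs.
# BatchMode=yes and StrictHostKeyChecking=no are assumptions added here;
# the key file, host, and remote script are the ones from the article.
SSH_OPTS="-o BatchMode=yes -o StrictHostKeyChecking=no"
SSH_TARGET="ec2-user@ec2-52-207-226-163.compute-1.amazonaws.com"
DEPLOY_CMD="ssh -i meanjs_key.pem $SSH_OPTS $SSH_TARGET ./run_updated_meanjs_container.sh"
echo "$DEPLOY_CMD"
```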

echo "Inside the aws instance. Stopping the running meanjs container"
docker stop meanjs
echo "Removing the meanjs container"
docker rm meanjs
echo "Run new meanjs container and run node server within it"
docker run --name meanjs --link mymongodb:db_1 -p 80:3000 -d dreamerkumar/meanjs:latest bash -c "cd /Development/meanjs; git pull; npm install; bower install; grunt build; NODE_ENV=production grunt"

I first bring down the site by stopping the currently running application container (meanjs). Then I also remove it using the rm command. Next, I fire up a new container. On startup of this container, I include all the commands to get the latest code, build my files, and finally run the grunt command to start the web instance.
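One edge case worth guarding against: on the very first deploy there is no old container to stop, and a plain `docker stop`/`docker rm` would exit non-zero and fail the step. The function below is a hedged variant of the replacement script, with `|| true` added (my change, not in the original) to make the cleanup idempotent; the image, container name, and link are the ones from the article.

```shell
# Idempotent container replacement: cleanup steps tolerate a missing
# container, then a fresh one is started with the same settings as before.
replace_container() {
  docker stop meanjs 2>/dev/null || true
  docker rm meanjs 2>/dev/null || true
  docker run --name meanjs --link mymongodb:db_1 -p 80:3000 -d \
    dreamerkumar/meanjs:latest bash -c \
    "cd /Development/meanjs; git pull; npm install; bower install; grunt build; NODE_ENV=production grunt"
}
```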

The MongoDB container stays running as is, and the new meanjs container gets connected to it when instantiated. This way, I successfully switch my web server while still using the same database.

To recap my deploy step, I get Codeship to run a Docker container with necessary instructions to SSH into my AWS instance. Once the connection to the AWS instance is established, I run a script to replace the current meanjs container with a new one. The new container then builds the latest code and then runs the node server inside it.

If I had to deploy the same code to multiple web servers, it would probably be better to add a step that builds a Docker image artifact with the latest code baked in. Each AWS instance could then simply run that image, saving time on the code build steps. But since I have only one server in my workflow, running these steps on spin-up of the single container works fine.
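That build-once approach can be sketched as follows. The hostnames are placeholders and the loop is my assumption, not part of the article's single-server workflow: the image is built and pushed once, then each server only pulls and restarts it, skipping the npm/bower/grunt work on every host.

```shell
# Build-once, deploy-many sketch (hostnames are hypothetical).
IMAGE="dreamerkumar/meanjs:latest"

build_and_push() {
  # Bake the latest code into the image once, then publish it.
  docker build -t "$IMAGE" . && docker push "$IMAGE"
}

deploy_to() {
  # $1 is a server hostname; ssh options omitted for brevity.
  ssh -i meanjs_key.pem "ec2-user@$1" \
    "docker pull $IMAGE && docker stop meanjs; docker rm meanjs; \
     docker run --name meanjs --link mymongodb:db_1 -p 80:3000 -d $IMAGE"
}
```

Usage would look like `build_and_push && for h in web1 web2; do deploy_to "$h"; done`, with the host list coming from however the fleet is tracked.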

Running the CI Build

My workflow is set up to run tests and deploy on check-in to the master branch. So I can validate my build by making a small change to my code and then pushing it to GitHub:

git push -u origin master

When I log in to Codeship, I see that the project is already queued to run.


After clicking on the project link, we can see the ongoing status of the build. On the left nav is the list of services; below that is the list of steps. Clicking on any step shows the complete logs for that step. The following screenshot shows my web step with the logs from my test runs:

Web step logs

Similarly, I can see the deploy step logs by selecting it on the left nav.

Deploy step logs

The green checkmark on each step indicates build success. This means that my code got successfully deployed to my AWS instance.


Conclusion

Having a repository-driven infrastructure helps automate our testing and deployment workflows without needing any involvement from individual developers. This way, developers can focus on writing application code while the infrastructure works behind the scenes to perform the necessary tasks based on developer intent.




Published at DZone with permission of Vishal Kumar, DZone MVB. See the original article here.

