
Running a MEAN App in Docker Containers on AWS

Learn how to run MongoDB in a separate container from the web application. There are lots of benefits to this approach of having isolated environments.



The rate of adoption of Docker as a containerization solution is soaring. A lot of companies now use Docker containers to run apps, and in many scenarios a Docker container is a better approach than spinning up a full-blown virtual machine.

In this post, I’ll break down all the steps I took to successfully install and run a web application built on the MEAN stack (MongoDB, Express, AngularJS, and Node.js). I hosted the application in Docker containers on Amazon Web Services (AWS).

Also, I ran the MongoDB database and the web application in separate containers. There are lots of benefits to this approach of having isolated environments:

  1. Since each container has its own runtime environment, it’s easy to modify the environment of one application component without affecting other parts. We can change the installed software or try out different versions of the software until we figure out the best possible setup for that specific component.
  2. Since our application components are isolated, security issues are easy to deal with. If a container is attacked or a malicious script ends up being inadvertently run as part of an update, our other containers are still safe.
  3. Since it is easy to switch out and change the connected containers, testing becomes a lot easier. For example, if we want to test our web application with different sets of data, we can do that easily by connecting it to different containers set up for different database environments (a quick sketch of this follows the list).
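
As a quick sketch of that last point, suppose we have an application image (here called myapp, a stand-in name) that reads its database connection from a link alias named db_1. Switching data sets is then just a matter of re-running the app container against a different database container:

# Two separate database containers holding different data sets (names are examples).
docker run --name db-main -d mongo
docker run --name db-test -d mongo

# Run the same app image against the test data set...
docker run --name app-test --link db-test:db_1 -p 8080:3000 -d myapp

# ...or against the main data set, without changing the app image at all.
docker run --name app-main --link db-main:db_1 -p 80:3000 -d myapp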

MEAN Web Framework

The web application that we’re going to run is the framework code for MEAN.JS. This full-stack JavaScript solution helps you build fast, robust, and maintainable production web applications using MongoDB, Express, AngularJS, and Node.js.

Another great advantage of MEAN.JS is that we can use Yeoman generators to create the scaffolding for our application in minutes. It also has CRUD generators, which I have used heavily when adding new features to the application. The best part is that it is already well set up to support Docker deployment. It comes with a Dockerfile that can be built to create the container image, although we will use a prebuilt image to do it even faster (more on this later).
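
If you’d rather build the image yourself from that Dockerfile instead of pulling a prebuilt one, a minimal sketch would look something like this (the my-meanjs tag is just an example name, and it assumes the Dockerfile’s default command starts the app):

# Clone the framework code and build an image from its bundled Dockerfile.
git clone https://github.com/meanjs/mean.git meanjs
cd meanjs
docker build -t my-meanjs .
# Run it linked to a MongoDB container (see the rest of this post for the mymongodb setup).
docker run --link mymongodb:db_1 -p 80:3000 -d my-meanjs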

Running Docker on an Amazon Instance

You might already be aware that you can use basic AWS services free for a full year. The following steps will walk you through how to configure and run a virtual machine on AWS along with the Docker service (a rough AWS CLI equivalent is sketched after the list):

  1. To begin, create your free account on aws.amazon.com. Make sure to choose the Basic (Free) support plan. You will be redirected to the AWS welcome page.
  2. On this page, click “Launch Management Console.” Please bear with me as we will be clicking a lot of Launch buttons before we actually launch the instance. You will be redirected to a page that lists a plethora of AWS services.
  3. Click the first option on the top left called “EC2 (Virtual Servers in the Cloud).”
  4. Click “Launch Instance.” You will be prompted to select an image. I chose the first one, “Amazon Linux AMI,” but if you are more comfortable with any other Linux version, go ahead and choose that one.
  5. Click the Launch button.
  6. Next, you will be asked to choose an Instance Type. You can select the first option. The top nav bar displays all the steps in choosing our configuration, but the only custom configuration I needed to do was bump up the storage space for my instance. My first attempt with the default 8 GB failed because my Docker images needed more space to run, especially the MongoDB image I chose, which by default sets aside more than 3 GB to store its journal.
  7. To change the storage, click the fourth option, “Add Storage.”
  8. Then change the storage space from 8 GB to 16 GB and click “Review and Launch.” You will be taken to a final Review screen.
  9. Click “Launch” again. You will then be prompted to create a key pair that will be used to SSH into the instance after it is created. I like this being part of the flow as connecting to the instance will obviously be the first thing I would want to do after the instance is launched.
  10. Choose “Create a new key pair” from the drop-down and then type in a filename.
  11. Finally, click “Download Key Pair” to store this file in a folder on your computer. We will need to reference this file when we SSH into our instance.
  12. Ok, so for the last time, click the “Launch Instance” button. Your instance should be up and running in a few minutes. You will see a “View Instance” link to go to the EC2 dashboard, where you can see the details of the running instance.
  13. If you select your instance in the list and then click “Connect” at the top, there are some clear instructions for SSHing into the box. The instructions recommend using PuTTY, but I used my “Git Bash” application; it was already available on my box and includes an SSH client.

    Note: I will SSH through my Mac Terminal for the purpose of this blog, but I have successfully been able to do the same through a Windows box before. It’s not much different.

  14. From the Terminal, navigate to the folder where you saved the key pair file. Run the commands below to change the file’s permissions as instructed and then SSH into the AWS instance:
    MacBook-Pro:Documents vishalkumar$ chmod 400 meanjs_app_on_AWS.pem 
    MacBook-Pro:Documents vishalkumar$ ssh -i meanjs_app_on_AWS.pem 
    The authenticity of host ' (' can't be established. RSA key fingerprint is ef:23:dc:8d:73:93:1e:d4:90:f7:4e:9b:50:67:f5:1b. 
    Are you sure you want to continue connecting (yes/no)? yes 
    Warning: Permanently added '' (RSA) to the list of known hosts. 
    Run "sudo yum update" to apply all updates. 
    [ec2-user@ip-172-31-40-40 ~]$
  15. Following the recommendation that we get after connecting, run the yum update command:
    [ec2-user@ip-172-31-40-40 ~]$ sudo yum update
  16. Once the update installs, install Docker:
    [ec2-user@ip-172-31-40-40 ~]$ sudo yum install -y docker
  17. Start the Docker service:
    [ec2-user@ip-172-31-40-40 ~]$ sudo service docker start 
    Starting cgconfig service: [ OK ] 
    Starting docker: [ OK ] 
    [ec2-user@ip-172-31-40-40 ~]$
  18. Add the ec2-user to the Docker group so you can execute Docker commands without using sudo:
    [ec2-user@ip-172-31-40-40 ~]$ sudo usermod -a -G docker ec2-user
  19. Finally, log out and SSH back in. Run docker info to see if the command succeeds without sudo. You should see some stats returned by Docker:
    [ec2-user@ip-172-31-40-40 ~]$ logout
    Connection to closed. 
    MacBook-Pro:Documents vishalkumar$ ssh -i meanjs_app_on_AWS.pem 
    [ec2-user@ip-172-31-40-40 ~]$ docker info 
    Containers: 0 
    Images: 0 
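
For reference, roughly the same provisioning can also be scripted with the AWS CLI instead of clicking through the console. This is only a sketch: the AMI ID is a placeholder, and the key pair and security group still need to exist beforehand:

# Launch a t2.micro Amazon Linux instance with a 16 GB root volume (AMI ID is a placeholder).
aws ec2 run-instances \
    --image-id ami-xxxxxxxx \
    --instance-type t2.micro \
    --key-name meanjs_app_on_AWS \
    --block-device-mappings '[{"DeviceName":"/dev/xvda","Ebs":{"VolumeSize":16}}]'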

Running MongoDB Database as a Container

Now that we have Docker running on our Amazon instance, we can go ahead and run our containers.

As I mentioned before, we’re going to run our MongoDB database and our web application in separate containers. I chose the official Mongo image from Docker Hub. We can pull this image and run it as a detached container in one simple step:

[ec2-user@ip-172-31-40-40 ~]$ docker run --name mymongodb -d mongo

The last argument, mongo, is the name of the image from which Docker should create the container. Docker will first search for this image locally. When it doesn’t find it, it will go ahead and download it, along with all the base layers it depends on. Convenient!
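
If you’d rather separate the download from the run, you can also pull the image explicitly first and check what ended up on disk:

# Pull the image (and the layers it depends on) without starting a container yet.
docker pull mongo
# List the local images; mongo:latest should now appear.
docker images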

Docker will then run this image as a container. The -d flag ensures that it runs in detached mode (in the background) so that we can use this same shell to run our other commands. We can do a quick check after this to make sure that the container is up and running by using the docker ps command:

[ec2-user@ip-172-31-40-40 ~]$ docker ps -a 
2f93a31d4a3d  mongo:latest  "/entrypoint.sh mong  About a minute ago  Up About a minute  27017/tcp  mymongodb

The startup script for this image already runs the mongod service, listening on port 27017 by default. So there is literally nothing else we had to do here beyond that one docker run command.
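
If you want to double-check that mongod is really accepting connections, one option (assuming the official image includes the mongo shell, which it did at the time) is to exec into the container, or simply read its logs:

# Ask the server for its status from inside the container.
docker exec -it mymongodb mongo --eval 'db.serverStatus().ok'
# Or look for the "waiting for connections on port 27017" line in the logs.
docker logs mymongodb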

Running the MEAN Stack Container

The next phase of this project is to run our web application as a separate container.

The MEAN stack code base has a lot of dependencies like Node, Bower, Grunt, etc. But once again, we don’t need to worry about installing them if we have an image that already includes them. It turns out there is an image on Docker Hub that already has everything we need.

Once again, we will pull it in and run it with just one command:

[ec2-user@ip-172-31-40-40 ~]$ docker run -i -t --name mymeanjs --link mymongodb:db_1 -p 80:3000 maccam912/meanjs:latest bash 
Status: Downloaded newer image for maccam912/meanjs:latest 

Now there is a lot going on with this single command. To be honest, it took me some time to get it exactly right.

  1. The most important piece here is the --link mymongodb:db_1 argument. It adds a link between this container and our mymongodb container. This way, our web application is able to connect to the database running in the mymongodb container. db_1 is the alias name that we’re choosing to reference this connected container. Our MEAN application is set to use db_1, so it’s important to keep that name (see the sketch after this list).
  2. Another important argument is -p 80:3000, where we’re mapping port 3000 in the container to port 80 on the host machine. We know that web applications are accessed through the default port of 80 using the HTTP protocol. Our MEAN application is set to run on port 3000. This mapping lets us access the application from outside the container over the host’s port 80.
  3. We of course have to specify the image from which the container should be built. As we discussed before, maccam912/meanjs:latest is the image we’ll use for this container.
  4. The -i flag is for interactive mode, and -t allocates a pseudo-terminal. This essentially connects our terminal to the stdin and stdout streams of the container. This Stack Overflow question explains it in a little more detail.
  5. The final argument, bash, drops us into a shell inside the container, where we will run the commands required to get our MEAN application running. We could also bash into an already-running Docker container, but here we are doing it all with just one command.
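
To see what that link actually gives us, you can poke around from the bash prompt inside the container. With Docker’s legacy links, the alias shows up in /etc/hosts and in a set of generated environment variables. I believe the MEAN.JS production config picks its Mongo host out of one of these (something like DB_1_PORT_27017_TCP_ADDR, which is why the db_1 alias matters), but treat that exact variable name as an assumption:

# Run these from the bash prompt inside the mymeanjs container.
grep db_1 /etc/hosts     # the alias resolves to the mymongodb container's IP address
env | grep DB_1          # link-generated variables such as DB_1_PORT_27017_TCP_ADDR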


Building and Running our MEAN Application

Now that we’re inside our container, running the ls command shows us many folders including one called Development. We will use this folder for our source code.

cd into this folder and run git clone to get the source code for our MEAN.JS application from GitHub:

root@7f4e72af1cf0:/# cd Development/ 
root@7f4e72af1cf0:/Development# git clone https://github.com/meanjs/mean.git meanjs 
Cloning into 'meanjs'... remote: 
Checking connectivity... done.

cd into our MEAN.JS folder. We can run npm install to download all the package dependencies:

root@7f4e72af1cf0:/Development# cd meanjs 
root@7f4e72af1cf0:/Development/meanjs# ls 
Dockerfile LICENSE.md Procfile README.md app bower.json config fig.yml gruntfile.js karma.conf.js package.json public scripts server.js 
root@7f4e72af1cf0:/Development/meanjs# npm install

One hiccup to watch out for: for some reason, my npm install hung during a download. So I used Ctrl + C to terminate it, deleted all the packages to start from scratch, and ran npm install again. Thankfully, this time it worked:

root@7f4e72af1cf0:/Development/meanjs# rm -rf node_modules/ 
root@7f4e72af1cf0:/Development/meanjs# npm install

Install the front-end dependencies by running bower. Since I’m logged in as the superuser, bower doesn’t like it, but it does give me the option to run it anyway using the --allow-root option:

root@7f4e72af1cf0:/Development/meanjs# bower install 
bower ESUDO Cannot be run with sudo 
You can however run a command with sudo using --allow-root option
root@7f4e72af1cf0:/Development/meanjs# bower install --allow-root

Run the grunt build task to run the linter and minify the JS and CSS files:

root@7f4e72af1cf0:/Development/meanjs# grunt build 
Done, without errors.

Now, we are ready to run our application. Our MEAN stack looks for an environment variable called NODE_ENV, which we will set to production, and we’ll use the default grunt task to run the application. If you did all the steps right, you should see this final output:

root@7f4e72af1cf0:/Development/meanjs# NODE_ENV=production grunt 
MEAN.JS application started 
Environment: production 
Port: 3000 
Database: mongodb://

Validating Our Application from the Browser

Our application would have thrown errors if there was a problem running it or if the database connection had failed. Since everything looks good, it’s time to finally access our web application through the browser.

But first, we’ll need to open our virtual machine’s port 80 to inbound HTTP traffic.
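
Before touching the security group, you can sanity-check the port mapping from a second SSH session on the EC2 instance itself; the container’s port 3000 should already be reachable on the host’s port 80:

# From a second SSH session on the host (not inside the container).
curl -I http://localhost/    # should return an HTTP 200 from the MEAN.JS app
docker ps                    # should show 0.0.0.0:80->3000/tcp for the mymeanjs container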

  1. Go back to the EC2 dashboard.
  2. Click on the security group link for the given instance. You should see the settings page for the security group.
  3. Click the “Inbound” tab at the bottom, and then click the “Edit” link. You should see that SSH is already added. Now we need to add HTTP to the list of inbound rules.
  4. Click “Add Rule.”
  5. Select HTTP from the dropdown menu and leave the default setting of port 80 for the Port Range field. Click “Save.”
  6. Pick up the URL to our instance from the Public DNS column and hit that URL from the browser. You should see the homepage of our fabulous application. You can validate it by creating some user accounts and signing in to the app.

So that’s it. We’ve managed to run our application on AWS inside isolated Docker containers. There were a lot of steps involved, but at the crux of it all, we really needed to run only two smart Docker run commands to containerize our application.
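
For the record, here are those two commands again, back to back:

docker run --name mymongodb -d mongo
docker run -i -t --name mymeanjs --link mymongodb:db_1 -p 80:3000 maccam912/meanjs:latest bash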



Published at DZone with permission of Vishal Kumar. See the original article here.

