Running a MEAN App in Docker Containers on AWS
Learn how to run MongoDB in a separate container from the web application. There are lots of benefits to this approach of having isolated environments
The rate of adoption of Docker as a containerized solution is soaring. A lot of companies are now using Docker containers to run apps. In a lot of scenarios, using Docker containers can be a better approach than spinning up a full-blown virtual machine.
In this post, I’ll break down all the steps I took to successfully install and run a web application built on the MEAN stack (MongoDB, Express, AngularJS, and Node.js). I hosted the application in Docker containers on Amazon Web Services (AWS).
Also, I ran the MongoDB database and the web application in separate containers. There are lots of benefits to this approach of having isolated environments:
- Since each container has its own runtime environment, it’s easy to modify the environment of one application component without affecting other parts. We can change the installed software or try out different versions of it, until we figure out the best possible setup for that specific component.
- Since our application components are isolated, security issues are easy to deal with. If a container is attacked or a malicious script ends up being inadvertently run as part of an update, our other containers are still safe.
- Since it is easy to switch out and change the connected containers, testing becomes a lot easier. For example, if we want to test our web application with different sets of data, we can do that easily by connecting it to different containers set up for different database environments.
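The two-container layout described above can also be written down declaratively. As a hypothetical sketch (using the same images and names we'll use later in this post, in the classic fig/docker-compose v1 syntax that matches the fig.yml shipped with MEAN.JS), it would look something like:

```yaml
# Sketch only: two isolated services, wired together the same way
# as the manual docker run commands later in this post.
web:
  image: maccam912/meanjs:latest
  links:
    - db:db_1          # the web app sees the database under the alias db_1
  ports:
    - "80:3000"        # host port 80 -> container port 3000
db:
  image: mongo         # official MongoDB image from Docker Hub
```

We'll run the equivalent commands by hand below, which makes it easier to see what each piece does.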
MEAN Web Framework
A great advantage of MEAN.JS is that we can use Yeoman generators to create the scaffolding for our application in minutes. It also has CRUD generators, which I used heavily when adding new features to the application. The best part is that it is already well set up to support Docker deployment. It comes with a Dockerfile that can be built to create the container image, although we will use a prebuilt image to do it even faster (more on this later).
Running Docker on an Amazon Instance
You might already be aware that you can use basic AWS services free for a full year. The following steps will walk you through how to configure and run a virtual machine on AWS along with Docker service:
- To begin, create your free account on aws.amazon.com. Make sure to choose the Basic (Free) support plan. You will be redirected to the AWS welcome page.
- On this page, click “Launch Management Console.” Please bear with me as we will be clicking a lot of Launch buttons before we actually launch the instance. You will be redirected to a page that lists a plethora of AWS services.
- Click the first option on the top left called “EC2 (Virtual Servers in the Cloud).”
- Click “Launch Instance.” You will be prompted to select an image. I chose the first one, “Amazon Linux AMI,” but if you are more comfortable with any other Linux version, go ahead and choose that one.
- Click the Launch button.
- Next, you will be asked to choose an Instance Type. You can select the first option. The top nav bar displays all the steps in choosing our configuration, but the only custom configuration I needed was to bump up the storage space for my instance. My first attempt with the default 8 GB failed because my Docker images needed more space to run, especially the MongoDB image I chose, which by default reserves more than 3 GB to store its journal.
- To change the storage, click the fourth option, “Add Storage.”
- Then change the storage space from 8 GB to 16 GB and click “Review and Launch.” You will be taken to a final Review screen.
- Click “Launch” again. You will then be prompted to create a key pair that will be used to SSH into the instance after it is created. I like this being part of the flow as connecting to the instance will obviously be the first thing I would want to do after the instance is launched.
- Choose “Create a new key pair” from the drop down and then type in a filename.
- Finally click “Download Key Pair” to store this file on your computer to a folder. We will need to reference this file when we SSH into our instance.
- Ok, so for the last time, click “Launch Instance” button. Your instance should be up and running in a few minutes. You will see a “View Instance” link to go to the EC2 dashboard where you can see the details of the running instance.
- If you select your instance in the list and then click “Connect” at the top, there are some clear instructions to SSH into the box. The instructions recommend using PuTTY on Windows, but I used my “Git Bash” application, which was already available on my box and includes an SSH client.
Note: I will SSH through my Mac Terminal for the purpose of this blog, but I have successfully been able to do the same through a Windows box before. It’s not much different.
- From the Terminal, navigate to the folder where you saved the key pair file. Run the commands below to restrict the file’s permissions as instructed and then SSH into the AWS instance:
MacBook-Pro:Documents vishalkumar$ chmod 400 meanjs_app_on_AWS.pem
MacBook-Pro:Documents vishalkumar$ ssh -i meanjs_app_on_AWS.pem firstname.lastname@example.org
The authenticity of host '18.104.22.168 (22.214.171.124)' can't be established.
RSA key fingerprint is ef:23:dc:8d:73:93:1e:d4:90:f7:4e:9b:50:67:f5:1b.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '126.96.36.199' (RSA) to the list of known hosts.
Run "sudo yum update" to apply all updates.
[ec2-user@ip-172-31-40-40 ~]$
- Following the recommendation that we get after connecting, run the yum update command:
[ec2-user@ip-172-31-40-40 ~]$ sudo yum update
- Once the update installs, install Docker:
[ec2-user@ip-172-31-40-40 ~]$ sudo yum install -y docker
- Start the Docker service:
[ec2-user@ip-172-31-40-40 ~]$ sudo service docker start
Starting cgconfig service:                                 [  OK  ]
Starting docker:                                           [  OK  ]
[ec2-user@ip-172-31-40-40 ~]$
- Add the ec2-user to the Docker group so you can execute Docker commands without using sudo:
[ec2-user@ip-172-31-40-40 ~]$ sudo usermod -a -G docker ec2-user
- Finally, log out and SSH back in so the new group membership takes effect. Run “docker info” to verify that the command succeeds without sudo. You should see some stats returned by Docker:
[ec2-user@ip-172-31-40-40 ~]$ logout
Connection to 188.8.131.52 closed.
MacBook-Pro:Documents vishalkumar$ ssh -i meanjs_app_on_AWS.pem email@example.com
[ec2-user@ip-172-31-40-40 ~]$ docker info
Containers: 0
Images: 0
...
Running MongoDB Database as a Container
Now that we have Docker running on our Amazon instance, we can go ahead and run our containers.
As I mentioned before, we’re going to run our MongoDB database and our web application in separate containers. I chose the official Mongo image on Docker Hub. We can pull this image and run it as a detached container in one simple step:
[ec2-user@ip-172-31-40-40 ~]$ docker run --name mymongodb -d mongo
The last argument mongo is the name of the image from which it should create the container. Docker will first search for this image locally. When it doesn’t find it, it will go ahead and download it and all the base images that it is dependent on. Convenient!
Docker will then run this image as a container. The -d flag ensures that it runs in detached mode (in the background), so we can use this same shell to run our other commands. We can do a quick check after this to make sure the container is up and running by using the docker ps command:
[ec2-user@ip-172-31-40-40 ~]$ docker ps -a
CONTAINER ID   IMAGE          COMMAND                CREATED              STATUS              PORTS       NAMES
2f93a31d4a3d   mongo:latest   "/entrypoint.sh mong   About a minute ago   Up About a minute   27017/tcp   mymongodb
The startup script for this image already runs the mongo service listening on port 27017 by default. So there is literally nothing else we had to do here except for that one docker run command.
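If you want a little more reassurance than docker ps, a quick sanity check (a sketch, assuming the official mongo image, which ships the mongo client) is to tail the logs and ping the server from a throwaway linked container:

```shell
# Confirm mongod finished starting up
docker logs mymongodb

# Optional: run a disposable linked container and ping the server.
# --rm removes it when the shell command exits.
docker run -it --rm --link mymongodb:mongo mongo \
    sh -c 'mongo --host mongo --eval "db.serverStatus().ok"'
```

If the second command prints 1, the database is up and reachable over the link.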
Running the MEAN Stack Container
The next phase of this project is to run our web application as a separate container.
The MEAN stack code base has a lot of dependencies like Node, Bower, Grunt, etc. But once again, we don’t need to worry about installing them if we have an image that already has all these dependencies. Turns out there is an image on the Docker Hub that already has everything we need.
Once again, we will pull it in and run it with just one command:
[ec2-user@ip-172-31-40-40 ~]$ docker run -i -t --name mymeanjs --link mymongodb:db_1 -p 80:3000 maccam912/meanjs:latest bash
...
Status: Downloaded newer image for maccam912/meanjs:latest
root@7f4e72af1cf0:/#
Now there is a lot going on with this single command. To be honest, it took me some time to get it exactly right.
- The most important piece here is the --link mymongodb:db_1 argument. It adds a link between this container and our mymongodb container. This way, our web application is able to connect to the database running in the mymongodb container. db_1 is the alias name we’re choosing to reference the connected container. Our MEAN application is set up to use db_1, so it’s important to keep that name.
- Another important argument is -p 80:3000, which maps port 3000 in the container to port 80 on the host machine. Web applications are conventionally accessed over HTTP on port 80, while our MEAN application is set to run on port 3000. This mapping lets us reach the application from outside the container on host port 80.
- We of course have to specify the image from which the container should be built. As we discussed before, maccam912/meanjs:latest is the image we’ll use for this container.
- The -i flag is for interactive mode, and -t allocates a pseudo-terminal. Together they connect our terminal to the stdin and stdout streams of the container.
- The argument bash hooks us into the container where we will run the required commands to get our MEAN application running. We can bash into a previously running Docker container, but here we are doing all that with just one command.
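For reference, the “bash into a previously running container” alternative mentioned above is a single command (assuming the container had been started detached with -d instead of interactively):

```shell
# Attach an interactive shell to an already-running container
docker exec -it mymeanjs bash
```

Here we skip that by passing bash directly to docker run, which starts the container and drops us into a shell in one step.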
Building and Running our MEAN Application
Now that we’re inside our container, running the ls command shows us many folders including one called Development. We will use this folder for our source code.
cd into this folder and run git clone to get the source code for our MEAN.JS application from GitHub:
root@7f4e72af1cf0:/# cd Development/
root@7f4e72af1cf0:/Development# git clone https://github.com/meanjs/mean.git meanjs
Cloning into 'meanjs'...
remote: ...
Checking connectivity... done.
cd into our MEAN.JS folder. We can run npm install to download all the package dependencies:
root@7f4e72af1cf0:/Development# cd meanjs
root@7f4e72af1cf0:/Development/meanjs# ls
Dockerfile  LICENSE.md  Procfile  README.md  app  bower.json  config  fig.yml  gruntfile.js  karma.conf.js  package.json  public  scripts  server.js
root@7f4e72af1cf0:/Development/meanjs# npm install
One hiccup to watch out for: for some reason, my npm install hung during a download. So I used Ctrl + C to terminate it, deleted all the downloaded packages to start from scratch, and ran npm install again. Thankfully, this time it worked:
^C
root@7f4e72af1cf0:/Development/meanjs# rm -rf node_modules/
root@7f4e72af1cf0:/Development/meanjs# npm install
Install the front-end dependencies by running bower. Since we’re logged in as the superuser, bower refuses to run, but it offers the --allow-root option to run anyway:
root@7f4e72af1cf0:/Development/meanjs# bower install
bower ESUDO Cannot be run with sudo
...
You can however run a command with sudo using --allow-root option
root@7f4e72af1cf0:/Development/meanjs# bower install --allow-root
Run the grunt build task to lint and minify the JS and CSS files:
root@7f4e72af1cf0:/Development/meanjs# grunt build ... Done, without errors.
Now, we are ready to run our application. Our MEAN stack looks for an environment variable called NODE_ENV, which we will set to production, and uses the default grunt task to run the application. If you did all the steps right, you should see this final output:
root@7f4e72af1cf0:/Development/meanjs# NODE_ENV=production grunt
...
MEAN.JS application started
Environment:     production
Port:            3000
Database:        mongodb://172.17.0.1/mean
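Where did that database address come from? Docker’s legacy --link mechanism injects environment variables into the web container, named after the uppercased alias (db_1 in our case), and the application builds its connection string from them. As a quick sketch, you can inspect them from inside the mymeanjs container (the exact IP will vary on your instance):

```shell
# Inside the mymeanjs container: connection details injected by --link
env | grep DB_1
# Expect variables such as DB_1_PORT_27017_TCP_ADDR (the mongodb
# container's IP) and DB_1_PORT_27017_TCP_PORT (27017)
```

This is why keeping the db_1 alias in the docker run command matters: rename the alias and these variable names change with it.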
Validating Our Application from the Browser
Our application would have given errors if there was some problem running it or if the database connection failed. Since everything looks good, it’s time to finally access our web application through the browser.
But first, we’ll need to open port 80 (HTTP) in our instance’s security group so the virtual machine accepts inbound web traffic.
- Go back to the EC2 dashboard.
- Click on the security group link for the given instance. You should see the settings page for the security group.
- Click the “Inbound” tab at the bottom, and then click the “Edit” link. You should see that SSH is already added. Now we need to add HTTP to the list of inbound rules.
- Click “Add Rule.”
- Select HTTP from the dropdown menu and leave the default setting of port 80 for the Port Range field. Click “Save.”
- Pick up the URL to our instance from the Public DNS column and hit that URL from the browser. You should see the homepage of our fabulous application. You can validate it by creating some user accounts and signing in to the app.
So that’s it. We’ve managed to run our application on AWS inside isolated Docker containers. There were a lot of steps involved, but at the crux of it all, we really needed only two well-chosen docker run commands to containerize our application.
Published at DZone with permission of Vishal Kumar.