Deploying the Winds API to AWS ECS With Docker Compose
In this tutorial, we'll deploy the Winds API to AWS ECS with Docker Compose, covering tagging and pushing Docker images along the way.
Winds is a popular RSS and podcast application provided by Stream, a service that allows you to build news and activity feeds. Winds is 100% open source, and the backend is easy to install in a local environment or in the cloud, which is the task we'll cover in this tutorial. To make sure you make it through the tutorial, complete all of the prerequisites first.
Prerequisites
As with any tutorial, there are some requirements. For this post, make sure you have the following up and running before continuing. If you skip any of the requirements, you'll likely get hung up at some point, and we don't want that to happen.
- Amazon Web Services (AWS) account with Full Access to ECS and ElastiCache
- A fresh clone of Winds from https://github.com/GetStream/Winds
- An account with MongoDB Atlas or another MongoDB provider (we recommend MongoDB Atlas)
- A free account with Stream
- An instance of AWS ElastiCache setup and running Redis (copy the URI as you’ll need it shortly)
- A free API key from Mercury (this handles RSS article parsing, so it’s very important)
- A free set of credentials from Algolia
- AWS CLI installed on your machine
- ECS CLI installed in addition to AWS CLI
- An account on Docker Hub (you can use another provider if you like; however, I highly recommend sticking with Docker Hub)
One additional thing that I'd like to mention is that you should have the following permissions (or similar) on your AWS account:
- SecretsManagerReadWrite
- IAMFullAccess
- AmazonEC2ContainerRegistryFullAccess
- AmazonECS_FullAccess
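If your user is missing any of these, an account administrator can attach the AWS-managed policies with the AWS CLI. A minimal sketch, assuming your IAM username is `winds-deployer` (a hypothetical name; substitute your own):

```shell
# Hypothetical IAM username -- replace with your own.
IAM_USER="winds-deployer"

# Attach each AWS-managed policy listed above to the user.
for POLICY in SecretsManagerReadWrite IAMFullAccess \
  AmazonEC2ContainerRegistryFullAccess AmazonECS_FullAccess; do
  aws iam attach-user-policy \
    --user-name "$IAM_USER" \
    --policy-arn "arn:aws:iam::aws:policy/${POLICY}"
done
```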
Setting Up Dependencies
As we provided an exhaustive list above, hopefully you have had a chance to go through the various steps and copy your third-party URIs and credentials to move forward. The next step requires that we modify the docker-compose-aws file located in the /api directory of Winds.
The file will look like this when we start:
```yaml
version: '3.7'
services:
  api:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - '8080:8080'
    logging:
      driver: "json-file"
      options:
        max-size: "100MB"
        max-file: "3"
    environment:
      NODE_ENV: production
      DOCKER: 'true'
      PRODUCT_URL: https://getstream.io/winds
      PRODUCT_NAME: Winds
      PRODUCT_AUTHOR: Stream
      DATABASE_URI: mongodb://database/WINDS
      CACHE_URI: redis://cache:6379
      JWT_SECRET: INSERT_JWT_SECRET_HERE
      API_PORT: 8080
      STREAM_API_BASE_URL: https://windspersonalization.getstream.io/personalization/v1.0
      STREAM_APP_ID: INSERT_STREAM_API_ID_HERE
      STREAM_API_KEY: INSERT_STREAM_API_KEY_HERE
      STREAM_API_SECRET: INSERT_STREAM_API_SECRET_HERE
      ALGOLIA_WRITE_KEY: INSERT_ALGOLIA_WRITE_KEY_HERE
      MERCURY_KEY: INSERT_MERCURY_KEY_HERE
```
Fill in the credentials as instructed in the docker-compose-aws.yml file. Don't forget a random value for your JWT.
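For the JWT secret, any sufficiently long random string will do. One quick way to generate one, assuming `openssl` is available on your machine:

```shell
# Generate a 60-character random hex string to use as JWT_SECRET.
openssl rand -hex 30
```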
You should end up with a file that looks something like this:
```yaml
version: '3.7'
services:
  api:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - '8080:8080'
    logging:
      driver: "json-file"
      options:
        max-size: "100MB"
        max-file: "3"
    environment:
      NODE_ENV: production
      DOCKER: 'true'
      PRODUCT_URL: https://getstream.io/winds
      PRODUCT_NAME: Winds
      PRODUCT_AUTHOR: Stream
      DATABASE_URI: mongodb+srv://YOUR_USERNAME:YOUR_PASSWORD@production-z0pb8.mongodb.net/WINDS?retryWrites=true
      CACHE_URI: redis://winds.x3kxi3.0001.use1.cache.amazonaws.com:6379
      JWT_SECRET: bxRVJkuuMSvGw1dk8OQkFMxCVbxyOE
      API_PORT: 8080
      STREAM_API_BASE_URL: https://windspersonalization.getstream.io/personalization/v1.0
      STREAM_APP_ID: 12274
      STREAM_API_KEY: x8d94mnswb54
      STREAM_API_SECRET: btsuuncq93dsppnef3x8uwz76qdg7xuzcefn3jrfsqdknt4c5m9jqx5t2423aceecd
      ALGOLIA_WRITE_KEY: 2f67f0b715723442796eeb08be29f2bcd
      MERCURY_KEY: WNVuoBObRrvZRBG3SCsiAwN5dfdKIJ6x
```

Note that the `CACHE_URI` should use the `redis://` scheme and port, matching the template above.
Note: We're using the docker-compose-aws.yml file rather than a docker-compose.yml file because we have two docker-compose files inside the same directory. The "-aws" suffix makes it easy to specify which file we want to use when building the environment.
Getting up and Running With ECS CLI
The Elastic Container Service command line interface (ECS CLI) from Amazon Web Services provides high-level commands to simplify creating, updating, and monitoring clusters and tasks from a local development environment.
What's important here is that the ECS CLI supports Docker Compose files, which is what we've used to define how our application should run in the cloud. While the ECS CLI is built for multi-container applications, we're using a single-container application (defined in docker-compose-aws.yml) for the purposes of this tutorial.
Let's go ahead and configure the AWS ECS CLI so that we can get up and running. First, we'll create a "profile" using the following command:
ecs-cli configure profile --profile-name profile_name --access-key YOUR_AWS_ACCESS_KEY_ID --secret-key YOUR_AWS_SECRET_ACCESS_KEY
Next, we'll complete the configuration with the following command:
ecs-cli configure --cluster winds-api --default-launch-type EC2 --region us-east-1 --config-name winds-api
Note: In the command above, EC2 is the default launch type, us-east-1 is the AWS region, and winds-api is used both as the cluster name (an existing Amazon ECS cluster or a new one to create) and as the configuration name. Substitute your own values as needed.
Creating a Cluster With an EC2 Task
AWS ECS needs permissions so that your EC2 task can store logs in CloudWatch. This permission is covered by the task execution IAM role. For that, we'll need to create a task execution IAM role using the AWS CLI.
1. Create a file named task-execution-assume-role.json with the following contents:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": "ecs-tasks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```
2. Create the task execution role (in the same directory as task-execution-assume-role.json):
aws iam --region us-east-1 create-role --role-name ecsTaskExecutionRole --assume-role-policy-document file://task-execution-assume-role.json
3. Attach the task execution role policy:
aws iam --region us-east-1 attach-role-policy --role-name ecsTaskExecutionRole --policy-arn arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy
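To confirm the role was created and the policy attached, you can list the role's attached policies. This queries your AWS account, so the exact output will vary:

```shell
# Should list AmazonECSTaskExecutionRolePolicy for the role we just created.
aws iam list-attached-role-policies \
  --role-name ecsTaskExecutionRole \
  --region us-east-1
```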
Create a Cluster and Security Group
Next, we'll create an Amazon ECS cluster with security groups.
1. We've specified EC2 as the default launch type in the cluster configuration, so the following command creates an empty cluster and a VPC configured with two public subnets:
ecs-cli up --capability-iam
Note: This command may take a few minutes to complete while resources are created. Take note of the VPC and subnet IDs in the output; we'll be using them shortly.
2. Using the AWS CLI, create a security group using the VPC value from the output in the previous command:
aws ec2 create-security-group --group-name "YOUR_WINDS_API_SECURITY_GROUP" --description "winds-api" --vpc-id "YOUR_VPC_ID"
3. Using the AWS CLI, add a security group rule to allow inbound access on port 8080 (the port the API is published on):
aws ec2 authorize-security-group-ingress --group-id "YOUR_SECURITY_GROUP_ID" --protocol tcp --port 8080 --cidr 0.0.0.0/0
Specify Parameters for AWS ECS
In addition to the docker-compose-aws.yml file that we've created for you, you'll need to create an ecs-params.yml file with the following contents:
```yaml
version: 1
task_definition:
  task_execution_role: ecsTaskExecutionRole
  ecs_network_mode: awsvpc
  task_size:
    mem_limit: 0.5GB
    cpu_limit: 256
run_params:
  network_configuration:
    awsvpc_configuration:
      subnets:
        - "YOUR_SUBNET_ID_1"
        - "YOUR_SUBNET_ID_2"
      security_groups:
        - "YOUR_SECURITY_GROUP_ID"
      assign_public_ip: ENABLED
```
Note: This params file is specific to AWS ECS and is required if you want to run the Winds API on AWS. The values you need to specify can be found in the output of the commands you ran above.
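If you've lost track of those IDs, you can look them up again with the AWS CLI. A sketch, assuming `YOUR_VPC_ID` is the VPC created by `ecs-cli up`:

```shell
# VPC ID from the `ecs-cli up` output -- replace with yours.
VPC_ID="YOUR_VPC_ID"

# The two public subnets created for the cluster's VPC.
aws ec2 describe-subnets \
  --filters "Name=vpc-id,Values=${VPC_ID}" \
  --query "Subnets[].SubnetId" \
  --output text

# The security group we created for the Winds API.
aws ec2 describe-security-groups \
  --filters "Name=vpc-id,Values=${VPC_ID}" \
  --query "SecurityGroups[].[GroupId,GroupName]" \
  --output text
```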
Deploying Image to Docker Hub
In this section, we'll outline how to build, tag, and upload the Winds API to Docker and AWS.
Creating and Uploading
For this step, you'll need to log in to Docker Hub with the following command:
docker login
Then, run the following command to build the Docker image (you must be inside of the /api directory):
docker build . -t winds
Tagging and Pushing
To start, you'll need the Docker image ID. Run docker image list to output all of your Docker images, grab the ID of the one tagged as "winds", and drop it into the command below.
The command you'll need to properly tag the image is:
docker tag <YOUR_DOCKER_IMAGE_ID> <YOUR_DOCKER_USERNAME>/winds
Now, it's time to push the tagged image to Docker Hub. You can do that with the following:
docker push <YOUR_DOCKER_USERNAME>/winds
The push will take a little while depending on your connection; once it's complete, the image will be available on Docker Hub.
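As an aside, you can skip the separate `docker tag` step entirely by tagging the image at build time:

```shell
# Build and tag in one step, then push straight to Docker Hub.
docker build . -t <YOUR_DOCKER_USERNAME>/winds
docker push <YOUR_DOCKER_USERNAME>/winds
```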
Deploying to a Cluster
Now that we have our files and infrastructure configured, we can go ahead and deploy the Docker compose file to ECS with the following command:
Note: By default, the command looks for a file called docker-compose.yml in the current directory; because we have two files, we need to specify a different Docker Compose file with the --file option (or -f for short).
ecs-cli compose --file docker-compose-aws.yml --project-name winds up
If all went well, you should see the running task in your ECS console. If you click on the task, you will notice a public IP address that will allow you to view the API (it should respond with "pong").
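You can also check on the deployment from the terminal rather than the console. Assuming the deploy above succeeded, something like the following will list the running containers and hit the API (replace the hypothetical IP placeholder with the one from your task):

```shell
# List the running containers for the project, including their ports.
ecs-cli compose --file docker-compose-aws.yml --project-name winds ps

# Hypothetical public IP from the task details -- replace with yours.
curl http://YOUR_TASK_PUBLIC_IP:8080/
```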
Done!
I hope that you enjoyed this tutorial on how to deploy Winds to AWS using Docker. In future posts, I'll outline how to do the same deployment on Google Cloud and DigitalOcean.
If you're interested in deploying the front-end, check out this post, which outlines how to do so using AWS S3 and CloudFront.
Happy coding!
Published at DZone with permission of Nick Parsons, DZone MVB.