AWS Velocity Series: Running Your Application
The best options for running an application on AWS are highly available, scalable, secure, and provide frictionless deployment and operations.
Most of our clients use AWS to reduce time-to-market following an Agile approach. However, AWS is only one part of the solution. In this article series, I show you how we help our clients to improve velocity: the time from idea to production.
Running Your Application
There are many options when it comes to running an application on AWS: EC2-based, containerized, or serverless. Choosing the best option for your specific use case is important.
All options that I present are what I call production-ready:
- Highly available: No single point of failure.
- Scalable: Increase or decrease the number of instances based on load.
- Frictionless deployment: Deliver new versions of your application automatically without downtime.
- Secure: Operating systems and libraries are patched frequently, and the principle of least privilege is followed in all areas.
- Operations: Provide tools like logging, monitoring, and alerting to recognize and debug problems.
Let’s start by introducing the available options. After that, I present a comparison table.
An EC2-based app runs directly on a virtual machine: the EC2 instance. You can choose a flavor of Windows or Linux as your operating system. You get root access to the virtual machine so there are no limitations in what you can install and configure. But keep in mind that you are also responsible for the operating system and all installed software. Patching is your job.
By default, a single EC2 instance cannot guarantee high availability. That’s why you need more than one EC2 instance. An Auto Scaling Group can manage such a fleet of EC2 instances for you. And as the name implies, the Auto Scaling Group is also a building block when it comes to scalability. To provide a stable endpoint for your users, you also need a load balancer in front of your dynamic fleet of EC2 instances.
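The building blocks described above can be sketched as a CloudFormation template assembled in Python. This is a minimal illustration, not a deployable stack: the subnet IDs, AMI ID, and instance type are placeholders.

```python
import json

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        # The load balancer provides the stable endpoint for users.
        "LoadBalancer": {
            "Type": "AWS::ElasticLoadBalancing::LoadBalancer",
            "Properties": {
                "Listeners": [{"LoadBalancerPort": "80",
                               "InstancePort": "80",
                               "Protocol": "HTTP"}],
                "Subnets": ["subnet-11111111", "subnet-22222222"],
            },
        },
        "LaunchConfiguration": {
            "Type": "AWS::AutoScaling::LaunchConfiguration",
            "Properties": {"ImageId": "ami-12345678",
                           "InstanceType": "t2.micro"},
        },
        # The Auto Scaling Group keeps at least two instances running,
        # spread across two Availability Zones: no single point of failure.
        "AutoScalingGroup": {
            "Type": "AWS::AutoScaling::AutoScalingGroup",
            "Properties": {
                "MinSize": "2",
                "MaxSize": "4",
                "LaunchConfigurationName": {"Ref": "LaunchConfiguration"},
                "VPCZoneIdentifier": ["subnet-11111111", "subnet-22222222"],
                "LoadBalancerNames": [{"Ref": "LoadBalancer"}],
            },
        },
    },
}

print(json.dumps(template, indent=2))
```

The key point is `MinSize: 2` together with subnets in two Availability Zones: the fleet survives the loss of a single instance or a single zone.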
The way you deploy software is not defined by AWS. You can download your software during the start of the virtual machine, create your own AMIs with your software baked in, or use configuration management tools to install what you need. Again, many choices but also many responsibilities.
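The first deployment option mentioned above can be sketched as follows: generate an EC2 user-data script that fetches the application from S3 during instance start. The bucket and artifact names are hypothetical.

```python
# Build an EC2 user-data script (runs as root on first boot) that
# downloads a build artifact from S3 and starts the application.
def build_user_data(bucket: str, artifact: str) -> str:
    return "\n".join([
        "#!/bin/bash -e",
        # Fetch the latest build artifact during instance start.
        f"aws s3 cp s3://{bucket}/{artifact} /opt/app.zip",
        "unzip -o /opt/app.zip -d /opt/app",
        "/opt/app/start.sh",
    ])

user_data = build_user_data("my-artifact-bucket", "app-v42.zip")
print(user_data)
```

Because the script runs on every instance start, a new deployment is as simple as uploading a new artifact and replacing the instances, but the patching and error handling remain your responsibility.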
You may miss Elastic Beanstalk or OpsWorks here. The last four years of using AWS in many client projects have shown that those services come with too many limitations for running existing apps. This is different if the app is built from scratch. But if an app is built from scratch, I would suggest a serverless approach!
A containerized app (Docker) can run on ECS: the container cluster service from AWS. ECS runs on top of EC2 instances that you have to manage. ECS makes it very easy to schedule containers in an intelligent way (zone-aware) and also restarts failed containers. ECS comes with a nice integration with the load balancer. You can run all your applications on a single ECS cluster, which leads to better utilization of the underlying hardware compared to running directly on EC2.
To deploy software, you create a Docker image and push it to a Docker repository; ECS takes care of the rest. You are responsible only for publishing the image.
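The ECS side of that workflow can be sketched as a task definition pointing at your pushed image. The image URI, names, and port numbers below are placeholders for illustration.

```python
# An ECS task definition: it tells ECS which image to run and how.
task_definition = {
    "family": "my-app",
    "containerDefinitions": [{
        "name": "my-app",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:42",
        "memory": 256,
        "essential": True,
        # hostPort 0 requests a dynamic host port, which the load
        # balancer integration uses to route traffic to the container.
        "portMappings": [{"containerPort": 8080, "hostPort": 0}],
    }],
}

# With real AWS credentials you would hand this to ECS, e.g.:
#   import boto3
#   boto3.client("ecs").register_task_definition(**task_definition)
print(task_definition["containerDefinitions"][0]["image"])
```

Deploying a new version then means pushing a new image tag, registering a task definition that references it, and updating the ECS service; ECS performs the rolling replacement.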
A serverless app (e.g., API Gateway and Lambda) is operated completely by AWS. You upload your source code, and AWS will run it for you. The underlying compute infrastructure is abstracted away and you don’t have access to it any longer (i.e., no SSH possible). AWS provides you access to the logs your application generates and some metrics that you can use to debug problems in your application.
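In the serverless model, the source code you upload is little more than a handler function. A minimal sketch of what API Gateway + Lambda would invoke (the event shape shown is the API Gateway proxy format; the greeting logic is made up for illustration):

```python
import json

def handler(event, context):
    # `event` carries the HTTP request as passed in by API Gateway.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello {name}"}),
    }

# Local invocation for illustration; on AWS, Lambda calls `handler` for you.
response = handler({"queryStringParameters": {"name": "AWS"}}, None)
print(response["body"])
```

Everything below the handler (fleet, scaling, patching, process supervision) is AWS’s problem, which is exactly what makes this option attractive from a velocity perspective.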
When comparing the available options, you can take different perspectives. In this article series, we look at AWS from the velocity perspective. When your goal is to deliver fast, it is important to minimize the work that is not related to your goal: getting the application running in production. The table shows your responsibilities.
| Responsibility | EC2-based app | Containerized app | Serverless app |
|---|---|---|---|
| Runtime (e.g., Node.js, JVM) | you | you | AWS |
| App server (e.g., Express, Apache, Nginx) | you | you | AWS |
| Logging | you & AWS | you & AWS | AWS |
| High availability | you & AWS | you & AWS | AWS |
| Scalability | you & AWS | you & AWS | AWS |
| Alerting | you & AWS | you & AWS | you & AWS |
| App source code | you | you | you |
It’s also important to look at the limitations:
| Option | Limitations |
|---|---|
| EC2-based app | Linux or Windows |
| Serverless app | Node.js, Java/JVM, Python, or C#; max. 5 minute execution time |
Given the limitations I mentioned, I recommend that you pick the solution that minimizes your responsibilities and still fits your requirements. This should be an excellent starting point to achieve AWS Velocity.
The series continues with a deep dive into the three available options to deploy your application to AWS.
Published at DZone with permission of Michael Wittig, DZone MVB. See the original article here.