Securing AWS Environments and Infrastructure

Step-by-step guides to securing VPCs, dev, and prod environments with Skweez.Me and AWS.

Two problems every business owner faces are minimizing technology costs and maintaining customer security. If you are the developer and/or DevOps person (or people) for a small business, this poses unique challenges. Do you choose the cloud or a traditional datacenter? How do you ensure uptime, availability, and security without sacrificing your sanity, the business's money, or your customers' safety? These are questions I had to answer when I built Skweez.Me.

In a previous DZone article, I wrote about the differences in security between a traditional datacenter and Amazon Web Services (AWS). In a follow-up article, I wrote about creating an automated infrastructure within AWS for your budding business, one that lets me personally sleep soundly at night without breaking the bank. In this article, I'll break down how to do both, step by step, as I did with Skweez.Me.

VPC Infrastructure

Securing a VPC in AWS is nearly painless, but it requires some groundwork to separate your public and private subnets. To save money, and to make VPC selection simple for your future instances, I recommend modifying the default VPC, splitting each availability zone into a public subnet and a private subnet. How you split them is entirely up to you; I divided the two availability zones that I use into "/21" subnets, which yields plenty of available IPs:

[Image: the default VPC split into public and private /21 subnets across two availability zones]
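
If you'd rather script this than click through the console, a minimal boto3 sketch of the subnet split might look like the following. The VPC ID, CIDR blocks, and availability zones are illustrative placeholders, and the ranges assume they don't overlap any subnets that already exist in your VPC:

```python
# Hypothetical sketch: carve the default VPC into /21 public and private
# subnets across two availability zones. The VPC ID, CIDR blocks, and AZs
# are placeholders; adjust them to your own VPC's address space (and make
# sure the ranges don't collide with existing subnets).
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

VPC_ID = "vpc-0123456789abcdef0"  # your default VPC

layout = [
    # (name,       CIDR block,       availability zone)
    ("public-a",  "172.31.0.0/21",  "us-east-1a"),
    ("private-a", "172.31.8.0/21",  "us-east-1a"),
    ("public-b",  "172.31.16.0/21", "us-east-1b"),
    ("private-b", "172.31.24.0/21", "us-east-1b"),
]

for name, cidr, az in layout:
    resp = ec2.create_subnet(VpcId=VPC_ID, CidrBlock=cidr, AvailabilityZone=az)
    subnet_id = resp["Subnet"]["SubnetId"]
    ec2.create_tags(Resources=[subnet_id], Tags=[{"Key": "Name", "Value": name}])
    print(f"{name}: {subnet_id}")
```

Keep in mind that what actually makes a subnet "public" is a route table entry pointing at an internet gateway; the private subnets will instead route outbound traffic through the NAT gateways described below.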

I also set up a subnet group for my RDS DBs to ensure that database instances are only allocated in my private subnets:

[Image: RDS subnet group containing only the private subnets]
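
Scripted, the subnet group is a single call. A hedged boto3 sketch, with placeholder subnet IDs and a made-up group name:

```python
# Hypothetical sketch: an RDS subnet group containing only the private
# subnets, so database instances can only be placed there. Subnet IDs and
# the group name are placeholders.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_subnet_group(
    DBSubnetGroupName="private-only",
    DBSubnetGroupDescription="Private subnets only; keeps RDS off the public subnets",
    SubnetIds=["subnet-0aaaa1111bbbb2222c", "subnet-0cccc3333dddd4444e"],  # private-a, private-b
)
```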

Next, you'll need a way for your private subnet instances to access the internet. This is accomplished using NAT, either with AWS-managed NAT gateways or with your own NAT instances.

[Image: NAT gateways, one per public subnet]

You will need a NAT gateway in each public subnet that serves a private subnet to which you are granting outbound communication. In other words, if you want outbound communication to work from your "Private A" subnet, you will need to create a NAT gateway in "Public A" and update the routing table appropriately.
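
For reference, here is roughly what that looks like via the API: a boto3 sketch that allocates an Elastic IP, creates the NAT gateway in "Public A," and adds the default route to the "Private A" route table. The subnet and route table IDs are placeholders:

```python
# Hypothetical sketch: a NAT gateway in "Public A" plus a default route from
# the "Private A" route table through it. Repeat for each public/private
# subnet pair. All IDs are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

PUBLIC_A_SUBNET = "subnet-0aaaa1111bbbb2222c"
PRIVATE_A_ROUTE_TABLE = "rtb-0123456789abcdef0"

# Allocate an Elastic IP and attach a NAT gateway to the public subnet.
eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(SubnetId=PUBLIC_A_SUBNET, AllocationId=eip["AllocationId"])
nat_id = nat["NatGateway"]["NatGatewayId"]

# Wait until the gateway is available before adding the route.
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

# Send the private subnet's outbound traffic through the NAT gateway.
ec2.create_route(
    RouteTableId=PRIVATE_A_ROUTE_TABLE,
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_id,
)
```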

The easiest way to do this, by far, is to create NAT gateways directly through the VPC console. When you do, AWS automatically allocates and assigns an Elastic IP for each new NAT gateway. The alternative is to run your own NAT instances. The pros and cons of each:


EC2 NAT Instances

Setup required:
  • Security group
  • Instances (one per subnet)
  • Elastic IPs (one per instance)
  • Instance modification to disable source/dest. verification
  • Launch config
  • Auto scaling group
  • Routing table updates (one per subnet)

Risks:
  • Scaling instances down/up means manually modifying each new instance to disable source/dest. verification
  • Scaling instances down/up means reassigning Elastic IPs (and an unassigned Elastic IP's time costs money)
  • Instances must be kept up to date with security patches
  • Not suitable at scale

Cost: as little as $10/mo per subnet

VPC NAT Gateways

Setup required:
  • VPC NAT gateways (one per subnet)
  • Routing table updates (one per subnet)

Risks:
  • Another AWS black box; no ability to control instance type or scaling rules
  • Potentially expensive at low scale
  • Deleting a gateway will not clean up its associated Elastic IP (unassigned time costs money)

Cost: as little as $30/mo per subnet

My recommendation is to manage your own NAT instances until you reach scale and profitability, at which point the built-in NAT gateways will be both more efficient and more cost-effective. Migrating from NAT instances to the built-in gateways is also very easy to do.

Dev Infrastructure

It’s a reasonable assumption that you’re going to start by building out a development environment, one where you can make updates to code and database without impacting the production environment.

I chose small instance types for both the RDS DB and the (single) EC2 instance. The dev RDS DB has no automated backups or point-in-time snapshots and lives in a single availability zone. When I'm done with the DB and EC2 instance, I shut them down, and if I've made any changes to the DB, I take a final snapshot. The next time I do development, I start the dev DB from the latest snapshot. The only annoyance is that the security group cannot be set during a snapshot restore; it can only be changed on a running instance, so you end up starting the DB and then modifying it to apply the appropriate security group.
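
Sketched in boto3, the dev DB lifecycle looks roughly like this. Instance identifiers, snapshot names, the instance class, and the security group ID are all placeholders:

```python
# Hypothetical sketch of the dev DB lifecycle: restore from the latest
# snapshot, swap in the dev security group (only possible on a running
# instance), and take a final snapshot when deleting the instance at the
# end of the session. All names and IDs are placeholders.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Start of a dev session: restore the most recent snapshot.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="skweez-dev",
    DBSnapshotIdentifier="skweez-dev-final-20240101",
    DBInstanceClass="db.t3.small",
    DBSubnetGroupName="private-only",   # the RDS subnet group from earlier
)
rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier="skweez-dev")

# The restore ignores security groups, so apply the dev group afterwards.
rds.modify_db_instance(
    DBInstanceIdentifier="skweez-dev",
    VpcSecurityGroupIds=["sg-0123456789abcdef0"],
    ApplyImmediately=True,
)

# End of the session: delete the instance, capturing any changes in a final snapshot.
rds.delete_db_instance(
    DBInstanceIdentifier="skweez-dev",
    FinalDBSnapshotIdentifier="skweez-dev-final-20240102",
)
```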

The dev DB and EC2 instance have a different set of security requirements from the prod environment:

Dev EC2
  • Subnet: Public
  • Public IP: Yes
  • Access via security group: 80/443 from the world; 22 from the public/private subnets

Dev RDS
  • Subnet: RDS Private
  • Public IP: No
  • Access via security group: public/private subnets

Prod EC2
  • Subnet: Private
  • Public IP: No
  • Access via security group: public subnets (alternatively, the ELB security group; your choice)

Prod RDS
  • Subnet: RDS Private
  • Public IP: No
  • Access via security group: private subnets


As a result, subnets for the EC2 instances and the security groups for the RDS DBs differ between environments.
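
As a concrete example, the dev EC2 rules from the list above could be expressed like this in boto3. The group ID is a placeholder, and the VPC CIDR is assumed to cover both the public and private subnets:

```python
# Hypothetical sketch of the dev EC2 security group: 80/443 open to the
# world, 22 restricted to the VPC's own subnets. Group ID and CIDR are
# placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

DEV_EC2_SG = "sg-0123456789abcdef0"
VPC_CIDR = "172.31.0.0/16"  # covers both the public and private subnets

ec2.authorize_security_group_ingress(
    GroupId=DEV_EC2_SG,
    IpPermissions=[
        {"IpProtocol": "tcp", "FromPort": 80,  "ToPort": 80,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
        {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
        {"IpProtocol": "tcp", "FromPort": 22,  "ToPort": 22,
         "IpRanges": [{"CidrIp": VPC_CIDR}]},
    ],
)
```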

You can also choose to create a dev ELB. The tradeoff is that if you use an ELB, you'll need to associate the dev instance with the ELB each time. If you instead use a public dev instance (with a public IP), you'll likely want your DNS provider (mine is Route53) to point a record at the dev instance's public IP. The differences here are small, and the results are the same in terms of cost and effort.
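
If you go the public-dev-instance route, updating the DNS record can also be scripted. A hedged Route53 sketch, with a placeholder hosted zone, record name, and IP:

```python
# Hypothetical sketch: point a dev hostname at the dev instance's current
# public IP via Route53. Hosted zone ID, record name, and IP are placeholders.
import boto3

r53 = boto3.client("route53")

r53.change_resource_record_sets(
    HostedZoneId="Z0123456789ABCDEFGHIJ",
    ChangeBatch={
        "Comment": "Point dev hostname at the current dev instance",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "dev.example.com",
                "Type": "A",
                "TTL": 60,
                "ResourceRecords": [{"Value": "203.0.113.10"}],
            },
        }],
    },
)
```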

Prod Infrastructure

Here I use a traditional EC2 setup, with an ELB hooked up to an auto scaling group. As stated above, the EC2 instances are all on the private subnets and the ELB is on the public subnets, so both routing and access controls ensure that outsiders cannot communicate directly with my web nodes or database.

The auto scaling group keeps the service highly available, maintaining at least two nodes at any given time. To ensure that code and settings are always up to date, even during scale events, I first deploy code to a build box, cut an AMI from it, create a new launch config, and finally update the auto scaling group.
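
A rough boto3 sketch of that deployment flow, with placeholder instance IDs, names, and instance type:

```python
# Hypothetical sketch of the deployment flow: cut an AMI from the build box,
# create a new launch configuration from it, and point the auto scaling
# group at it. All names and IDs are placeholders.
import time
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
asg = boto3.client("autoscaling", region_name="us-east-1")

version = time.strftime("%Y%m%d-%H%M")

# 1. Cut an AMI from the build box with the freshly deployed code.
image = ec2.create_image(InstanceId="i-0123456789abcdef0", Name=f"web-{version}")
ec2.get_waiter("image_available").wait(ImageIds=[image["ImageId"]])

# 2. Create a launch configuration that boots from the new AMI.
asg.create_launch_configuration(
    LaunchConfigurationName=f"web-{version}",
    ImageId=image["ImageId"],
    InstanceType="t3.small",
    SecurityGroups=["sg-0123456789abcdef0"],
)

# 3. Point the auto scaling group at the new launch configuration so that
#    scale events (and instance replacement) use the latest code.
asg.update_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchConfigurationName=f"web-{version}",
    MinSize=2,
)
```

(Newer setups typically use launch templates instead of launch configurations, but the flow is the same.)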

The database is multi-AZ and is backed up in full every day (with a five-day rolling retention). A DB outage therefore triggers automated failover, and data loss is preventable through both point-in-time recovery (to within roughly five minutes) and the daily full snapshots.
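
Those two settings boil down to a single RDS modification. A minimal sketch, with a placeholder instance identifier:

```python
# Hypothetical sketch of the prod RDS settings: multi-AZ for automated
# failover and a five-day backup retention window, which also enables
# point-in-time recovery. The instance identifier is a placeholder.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.modify_db_instance(
    DBInstanceIdentifier="skweez-prod",
    MultiAZ=True,
    BackupRetentionPeriod=5,   # days of rolling full backups
    ApplyImmediately=True,
)
```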

Infrastructure Overview

Putting it all together, the final infrastructure looks like this:

[Image: overview of the complete VPC, dev, and prod infrastructure]

