My AWS Experience

I began working with Amazon Web Services in 2009, three years after its launch. At the time I was (and still am) transitioning from a traditional server background, applying that knowledge directly to Amazon and thinking in terms of bare-bones minimal installs and traditional bottlenecks. It wasn’t until very recently that I stepped up my AWS game and actually familiarized myself with building out secure, easily scalable infrastructure.

The driving force behind this self-education was our most recent Innovation Day at DataXu, Innovation Day X, the 10th installment of our semiannual, day-long, company-wide think-tank (try saying that ten times fast). During my first Innovation Day at DataXu two years ago, I created a search engine for RFPs called RFP Gold, detailed in the first article I published here on DZone. This year, we (a team of five, including myself) began planning something far more complex well in advance of the competition, which kicked off this past Thursday. Our project required public-facing infrastructure in AWS, secure infrastructure internal to one of our datacenters, and integration with our best-in-class business intelligence and visualization suite. One teammate and I were entirely responsible for building the solution, including the infrastructure and software, so we started developing on Friday and, forgoing sleep, kept going until the first round of presentations Thursday afternoon.

Easy Analytics Infrastructure

[Architecture diagram: in it, I was responsible for the box marked “AWS.”]


It was serendipity that, around the same time, I was handed the keys to a new AWS account intended to be customer-facing, aligning perfectly with our need to create a self-service product without exposing our secure systems directly. I had never created a VPC (Virtual Private Cloud), nor gone beyond creating EC2 instances and Elastic Load Balancers, and while I wasn’t daunted by the task, my intuition told me it would take some time.

I had to begin by creating a new VPC, along with its associated subnets, route tables, Internet gateway, and so on. This turned out to be far easier than the same setup in a conventional server environment: 15 minutes and a couple of tries later, I went from nothing to finished. My next challenge was getting RDS set up, and once again I was pleasantly surprised at how quickly I could create a multi-availability-zone database, in this case running PostgreSQL (my favorite).
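For anyone curious what that quarter hour looks like in code rather than in the console, here is a rough sketch using boto3, the AWS SDK for Python. Every name, CIDR block, region, and instance size below is illustrative, not what we actually used:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # A fresh VPC with two subnets in different availability zones.
    vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]
    subnet_a = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24",
                                 AvailabilityZone="us-east-1a")["Subnet"]["SubnetId"]
    subnet_b = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.2.0/24",
                                 AvailabilityZone="us-east-1b")["Subnet"]["SubnetId"]

    # Internet gateway, plus a route table with a default route out of it.
    igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
    ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)
    rt_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
    ec2.create_route(RouteTableId=rt_id, DestinationCidrBlock="0.0.0.0/0",
                     GatewayId=igw_id)
    for subnet_id in (subnet_a, subnet_b):
        ec2.associate_route_table(RouteTableId=rt_id, SubnetId=subnet_id)

    # A multi-AZ PostgreSQL instance needs a subnet group spanning two AZs.
    rds = boto3.client("rds", region_name="us-east-1")
    rds.create_db_subnet_group(
        DBSubnetGroupName="demo-db-subnets",
        DBSubnetGroupDescription="Subnets for the demo database",
        SubnetIds=[subnet_a, subnet_b],
    )
    rds.create_db_instance(
        DBInstanceIdentifier="demo-postgres",
        Engine="postgres",
        DBInstanceClass="db.t2.medium",
        AllocatedStorage=20,
        MasterUsername="master",
        MasterUserPassword="change-me",  # placeholder credential
        MultiAZ=True,
        DBSubnetGroupName="demo-db-subnets",
    )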

There are some pitfalls with RDS, the biggest of which is that you get what is essentially a root account that nonetheless lacks true superuser privileges. This was confusing at first, and it took some manipulation to get the levels of access my front-end and back-end systems required without exposing the root account. As you can see in the diagram above, the architecture uses S3 and RDS for inter-datacenter communication, with tightly limited access for security. Creating IAM roles and users, and then the S3 bucket policy, followed.
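A sketch of the workaround, assuming a PostgreSQL RDS instance with a hypothetical endpoint, credentials, and role names: use the master account once to mint narrowly scoped roles for each system, then keep the master credentials out of circulation:

    import psycopg2

    # Connect as the RDS master account. It cannot do everything a real
    # PostgreSQL superuser can, but it can create limited roles.
    conn = psycopg2.connect(
        host="demo-postgres.example.us-east-1.rds.amazonaws.com",
        dbname="appdb", user="master", password="change-me",
    )
    conn.autocommit = True
    with conn.cursor() as cur:
        # Read/write role for the back-end system.
        cur.execute("CREATE ROLE backend LOGIN PASSWORD 'backend-secret'")
        cur.execute("GRANT SELECT, INSERT, UPDATE ON ALL TABLES "
                    "IN SCHEMA public TO backend")
        # Read-only role for the front end.
        cur.execute("CREATE ROLE frontend LOGIN PASSWORD 'frontend-secret'")
        cur.execute("GRANT SELECT ON ALL TABLES IN SCHEMA public TO frontend")
    conn.close()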

The S3 bucket policy is admittedly a little confusing. It’s one of the few pieces of AWS that doesn’t have a slick UI with fields that match your expectations. There is a policy generator tool, but it fell short of my goals, so I ended up amending and appending to the generated policy by hand. Luckily this wasn’t my first time creating IAM roles and users, nor an S3 bucket policy, so the process went smoothly, with only a couple of perplexing moments. By this stage of development every infrastructure component had specific access controls, and thanks to RDS we no longer needed to worry about database availability or redundancy.
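For illustration, here is roughly what such a bucket policy looks like when applied with boto3. The account ID, role, bucket name, and prefix are all made up for the example:

    import json
    import boto3

    s3 = boto3.client("s3")

    # Grant a single IAM role read/write access to one prefix of the
    # exchange bucket; everything else stays locked down.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "AllowExchangeRoleOnly",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456789012:role/exchange-role"},
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::demo-exchange-bucket/exchange/*",
        }],
    }
    s3.put_bucket_policy(Bucket="demo-exchange-bucket",
                         Policy=json.dumps(policy))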

With security, data redundancy, and access out of the way, I could focus on availability. Scale isn’t a requirement for a prototype, but with a suite of services like AWS, the ability to scale gets baked into the product along with availability. The Elastic Load Balancer needs to be set up independently of the rest of the project, and an SSL certificate installed on it if you want to offload SSL. As a side note, I highly recommend this: otherwise every private machine behind the load balancer needs SSL configured, which adds overhead and makes automatic scaling slightly more complicated. Once this was set up, with a build machine behind it for testing, I began installing and tweaking the system dependencies (built on Amazon Linux) and testing my code.
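Here is an illustrative boto3 sketch of a classic (2015-era) ELB terminating SSL, with a plain-HTTP health check against the instances behind it. The certificate ARN, subnets, security group, and health endpoint are placeholders:

    import boto3

    elb = boto3.client("elb", region_name="us-east-1")

    # HTTPS terminates at the balancer; traffic to the instances behind
    # it is plain HTTP, so only the ELB needs the certificate.
    elb.create_load_balancer(
        LoadBalancerName="demo-web-elb",
        Listeners=[{
            "Protocol": "HTTPS",
            "LoadBalancerPort": 443,
            "InstanceProtocol": "HTTP",
            "InstancePort": 80,
            "SSLCertificateId": "arn:aws:iam::123456789012:"
                                "server-certificate/demo-cert",
        }],
        Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],
        SecurityGroups=["sg-0123abcd"],
    )
    elb.configure_health_check(
        LoadBalancerName="demo-web-elb",
        HealthCheck={
            "Target": "HTTP:80/health",  # hypothetical health endpoint
            "Interval": 30,
            "Timeout": 5,
            "UnhealthyThreshold": 2,
            "HealthyThreshold": 2,
        },
    )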

I went through around ten revisions of my AMI on a small build machine before I started playing with launch configurations and auto-scaling groups. I asked for a couple of instances, guessed at the health check grace period and cooldown times, and, in my haste, completely missed some necessary settings related to Elastic Load Balancing; thanks to my inexperience, I botched my first attempt rather spectacularly. Suddenly machines were being shut down and booted up every five minutes. Our testing came to a grinding halt while I pieced together my mistakes, and in the process I learned a lot. With that knowledge I corrected my mistakes quickly, and, keeping in mind a valuable lesson from our internal AWS guru (scale up fast, and down slowly), I was able to put together a plan for future scaling. With all of the components in place, I was free to embed my solution, providing a seamless demo.
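To make the moving parts concrete, here is a hedged boto3 sketch of a launch configuration and auto-scaling group wired to that ELB, including a "scale up fast, down slowly" policy pair. Every identifier and threshold below is illustrative rather than what we actually ran:

    import boto3

    autoscaling = boto3.client("autoscaling", region_name="us-east-1")

    # Launch configuration built from the baked AMI (placeholder ID).
    autoscaling.create_launch_configuration(
        LaunchConfigurationName="demo-web-lc-v10",
        ImageId="ami-0123456789abcdef0",
        InstanceType="t2.small",
        SecurityGroups=["sg-0123abcd"],
    )

    # Tie the group to the ELB so instances are judged by the load
    # balancer's health check, with a grace period to let them boot.
    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="demo-web-asg",
        LaunchConfigurationName="demo-web-lc-v10",
        MinSize=2,
        MaxSize=6,
        DesiredCapacity=2,
        VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",
        LoadBalancerNames=["demo-web-elb"],
        HealthCheckType="ELB",
        HealthCheckGracePeriod=300,
        DefaultCooldown=300,
    )

    # Scale up fast, down slowly: add two instances with a short
    # cooldown, remove one instance with a long cooldown.
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="demo-web-asg",
        PolicyName="scale-up-fast",
        AdjustmentType="ChangeInCapacity",
        ScalingAdjustment=2,
        Cooldown=60,
    )
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="demo-web-asg",
        PolicyName="scale-down-slowly",
        AdjustmentType="ChangeInCapacity",
        ScalingAdjustment=-1,
        Cooldown=600,
    )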

There are still a ton of gaps in our prototype, but after presenting to a group of judges on Thursday, we advanced to the company finals on Friday, where we took home 2nd place! Through this project I gained first-hand experience in areas of AWS I had not had time to explore, and found it more user-friendly and quicker to pick up than I expected while remaining incredibly powerful. I’m not sure how other IaaS providers stack up (the 2015 Gartner Magic Quadrant for IaaS still has AWS as the front-runner among the leaders), but AWS surpassed my expectations, and in doing so convinced me of the future of IaaS as a business for businesses.
