
A Real-Life Story of Shifting to the Cloud: Part 1



The content of this article was originally written by Ming Lee over at A Cloudy Place.

My company has made a fundamental shift towards cloud computing and specifically towards Amazon Web Services (AWS) over the last 12 months.

Why did we do it? At the technical level it was always something we wanted to do, but, like all projects, it needed strategic buy-in. The potential cost savings (as reported in the media), as well as the flexible nature of the cloud, meant it would have been foolish not to try it, if only to compare it with a ‘traditional’ hosting environment. At the strategic level, our parent company had already migrated some of their main-line hosting applications to the cloud via AWS, so a path was already set out (albeit with differences in application and customers).

The process of migrating to the cloud involved a number of distinct steps:

1. Planning

2. Test Deployment

3. Parallel Running

4. Go Live

5. Update Documentation and Processes

In this article, we’ll look at how we tackled the first two steps, and the issues we came across along the way. Part 2 will cover the rest of the process, and include my thoughts on migrating to the cloud.

 

PLANNING

One needs a firm grip on any deployed architecture, as under AWS pricing it is the architecture that determines the cost per month. Getting it wrong or missing something is reflected in an increase in monthly cost. So, before we started moving to the cloud, we had to ask ourselves, ‘Is our architecture flexible, and is it easy enough to migrate over? More importantly, do we have the system architecture documented; do we know what pieces we’re working with?’

Unfortunately, what we had was not quite up to scratch, so the first step was an inventory of the systems ear-marked for ‘cloud deployment’. It was a useful exercise and took about four weeks to complete, though the process should now become an ongoing one. At the end of it, we had a good idea of the proposed architecture.

With this in hand, I had to sit down and justify the cost of the venture to my project team and to those above me. Initially I went to the product pricing pages (http://aws.amazon.com/ec2/#pricing for EC2 pricing and http://aws.amazon.com/rds/pricing/ for RDS prices), threw together a ton of spreadsheets, and tried to work out an annual cost. It caused a bit of a headache, as AWS pricing includes a lot of variables related to data transfer, the use (or non-use) of Elastic IP addresses, and volume storage, to name but three. I did eventually find a monthly cost calculator, which proved extremely helpful (http://calculator.s3.amazonaws.com/calc5.html); if only I had known of this tool when I first started on our AWS adventure. Of course, the calculator's accuracy improves greatly the more of the relevant information you can put into the appropriate boxes, and the information gained from the review and inventory step was incredibly useful for this.

So I had my architecture and a proposed monthly figure in euros; I also added a 15% contingency, because you just know something will happen! I calculated the annual cost and presented it for budgetary consideration. Since the decision to move to the cloud had already been made, surely my figure would not deter them? Luckily, my calculations came in well within budget, so it was green light and all systems go. It was a near thing, though: most people hadn't thought past the initial headline costs (‘instances from $0.03 an hour!’) and were mildly surprised at the proposed figure.
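To make that spreadsheet exercise concrete, here is a minimal sketch in Python of the kind of monthly and annual estimate involved. Only the $0.48/hour compute rate comes from the figures quoted later in this article; the storage, data-transfer, and Elastic IP rates are placeholders invented for illustration, so treat this as a template rather than a price list and use the AWS pricing pages or the cost calculator for real numbers.

    # Rough sketch of the spreadsheet arithmetic described above.
    # The $0.48/hour compute rate is the on-demand figure quoted later in
    # this article; the EBS, data-transfer, and Elastic IP rates below are
    # invented placeholders, not real AWS prices.

    HOURS_PER_MONTH = 730  # average hours in a month

    def monthly_cost(instance_rate, instances=1,
                     ebs_gb=0, ebs_rate=0.10,
                     transfer_out_gb=0, transfer_rate=0.12,
                     idle_elastic_ips=0, eip_idle_rate=0.005):
        """Sum the main cost variables mentioned above: instance hours,
        EBS volume storage, data transfer out, and idle Elastic IPs."""
        compute = instance_rate * HOURS_PER_MONTH * instances
        storage = ebs_gb * ebs_rate
        transfer = transfer_out_gb * transfer_rate
        eips = idle_elastic_ips * eip_idle_rate * HOURS_PER_MONTH
        return compute + storage + transfer + eips

    # One large Windows instance with some storage and traffic, plus the
    # 15% contingency applied to the annual figure.
    monthly = monthly_cost(0.48, ebs_gb=100, transfer_out_gb=50)
    annual = monthly * 12 * 1.15
    print(f"Monthly: ${monthly:,.2f}   Annual with contingency: ${annual:,.2f}")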

 

TEST DEPLOYMENT

With a plan and a road-map now in mind, it was time to prototype the deployment. We had to get used to the AWS dashboard and the available tools, as well as observe the performance of the applications in this new and exciting environment. Back when we started, a lot of today's products were not yet available, so the functionality was limited.

A test environment consisting of SimpleDB and a micro instance (the smallest AWS virtual machine available) was commissioned and tested. SimpleDB is AWS' simple, flexible non-relational data store. It was quick and easy to set up and worked well for the initial testing and deployment stage; we used it to store all our user and audit data. It was quite cheap when we started, and SimpleDB now includes 25 machine-hours and 1 GB of storage free per month. More details on Amazon SimpleDB can be found here: http://aws.amazon.com/simpledb/.
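For anyone who has not used SimpleDB, the sketch below shows the kind of put-and-select interaction involved, written against the boto3 Python SDK (our own tooling was Windows-based, so this is purely illustrative). The domain, item, and attribute names are invented, and SimpleDB is a legacy service today, so check that it is still available to your account and region.

    # A minimal sketch of writing and querying audit records in SimpleDB
    # using the boto3 Python SDK. Domain, item, and attribute names are
    # invented for illustration.
    import boto3

    sdb = boto3.client("sdb", region_name="us-east-1")

    # A SimpleDB "domain" is roughly a table; items are schemaless bags of
    # name/value attribute pairs, which is why it suited quick prototyping.
    sdb.create_domain(DomainName="audit")

    sdb.put_attributes(
        DomainName="audit",
        ItemName="login-0001",
        Attributes=[
            {"Name": "user", "Value": "mlee", "Replace": True},
            {"Name": "action", "Value": "login", "Replace": True},
            {"Name": "timestamp", "Value": "2012-01-31T09:15:00Z", "Replace": True},
        ],
    )

    # SimpleDB has a SQL-like select syntax; ConsistentRead avoids the
    # eventual-consistency lag for this read-your-own-write example.
    result = sdb.select(
        SelectExpression="select * from audit where user = 'mlee'",
        ConsistentRead=True,
    )
    for item in result.get("Items", []):
        print(item["Name"], item["Attributes"])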

The initial test did reveal a number of specific issues: we immediately ran into problems with security groups and with the incompatibility of SimpleDB with our security model. However, the ease of deployment from stock-template Amazon Machine Images (AMIs) meant we could set up a new environment in a matter of minutes. The performance of the web site and application was as fast as we had ever seen it, and all for a few dollars; a bargain, we thought, and we were impressed. Of course, there were still unanswered questions around alerting and monitoring, management reporting, and metrics, but one step at a time, eh?
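That "new environment in a matter of minutes" experience is easy to illustrate. Below is a hedged boto3 sketch of creating a basic security group and launching an instance from a stock AMI; the AMI ID, key pair name, ports, and CIDR range are placeholders, and depending on your account you may also need to supply a VpcId and SubnetId.

    # Illustrative only: create a basic security group and launch one
    # instance from a stock AMI with boto3. The AMI ID, key pair name,
    # and CIDR range are placeholders; add VpcId/SubnetId if your account
    # does not have a default VPC.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Security groups were the first stumbling block mentioned above:
    # open only the ports you need, and restrict RDP to a known range.
    sg = ec2.create_security_group(
        GroupName="test-web",
        Description="Test deployment web tier",
    )
    ec2.authorize_security_group_ingress(
        GroupId=sg["GroupId"],
        IpPermissions=[
            {"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
             "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
            {"IpProtocol": "tcp", "FromPort": 3389, "ToPort": 3389,
             "IpRanges": [{"CidrIp": "203.0.113.0/24"}]},  # RDP from the office only
        ],
    )

    # Launch a single small instance from a stock (placeholder) AMI.
    run = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
        KeyName="my-key-pair",
        SecurityGroupIds=[sg["GroupId"]],
    )
    print("Launched:", run["Instances"][0]["InstanceId"])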

Another issue we discovered during this testing was the relatively slow interaction with the micro instance over remote desktop. An additional gotcha: do not use the Windows 2008 32-bit edition on a micro instance; with that version, everything was just dreadfully slow. We were advised by AWS to switch to the Windows 2008 64-bit edition, and once we switched over, things went a lot more smoothly.

Out of interest, we looked at the specifications of the micro instance: 613 MB of memory, up to 2 EC2 Compute Units (ECUs) for short periodic bursts, Elastic Block Store (EBS) storage only, and a choice of 32-bit or 64-bit platform. Interestingly, the stated minimum RAM for Windows 2008 is 512 MB, but in reality we don't commission anything with less than 4 GB (32-bit) or 8 GB (64-bit). So although in theory we had about 100 MB of headroom, once the micro instance had started up it had about 5 MB left and was frantically paging to disk. Basically, the micro instance cannot be used for anything remotely useful commercially, which, from AWS' point of view, is a smart piece of product positioning.

We also quickly realized that Amazon Web Services were bringing out new products almost every month!

 

RESERVED INSTANCES OR NOT? (sorry, digression warning!)

It was at this time that we looked at reserved instances. The concept is simple: pay AWS an upfront setup fee and get a much-reduced hourly rate for either a one-year or three-year term. If the planned utilization of an instance is low (i.e. you would be running it only for a few hours per week), the reserved option is not economical. However, if monthly utilization is high, approaching 100%, then reserved instances make financial sense and should be actively considered. Here are the figures I put together to convince my manager that reserved instances were a good idea.

If we just pay the on-demand, pay-as-you-go rate, then for one large AWS instance the annual cost is $4,204.80, and this figure does not take into account the use of other AWS products such as S3, RDS, CloudWatch, etc. It is the pure cost of running one large instance, 24/7.

Now, $4,204.80 sounds like a lot, but for that you get a very high-performance server running Windows 2008 64-bit Advanced. While the servers are virtual, they are roughly equivalent to a quad-core server, already licensed and ready to go.

However, if one pays an upfront fee of $1,105, the hourly cost drops from $0.48 to $0.205; at 100% utilization the annual running cost drops to $1,795.80, and even with the upfront fee included, this works out to a net saving of $1,304.00 per year, per instance.
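For reference, the arithmetic behind those figures looks like this (using the rates quoted in this article, circa 2011/2012, which are long out of date):

    # Reproducing the reserved-instance sums above for one large Windows
    # instance running 24/7. The rates are the ones quoted in this article;
    # current AWS pricing is different.
    HOURS_PER_YEAR = 24 * 365          # 8,760 hours

    on_demand_rate = 0.48              # $/hour, pay-as-you-go
    reserved_rate = 0.205              # $/hour with a reservation
    reserved_upfront = 1105.00         # one-off setup fee

    on_demand_annual = on_demand_rate * HOURS_PER_YEAR        # $4,204.80
    reserved_annual = reserved_rate * HOURS_PER_YEAR          # $1,795.80
    saving = on_demand_annual - (reserved_annual + reserved_upfront)

    # Hours per year at which the reservation starts paying for itself.
    break_even_hours = reserved_upfront / (on_demand_rate - reserved_rate)

    print(f"On-demand:  ${on_demand_annual:,.2f} per year")
    print(f"Reserved:   ${reserved_annual + reserved_upfront:,.2f} per year (incl. upfront fee)")
    print(f"Saving:     ${saving:,.2f} per year")
    print(f"Break-even: {break_even_hours:,.0f} hours (~{break_even_hours / HOURS_PER_YEAR:.0%} utilization)")

On these rates the reservation breaks even at roughly 46% annual utilization, which is why the "few hours a week" case doesn't justify it but a 24/7 instance clearly does.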

As of November 2011, AWS added some more product differentiation to reserved instances. Previously there were two rates, one for a one-year term and another for a three-year term. Now there are the same two terms, but split into three ‘utilization’ groups: light, medium, and heavy. The differences relate to how you are charged when you have a reserved instance that is NOT being used, i.e. ‘lights on, no one home’. We went with heavy utilization.

 

Pretty good numbers and, as they say, a real no-brainer if you need to save some money!

That wraps up Part 1 – keep an eye out for Part 2 in the next few days, when I’ll cover how we dealt with parallel running, going live, and updating our documentation and processes.


Published at DZone with permission of Eric Genesky.
