The AWS Olympics: Speed Testing Amazon EC2 And S3 Across Regions
We set up a test using EC2 instances and S3 buckets in all Amazon AWS regions currently available: three in the U.S. (Oregon/California/Virginia), one in South America (Brazil), one in Europe (Ireland), two in Asia (Singapore/Japan), and one in Australia.
Highlights (The Good, the Bad and the Ugly)
- As expected, the best upload time was achieved when the EC2 instance and S3 bucket shared the same location.
- When an EC2 instance and S3 bucket were in the same location, the fastest upload time was achieved in Oregon and the slowest in Virginia.
- California’s combined upload time was the best.
- Singapore’s combined upload time was the worst.
- The slowest upload speed was when uploading from Australia to Brazil.
- Uploading from Singapore to Ireland was 4x slower than from Japan to Ireland.
What Has Changed Since 2012?
- Singapore’s combined up-speeds were the best -- a surprising finding!
- Europe’s combined upload time was the slowest.
- When the EC2 instance and S3 bucket were in the same location, the best upload time was achieved in Singapore and the slowest in California.
- California struggled much more with traffic to Brazil than the other U.S. regions.
- The slowest upload speed was from Ireland to Japan.
The Java code used for the test is available on GitHub: https://github.com/takipi/aws-s3-speed . Fork, clone, review, or simply run the experiment yourself. Let us know if you experience different results.
Takipi is cross-platform and is written in Java and C++. We first tested some C++ vs. Java networking, but ended up dropping the C++ communication modules altogether and implemented our HTTP communication in Java only. Some of the reasons for dropping C++ were cross-platform issues, ease of debugging, reconnection/timeout handling, and security.
Diving into Java, we had two approaches to choose from. The first was using the open source AWS Java SDK to deal with everything (it uses Apache HttpComponents internally). The second was signing URLs and uploading through plain old Java HTTP(S) URLConnection. We soon found that the different methods didn’t have a significant effect on the results, so for the rest of the experiment and post we’ll be talking about the SDK code. Mark Rasmussen has a great post about pushing S3 upload speeds to the maximum.
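The second approach needs nothing beyond the JDK once a pre-signed URL is in hand. Here is a minimal sketch of that pattern; the class and method names are ours, not from the Takipi repo, and a local com.sun.net.httpserver.HttpServer stands in for a real pre-signed S3 URL so the example runs without AWS credentials:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;

public class PresignedUploadSketch {

    // PUT a byte payload to a (pre-signed) URL and return the HTTP status code.
    static int put(URL url, byte[] payload) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setDoOutput(true);
        conn.setRequestMethod("PUT");
        conn.setFixedLengthStreamingMode(payload.length); // stream instead of buffering the whole chunk
        try (OutputStream out = conn.getOutputStream()) {
            out.write(payload);
        }
        int code = conn.getResponseCode();
        conn.disconnect();
        return code;
    }

    public static void main(String[] args) throws Exception {
        // Local stand-in for a pre-signed S3 URL (the real test would PUT to an S3 endpoint).
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/bucket/chunk", exchange -> {
            try (InputStream in = exchange.getRequestBody()) {
                in.readAllBytes(); // drain the upload
            }
            exchange.sendResponseHeaders(200, -1); // empty body, like S3's PUT response
            exchange.close();
        });
        server.start();

        byte[] chunk = new byte[1024 * 1024]; // 1 MB test chunk
        URL url = new URL("http://localhost:" + server.getAddress().getPort() + "/bucket/chunk");
        long start = System.nanoTime();
        int status = put(url, chunk);
        long millis = (System.nanoTime() - start) / 1_000_000;
        System.out.println("status=" + status + " millis=" + millis);
        server.stop(0);
    }
}
```

Timing the call from just before the PUT to the response is the same from-initial-call-to-end measurement the test uses, whichever transport sits underneath.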
The code creates/deletes buckets in all regions (using prefixes). Once the buckets are set, the test code uploads a few chunks (10MB, 100MB, etc.) to all regions over several rounds and averages the time it took from the initial call to the end.
The code shuffles the regions at each round and never sends chunks concurrently. We ran a dozen rounds, removing the best and worst scores before averaging. This left us with 10 valid scores to average, which seems to us like a fair number.
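The trimming step above is plain arithmetic. Here is a small self-contained sketch of it (the class name and the sample timings are ours, chosen for illustration): sort a region's twelve round times, drop the single best and worst, and average the remaining ten.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class TrimmedMean {

    // Average a region's round times after dropping the single best and worst
    // score, mirroring the "12 rounds, keep 10" methodology described above.
    static double trimmedAverage(List<Long> roundMillis) {
        List<Long> sorted = new ArrayList<>(roundMillis);
        Collections.sort(sorted);
        List<Long> kept = sorted.subList(1, sorted.size() - 1); // drop min and max
        long sum = 0;
        for (long t : kept) sum += t;
        return (double) sum / kept.size();
    }

    public static void main(String[] args) {
        // Hypothetical timings (ms) for one region over 12 rounds; the 5000 ms
        // outlier and the 900 ms best round are both discarded before averaging.
        List<Long> rounds = List.of(900L, 950L, 1000L, 1000L, 1000L, 1000L,
                                    1000L, 1000L, 1000L, 1000L, 1050L, 5000L);
        System.out.println(trimmedAverage(rounds)); // prints 1000.0
    }
}
```

Trimming one score from each end keeps a single congested round (like the 5000 ms outlier above) from skewing a region's average.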
- Europe’s was pretty slow last year. This year it showed the same speeds as the U.S., which is good news for both users and developers.
- We’ve learned that the newest region, Sydney, Australia, still has a lot of catching up to do.
- We’ve confirmed our original assumption that staying inside a region is hands down the best way to guarantee maximum performance.
Let me know if you run the test and experience different results or have any questions; I’ll be happy to hear your feedback.
P.S. Takipi has a new Twitter account, follow us: @takipid
Originally appeared on Takipi's blog.
Opinions expressed by DZone contributors are their own.