
From S3 to CloudFront: Caching for Performance


After doing the legwork to get your static resources running on S3, it’s really just a matter of throwing a few digital switches to get them into Amazon’s CloudFront CDN. Why would you want to do this? The simple answer is performance: Amazon’s CDN has enough strategically located data centers around the world to reach your audience with the least lag and the fewest hops.

But there are some definite caveats to caching your web application’s resources this heavily. Namely: how do you flush these caches, or should you go the “resource.name.build_id” route instead? And what about SSL requests? Read on to see how I solved these challenges.
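As a sketch of the “resource.name.build_id” route, a versioned resource name can be derived from the file’s content hash, so the name changes exactly when the content does (the helper below is illustrative, not part of our actual setup):

```python
import hashlib
import os

def versioned_name(path: str, digest_len: int = 8) -> str:
    """Return a cache-busting filename like 'style.a1b2c3d4.css',
    derived from the file's content hash."""
    with open(path, "rb") as f:
        digest = hashlib.md5(f.read()).hexdigest()[:digest_len]
    base, ext = os.path.splitext(path)
    return f"{base}.{digest}{ext}"
```

Because unchanged files keep their old names, deploys only invalidate what actually changed, and no CDN cache flush is needed at all.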

Provisioning your CDN

Log into your AWS Management Console and click on the CloudFront tab. Then click “Create Distribution”:

Choose the S3 bucket as the origin, and make sure you enter your static-resource domain name in the CNAME field; this lets Amazon know which domain to expect incoming requests from. After saving, you’ll be given a *.cloudfront.net domain name. Finally, point your static domain’s DNS entry (e.g. t.ndimg.de) to the assigned cloudfront.net domain. Wait until the DNS dust settles and run a couple of GET requests against a resource file. You should eventually see X-Cache: Hit from cloudfront in the response headers. Congratulations, you’re in the cloud!
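The GET check described above can be scripted with nothing but the Python standard library; a minimal sketch (the first request typically returns “Miss from cloudfront”, subsequent ones from the same edge location should return a hit):

```python
from urllib.request import Request, urlopen

def x_cache_header(url: str) -> str:
    """Fetch a resource and return CloudFront's X-Cache response header,
    e.g. 'Miss from cloudfront' or 'Hit from cloudfront'."""
    resp = urlopen(Request(url, method="GET"))
    return resp.headers.get("X-Cache", "")
```

Calling this twice in a row against your static domain is a quick smoke test that the distribution is actually serving (and caching) your objects.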

Cache flushing and SSL considerations

As we’re not quite living in Nirvana, there are problems to consider. If you change your static resources, you’ll need to either generate unique names at deploy time or flush the CloudFront cache. Since we don’t touch our static resources that often, we use boto to do the AWS heavy lifting and manually invalidate what’s been changed ($0.005 per item after the 1,000th deletion in a month).
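With today’s boto3 (rather than the original boto the article used), the manual flush looks roughly like this. The distribution ID and path are placeholders, and the batch-building helper is my own illustration of the structure CloudFront’s invalidation API expects:

```python
import time

def invalidation_batch(paths):
    """Build the InvalidationBatch structure for CloudFront's
    create_invalidation call; CallerReference must be unique per request."""
    return {
        "Paths": {"Quantity": len(paths), "Items": list(paths)},
        "CallerReference": str(time.time()),
    }

# Requires AWS credentials; note the per-path cost mentioned above.
# import boto3
# cloudfront = boto3.client("cloudfront")
# cloudfront.create_invalidation(
#     DistributionId="EDFDVBD6EXAMPLE",  # placeholder distribution ID
#     InvalidationBatch=invalidation_batch(["/css/style.css"]),
# )
```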

SSL is problematic. Looking back, I should have just stuck with our own servers for static SSL content (we don’t have that many HTTPS requests). Instead, I decided to be holier than the Pope and serve everything from AWS. To accomplish this, I needed to add our cloudfront.net URL to our application configuration for SSL requests. Not pretty.
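In practice the workaround amounts to picking a different static host for HTTPS pages. A hypothetical sketch of that configuration logic (the HTTPS hostname is a placeholder in the style AWS uses in its docs; t.ndimg.de is the CNAMEd domain from above, which only works over plain HTTP):

```python
# Placeholder hostnames: the CNAMEd static domain serves plain HTTP,
# so HTTPS pages fall back to the raw *.cloudfront.net URL.
STATIC_HOST_HTTP = "t.ndimg.de"
STATIC_HOST_HTTPS = "d111111abcdef8.cloudfront.net"

def static_url(path: str, secure: bool) -> str:
    """Return the static-resource URL matching the requesting page's scheme."""
    if secure:
        return f"https://{STATIC_HOST_HTTPS}{path}"
    return f"http://{STATIC_HOST_HTTP}{path}"
```

The ugliness is that the cloudfront.net hostname leaks into application config and rendered pages, which is exactly the “not pretty” part above.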

What did it bring us?

Running the Pingdom check on our CSS file before and after migrating to AWS makes the sales pitch to management trivial. The graph is pretty simple (and convincing):

S3 and CloudFront were my first experiences of the “cloud,” and I have to say there’s nothing fluffy or insubstantial about it at all. We’ve realized increased performance for end users and more throughput for our internal servers, all for about the cost of one additional server in our current data center. What have your experiences been?



Published at DZone with permission of

