
Cache-Tier Python File Server

The cache-tier Python file server is a cost-effective alternative to Amazon S3 for delivering files; it's available on GitHub.


About a month ago I decided to move the audio traffic (MP3s, etc.) for my podcast Talk Python To Me off of Amazon S3. S3 had been working wonderfully in terms of delivery, but it was getting expensive. Other hosting platforms will let me spin up a Linux server and deliver the files from it at a much lower cost.

But I had some reservations. First and foremost, I already manage a lot of servers, and the thought of keeping yet another one running was not appealing. Second, the audio stream is literally the lifeblood of Talk Python To Me, and I owe it to both my listeners and sponsors to keep it flowing.

Towards this end, I have built a client (on PyPI) and a server (in Python 3, using Flask). The client has the property that if it ever detects a problem with the cache server, it automatically switches back to serving from the source location. In my case, if the cache server has issues, the site falls back to serving files directly out of S3. This gave me enough confidence not to worry about depending on yet another layer/tier.

Moreover, the server is super easy to set up and is built to automatically sync the files it needs to serve from the source. For example, in my case, imagine the web server were trying to serve 100_yay_show_one_hundred.mp3 and that file existed in a preconfigured location on Amazon S3. The very first request to the cache tier would trigger the cache server to fetch it from S3; all subsequent requests on the main website would detect the presence of the file in the caching tier and use the cache server rather than the more expensive S3.
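The lazy-sync idea above can be sketched in a few lines of Python. This is a simplified illustration, not the actual cache-tier code; `fetch_from_source` stands in for whatever downloads the file from S3:

```python
import os


def serve_from_cache(base_file_name, cache_dir, fetch_from_source):
    """Return a local path for the file, pulling it from the source
    (e.g. S3) on the very first request only; later requests hit disk."""
    local_path = os.path.join(cache_dir, base_file_name)
    if not os.path.exists(local_path):
        # Cache miss: sync the file down from the source exactly once.
        with open(local_path, 'wb') as f:
            f.write(fetch_from_source(base_file_name))
    # Cache hit (or freshly synced): serve the local copy.
    return local_path
```

After the first request warms the cache, every subsequent request is served from local disk and never touches the source again.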

You’ll notice I speak often of price, but we could just as well swap it for speed or reliability. Imagine you have a slow source server: this same setup would let you offload almost all the network traffic and serve it from the (presumably) much faster cache server. Additionally, I have much more control over my own server and can gather better analytics than I could by sending every request off to S3.

This setup serves a tremendous amount of data without any glitches. When a new podcast episode is released, the actual network traffic hits around 900 Mbit/sec for several minutes. Yet CPU load and memory usage remain very low, the latency of the web app stays low, and things keep on serving.

Introducing Cache Tier

I call my project cache-tier. It’s of course open source on GitHub at:


and you can get the client via pip:

  pip3 install cache-tier

It’s built for Python 3 but should be easily converted to Python 2 if there is interest.

You can read the steps to set up a bare Ubuntu server to run the server here:


Create a new Ubuntu VM and follow along. You’ll be running in no time.

Using the Client

The client code is on PyPI as cache-tier. To get started, just pip install it:

  pip3 install cache-tier

Then you use it as follows.


Call verify_file(base_file_name) to check for a cached file as well as to trigger a sync if needed. If verify_file returns True it’s safe to send the user (via a 301 redirect) to the URL generated by build_download_url(base_file_name).
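In the web app, the redirect logic reduces to something like the following hypothetical sketch. Only the `verify_file` / `build_download_url` pattern comes from the client described above; the function name `choose_download_url` and both base URLs are placeholders of my own:

```python
CACHE_BASE = 'https://downloads.example.com/'            # hypothetical cache server
SOURCE_BASE = 'https://s3.amazonaws.com/BUCKET/FOLDER/'  # hypothetical S3 source

def choose_download_url(base_file_name, verify_file):
    """Pick the target for the 301 redirect: the cache tier when the
    file is verified there (which also triggers a sync if needed),
    otherwise the original source."""
    if verify_file(base_file_name):
        return CACHE_BASE + base_file_name   # cheap: served by the cache tier
    return SOURCE_BASE + base_file_name      # safe fallback: straight to S3
```

The key design point is the fallback branch: a failed or missing cache never breaks the download, it just costs a little more for that request.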

Configuring the Server

One possible deployment is to use nginx + uWSGI + the web app. This document describes the steps to set this up on Ubuntu:


You will need to change two settings before the server side will work correctly:

File: ./etc/nginx/sites-available/cache_tier_webapp
Value: server_name downloads.YOURDOMAIN.com;

File: ./config_data/prod.json
File: ./config_data/dev.json
Value: download_base_url: "https://s3.amazonaws.com/USERNAME/BUCKET/FOLDER/"

(Note: Amazon S3 is just one option; any public HTTP file server will work.)
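Concretely, ./config_data/prod.json would contain something along these lines (the bucket path is a placeholder you replace with your own):

```json
{
  "download_base_url": "https://s3.amazonaws.com/USERNAME/BUCKET/FOLDER/"
}
```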
