Is OpenStack Object Store a Base for a Video CDN?
I would like to share some design aspects of building a video CDN on top of an object store.
The Video CDN Case Studies
Video CDN includes two main case studies:
- VOD Case Study: This is a long tail/high throughput scenario where you need high capacity disks, of which only a small portion will be used extensively. In order to create a cost effective solution you should have:
- A high capacity storage system with low IOPS needs. Servers with 24-36 2-3TB SATA disks will provide up to 100TB of raw storage with a price tag of $15K.
- A replication and auto failover mechanism that can distribute content between several servers and save us from using expensive RAID and cluster solutions.
- A caching/proxy mechanism that will serve the head of the long tail from memory.
- Live Broadcast Case Study: This is a no storage/high throughput scenario where you actually don't need any persistent storage (if a server fails, by the time it gets up again the data will no longer be relevant). In order to create a cost effective solution you should have:
- No significant storage.
- High capacity RAM that should be sized according to:
- The number of channels you are going to serve.
- The number of resolutions you're going to support (most relevant when you plan to support handhelds and not just widescreens).
- The amount of time you are going to store (no more than 5 minutes are needed in the case of live, and no more than 4 hours in the case of start over).
- An in-memory (or ramdisk based) rapid replication mechanism that will replicate the incoming video to several machines.
- HTTP interface to serve video chunks to end users.
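The sizing guidance for both cases can be sanity-checked with a quick back-of-the-envelope calculation. The disk counts, node price, and 5-minute live window come from the text above; the channel count, rendition count, and bitrate are hypothetical assumptions for illustration:

```shell
#!/bin/sh
# VOD node: raw capacity and cost per TB, using the figures from the text.
DISKS=36; TB_PER_DISK=3; NODE_PRICE_USD=15000
RAW_TB=$(( DISKS * TB_PER_DISK ))
USD_PER_TB=$(( NODE_PRICE_USD / RAW_TB ))
echo "VOD node: ${RAW_TB} TB raw, ~\$${USD_PER_TB}/TB"

# Live node: RAM needed for the in-memory buffer. The channel count,
# renditions, and bitrate below are hypothetical; the 5-minute window
# comes from the text.
CHANNELS=20; RESOLUTIONS=3; WINDOW_SEC=300; BITRATE_MBPS=4
RAM_GB=$(( CHANNELS * RESOLUTIONS * WINDOW_SEC * BITRATE_MBPS / 8 / 1024 ))
echo "Live node: ~${RAM_GB} GB of buffer RAM"
```

With these assumed figures, a single commodity server comfortably covers the live buffer, which is why the live case needs no significant storage at all.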
Modern Video Encoding systems (such as Google's Widevine) support "Encrypt Once, Use Many", where the content is encrypted once, and decryption keys are distributed to secured clients on a need-to-know basis.
Why OpenStack Swift/Object Store?
In short, OpenStack Swift is the OSS equivalent of Amazon's proprietary AWS S3: "Simple web services interface that can be used to store and retrieve any amount of data, at any time, from anywhere on the web."
OpenStack Swift/Object Store Benefits
- OpenStack is OSS and therefore easy to evaluate.
- Active large scale deployments, including Rackspace and Comcast.
- A built-in content distribution method, based on rsync, that lets you distribute the load between multiple servers.
- High availability based on data replication between multiple instances. On a server failure, you just need to take it out of the array and replace it with a new one, while the other servers keep serving users.
- This mechanism helps you avoid premium hardware such as I/O controllers and RAID mechanisms.
- Built in HA and DRP based on 5 independent zones.
- A built-in web server that enables you to serve static content, as well as HTTP based video streams, from the server itself, rather than implementing a high end SAN.
- A built-in reverse proxy service, based on Python and Memcache, that minimizes I/O and maximizes throughput.
- A built-in authentication service.
- Target pricing of $0.40 per 1M servings and $0.055/GB per month, if we take AWS as a benchmark.
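For scale, the benchmarked storage price can be translated into a monthly figure. A minimal sketch, assuming a hypothetical fully-utilized 100TB node:

```shell
#!/bin/sh
# Monthly storage cost at the benchmarked $0.055/GB-month for 100TB.
# The 100TB utilization figure is a hypothetical assumption.
COST=$(awk 'BEGIN { printf "%.0f", 100 * 1024 * 0.055 }')
echo "~\$${COST}/month for 100TB stored"
```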
The OpenStack Object Store architecture is well described in two layers:
- The logical: Accounts (paying customers), Containers (folders) and Objects (blobs).
- The physical: Zones (Independent clusters), Partitions (of data items), Rings (mapping between URLs and partitions and locations on disks) and Proxies.
- An account is actually an independent tenant, as it has its own data store (implemented with SQLite).
- Replication is done based on large blocks, quorum, and MD5 checksums.
- Write to Disk before Expose to Users: When files are uploaded, they are first committed to disk in at least two zones, and only then is the database updated for availability (so don't expect sub second response for a write).
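The logical layer maps directly onto the URL path: /v1/&lt;account&gt;/&lt;container&gt;/&lt;object&gt;. A small sketch (the endpoint and names are illustrative, matching the CLI examples later in the article):

```shell
#!/bin/sh
# How Swift's logical layer maps onto URLs.
ENDPOINT="http://test.example.com:8080/v1"
ACCOUNT="AUTH_test"     # account = independent tenant (paying customer)
CONTAINER="videos"      # container = folder
OBJECT="demo.wvm"       # object = blob
URL="${ENDPOINT}/${ACCOUNT}/${CONTAINER}/${OBJECT}"
echo "$URL"
```

Behind this URL, the ring resolves the request to partitions and disk locations across the zones.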
- Object Store Server Sizing: High capacity storage (36-48 2-3TB SATA drives that will provide up to 100TB per server), memory to cache the head of the long tail (24-48GB RAM), and 2x1Gbps Ethernet to support ~500 concurrent long tail requests. A single high end CPU should do the work.
- Proxy Server Sizing: Little storage (a 500GB SATA disk will do the work), memory to cache the head of the long tail (24GB RAM), and 2x10Gbps Ethernet to support ~5000 concurrent requests for the head of the long tail.
- Switches: You will need a solid backbone for this system. In order to avoid a backbone that is too large, splitting the system into several clusters is recommended.
- Load Balancing: In order to avoid a high end LB, you should use DNS load balancing, where the frequent calls to the DNS are negligible relative to the media streaming traffic.
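The DNS load balancing can be sketched as round-robin A records, one per proxy server. A BIND-style zone fragment; the hostname and proxy IPs are hypothetical:

```
; Round-robin A records spread client requests across the proxy servers
; without a dedicated load balancer. Hostname and IPs are illustrative.
cdn.example.com.  60  IN  A  10.0.0.11   ; proxy-1
cdn.example.com.  60  IN  A  10.0.0.12   ; proxy-2
cdn.example.com.  60  IN  A  10.0.0.13   ; proxy-3
```

A short TTL lets you pull a failed proxy out of rotation quickly, at the cost of more (still negligible) DNS traffic.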
OpenStack Object Store is probably cost effective only for large installations, as you may need at least 5 physical servers for the object and container stores and another 2 for proxies.
However, you can evaluate the solution based on a single server installation (SAIO: Swift All In One).
If you take this fast lane to your POC, feel free to use the following tips:
- In the initial installation, some packages will be missing from yum, so install them with easy_install:
- sudo easy_install eventlet
- sudo easy_install dnspython
- sudo easy_install netifaces
- sudo easy_install pastedeploy
- There is no need to start the rsync service manually (just reboot the machine).
- Start the service using sudo ./bin/startmain
- Test the service using the supload bash script to simulate a client.
There are 3 main ways to work with the Swift web services:
- AWS tools, as Swift is compliant with the AWS S3 API.
- Direct HTTP calls, as the service is HTTP based.
- The swift CLI client, which covers most common needs.
Check the account status:
swift -A http://test.example.com:8080/auth/v1.0 -U test:tester -K testing stat
Upload a file to the videos container:
sudo swift -A http://test.example.com:8080/auth/v1.0 -U test:tester -K testing upload videos ./demo.wvm
Make the videos container publicly readable:
sudo swift -A http://test.example.com:8080/auth/v1.0 -U test:tester -K testing post videos -r '.r:*'
Download the file (where AUTH_test is the user account, videos is the container, and anonymous access was provided as detailed below):
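The download step can be sketched with curl. The host, port, and the AUTH_test/videos/demo.wvm names are assumptions carried over from the earlier examples; with the '.r:*' read ACL set on the container, no auth token is required:

```shell
#!/bin/sh
# Build the public URL for the object uploaded above. Host/port and names
# are assumptions carried from the earlier examples.
URL="http://test.example.com:8080/v1/AUTH_test/videos/demo.wvm"
echo "GET $URL"
# Against a live cluster you would run:
#   curl -o demo.wvm "$URL"
```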
In order to implement read only public access, you will need to take care of the following item in the proxy server configuration:
[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
# Delaying the auth decision is required to support token-less
# usage for anonymous referrers ('.r:*').
delay_auth_decision = 1
Working with Direct HTTP Calls
Authenticate and get a token:
curl -v -H 'X-Storage-User: test:tester' -H 'X-Storage-Pass: testing' http://test.example.com:8080/auth/v1.0
> X-Storage-Url: http://127.0.0.1:8080/v1/AUTH_test
> X-Auth-Token: AUTH_tk551e69a150f4439abf6789409f98a047
> Content-Type: text/html; charset=UTF-8
> X-Storage-Token: AUTH_tk551e69a150f4439abf6789409f98a047
Upload a file:
curl -X PUT -i \
  -H "X-Auth-Token: AUTH_tk26748f1d294343eab28d882a61395f2d" \
  -T /tmp/a.txt \
  https://storage.swiftdrive.com/v1/CF_xer7_343/dogs/JingleRocky.jpg
Bottom Line
OpenStack Object Store (Swift) is an exciting tool for anyone who is working with large scale systems, and especially when talking about CDNs.
Published at DZone with permission of Moshe Kaplan, DZone MVB. See the original article here.