Take the following scenario. You have a time-series application for which you would like to store a rolling period of data. For example, you may want to maintain the last six months of traffic logs for a website, in order to analyze activity over different periods of time. Or you may have an application maintaining the last year's worth of financial services trades data, where each trade has a timestamp. The general problem is that as new data is inserted for the current time, old data must constantly be removed from the collection to maintain this rolling window.
With basic MongoDB, you would likely create a collection with a "TTL," or "time to live," index. You insert data with a timestamp, define an index that specifies how long each document should live, and a background thread deletes your old data document by document.
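Setting this up takes a single command. Here is a minimal sketch in the mongo shell; the collection name `traffic_logs` and the field name `createdAt` are hypothetical placeholders for your own schema:

```javascript
// TTL index on the timestamp field. MongoDB's background thread will
// delete any document whose createdAt is older than expireAfterSeconds,
// here roughly six months, one document at a time.
db.traffic_logs.ensureIndex(
    { createdAt: 1 },
    { expireAfterSeconds: 60 * 60 * 24 * 180 }
);
```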
While simple to use, this solution can run into performance problems. The problem with TTL indexes is that documents are deleted one by one: each document and its associated secondary index entries are individually removed by the background thread. With MongoDB's database-level lock, this causes extra contention. Even if MongoDB's locking were not an issue (as it is not in TokuMX), this work may still be significant, because deleting documents can be just as expensive as inserting them. Also, with MongoDB, these constant deletions lead to fragmentation of your data. This problem is described by users here and here.
Some say that instead of using TTL indexes, you can use capped collections, which are supposed to be faster. The problem is that for data to be queryable in any meaningful way, indexes need to exist. If indexes exist, then capped collections have the same challenge as TTL indexes: they delete documents individually from secondary indexes, thereby hurting performance.
This is a big part of why TokuMX 1.5 introduces partitioned collections. Partitioned collections are designed to let users delete big chunks of their data instantaneously by dropping partitions. This is also a reason why Oracle, MySQL, Postgres, and SQL Server support partitioned tables.
To solve this problem with partitioned collections in TokuMX, do the following:
- Create a partitioned collection with a primary key that matches what your TTL index would be (keeping in mind that the primary key must have _id appended as its last field)
- Select a granularity at which you would like to delete data. Do you want to delete data on a monthly basis? A weekly basis? Don't make the granularity so fine that you end up with a large number of partitions (say, 1,000). This choice defines how often you manually add partitions.
- Periodically, but infrequently, drop the partition that stores the "expired" data.
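The steps above can be sketched in the mongo shell. The collection name `trades` and the timestamp field `ts` are placeholder assumptions, and the partition id you pass to `dropPartition` should come from the output of `getPartitionInfo()`:

```javascript
// Create a partitioned collection whose primary key mirrors the TTL
// index you would otherwise have defined, with _id appended last.
db.createCollection("trades", {
    partitioned: true,
    primaryKey: { ts: 1, _id: 1 }
});

// At your chosen granularity (e.g. once a week), cap the current
// partition and start a new one for incoming data:
db.trades.addPartition();

// Inspect the partitions to find the _id of the oldest one:
db.trades.getPartitionInfo();

// Drop the partition holding expired data. This is a cheap
// metadata operation, not a document-by-document delete:
db.trades.dropPartition(0);  // _id of the oldest partition, per getPartitionInfo()
```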
While this requires a little more manual work (namely, adding and dropping partitions yourself), you get better performance, significantly reduced fragmentation, and hopefully a smoother user experience.
One important note: partitioned collections work only in non-sharded clusters. If you need to shard your data, stick with TTL indexes. In a future post, I will explain why partitioned collections do not work in sharded clusters.