
Introducing TokuMX Clustering Indexes for MongoDB


Since introducing TokuMX, we’ve discussed the benefits TokuMX brings to existing MongoDB applications without requiring any changes. In this post, I introduce an extension we’ve made to the indexing API: clustering indexes, a tool that can tremendously improve query performance. If I were to talk to someone about clustering indexes, I think the conversation would go something like this…

What is a Clustering Index?

A clustering index is an index that stores the entire document, not just the defined key.

A common example is InnoDB’s primary key in MySQL. InnoDB stores the entire row in the primary key, thereby making it clustering. MongoDB has no clustering indexes: the entire document is stored in a separate data heap, and indexes store only keys that point into it.

By default, TokuDB’s primary key is clustering just like InnoDB’s, and TokuMX’s _id index is clustering. For more details on MongoDB’s, MySQL’s, and TokuMX’s storage layouts, read this post.

What are TokuMX Clustering Indexes?

Like TokuDB, TokuMX gives users the option of making any secondary index a clustering index, so any index can store a copy of the document each index entry belongs to. Unlike InnoDB, which allows only one clustering index, or MongoDB, which has none, TokuMX lets users have as many clustering indexes as they please.

How do I create a Clustering Index?

To create a clustering index, simply pass the option { clustering : true } to the ensureIndex command. Here is how to make a clustering index on fields ‘a’ and ‘b’ of collection foo:

 db.foo.ensureIndex( { a : 1 , b : 1 } , { clustering : true } )
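To confirm the option took effect, you can inspect the collection’s index metadata with getIndexes() — a quick sketch, assuming the index created above (in TokuMX, the clustering option is recorded alongside the index’s key pattern in its spec):

```javascript
// Create a clustering index on fields 'a' and 'b'.
db.foo.ensureIndex( { a : 1 , b : 1 } , { clustering : true } )

// List each index spec for the collection; the clustering index
// carries its { clustering : true } option next to its key pattern.
db.foo.getIndexes()
```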

Why should I bother with Clustering Indexes?

Two big reasons: (much) faster queries, and (significantly) reduced I/O. In fact, the MySQL manual puts it best:

Accessing a row through the clustered index is fast because the row data is on the same page where the index search leads. If a table is large, the clustered index architecture often saves a disk I/O operation when compared to storage organizations that store row data using a different page from the index record (Zardosht’s note: like MongoDB).

A range query on a clustering index sequentially accesses data on disk to answer the query. This is very fast and I/O efficient. On the other hand, a range query on a non-clustering index needs to perform random I/O to retrieve every document from the main data store. The query may perform one I/O to retrieve the data from the secondary index, but hundreds of additional I/Os to retrieve each entire document. Clustering indexes remove these hundreds of additional I/Os.
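As an illustration (hypothetical data, assuming the clustering index on { a : 1 , b : 1 } from above), a range query like the following can be answered entirely from the clustering index with a single sequential scan:

```javascript
// Range scan over 'a'. With a clustering index, the matching documents
// live inside the index itself, so no additional lookups into the main
// data store are needed to return the full documents.
db.foo.find( { a : { $gte : 100 , $lt : 200 } } )
```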

In short, all queries on a clustering index are covered, so queries will be much faster.

Much faster queries? Any proof?

Here is a benchmark we ran back when TokuMX was in the “experimental integration” phase. The results still hold and speak for themselves.

But wait, do I really need a Clustering Index? Can’t I just make all indexes covering?

Well, theoretically yes… if you want a rigid schema. Think about it. To design a covering index, you need to know all fields your queries will access. Any unforeseen field that is queried will make the index non-covering and query time slow. So, to have all indexes covering, you need to have no unforeseen fields. This smells like a rigid schema.
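To make the rigidity concrete, here is a sketch using hypothetical fields. A conventional covering index only helps while queries stick to the indexed fields:

```javascript
// A conventional (non-clustering) compound index on 'a' and 'b':
db.foo.ensureIndex( { a : 1 , b : 1 } )

// Covered: only 'a' and 'b' are projected (and _id is excluded),
// so the query can be answered from the index alone.
db.foo.find( { a : 42 } , { b : 1 , _id : 0 } )

// Not covered: the new, unforeseen field 'c' forces a lookup of the
// full document for every match — the index no longer covers it.
db.foo.find( { a : 42 } , { b : 1 , c : 1 , _id : 0 } )
```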

This flies in the face of arguably the biggest strength of MongoDB’s document model, the ability to adapt to unforeseen changes in the user’s data model. Many users pick MongoDB because they don’t know what fields their data will have over time, and therefore they don’t know what fields their queries will want to access. So yes, you may be able to make every MongoDB index a covering index, but you may end up sacrificing some of the agility that drew you to MongoDB in the first place.

This makes clustering indexes very appealing in MongoDB, arguably more so than in MySQL. Clustering indexes are agile: they survive changes to the data model, while covering indexes don’t.

Sounds great! Should I make every index clustering?

Well… maybe. It depends. Some users run TokuMX for its wonderful compression and want to save as much space as possible. Because every clustering key stores an additional copy of the entire document, these users probably don’t want clustering keys. Also, despite TokuMX’s wonderful write performance, writes will be a little slower because there is more data to write.

But, if queries are your bottleneck (especially range queries), then clustering keys are tremendously helpful. TokuMX’s compression and write performance can offset the increased space usage and write load to make the tradeoff very appealing.

Why don’t MongoDB or MySQL have clustering indexes?

We don’t know. Users have requested it. Without compression, and given B-trees’ poor write performance, managing clustering indexes is more expensive for them, but they could still implement the feature.

Give me an example of an application that would benefit from a clustering index.

Dwight Merriman, chairman and co-founder of MongoDB Inc., just blogged about a very relevant use case. I encourage you to read the post. The example he provides is:

Suppose we have temperature measurements, for different devices, over time. We anticipate wanting to look up these measurements for a given device and a given time range.

More details he provides on the “measurements” collection:

  • Documents are very small
  • There may be “a lot (e.g. millions) of measurements for one device”

He suggests a solution of using the following index:

 db.measurements.ensureIndex( { device : 1 , when : 1 } )
But he mentions the drawbacks that would come with this index: point queries to the main data store are required to retrieve the data. To get around the drawbacks, he mentions other workarounds that fall on the developer to implement. These workarounds sound hard.

A simple clustering key on { device : 1 , when : 1 } works well here. Queries become covering range queries, so IOPS are minimized and execution is fast.

Additionally, the cache is used efficiently, which addresses another problem raised in that blog post. The range query brings into cache only the data that is returned to the user. With a non-clustering index, the returned documents likely reside all over the disk, and retrieving each document means bringing into memory other data that happens to reside near it. That other data wastes cache space.
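Putting the pieces together for the measurements example — a sketch, with collection and field names taken from the quoted post and hypothetical values for the query:

```javascript
// One clustering index on (device, when): each index entry stores the
// whole measurement document.
db.measurements.ensureIndex( { device : 1 , when : 1 } ,
                             { clustering : true } )

// Fetch all measurements for one device in a time range. This becomes
// a covering range query: one sequential scan of the index returns the
// full documents, with no point lookups into the main data store.
db.measurements.find( { device : 12345 ,
                        when : { $gte : ISODate("2013-07-01") ,
                                 $lt  : ISODate("2013-07-02") } } )
```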

