In my last post on the topic, I discussed physically separating documents of different collections. This post is about the same concept, applied at a much higher level. In RavenDB, along with the actual indexing data, we also need to keep track of quite a few details: what we last indexed, what our current state is, any errors that happened, which documents an index references, etc. For map/reduce indexes, we have quite a bit more data to work with: all the intermediate results of the map/reduce process, along with bookkeeping information on how to efficiently reduce additional values.
All of that information is stored in the same set of files as the documents themselves. As far as the user is concerned, this mostly matters when an index needs to be deleted. On large databases, deleting a big index could take a while, which was an operational issue. In RavenDB 3.0 we changed things so index deletion would be async, which improved matters significantly. But on large databases with many indexes, that still caused problems.
Because all the indexes shared the same underlying storage, the number of values we had to track was high, proportional to the number of indexes and the number of documents they indexed. That meant that in a particular database with a hundred million documents and three map/reduce indexes, we had to keep track of over half a billion entries. B+Trees are really amazing creatures, but one of their downsides is that once they get to a certain size, they slow down, as the cost of traversing the tree becomes very high.
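To put rough numbers on that claim, here is my own back-of-the-envelope arithmetic (the per-entry breakdown is a guess for illustration, not the actual RavenDB accounting):

```python
docs = 100_000_000
mapreduce_indexes = 3

# One tracking entry per (document, index) pair for the map phase:
map_entries = docs * mapreduce_indexes  # 300,000,000

# Map/reduce also keeps intermediate reduce results and bookkeeping;
# even one extra entry per document per index (a conservative guess)
# pushes the shared structure well past half a billion entries.
reduce_entries = docs * mapreduce_indexes
total = map_entries + reduce_entries

print(f"{total:,}")  # 600,000,000
```

And all of those entries lived in the same set of B+Trees, so every index paid the traversal cost of the combined size.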
In relational terms, we put all the indexing data into a single table and had an IndexId column to distinguish between the different records. And once the table got big enough, we had issues.
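To make the analogy concrete, here is a toy sketch, with plain Python dictionaries standing in for tables (the keys and values are made up):

```python
# Before: one shared "table", keyed by (index_id, doc_key).
# Entries from every index interleave in the same structure,
# so deleting index 2 means finding and removing its rows one by one.
shared = {
    (1, "users/1"): "etag-a",
    (1, "users/2"): "etag-b",
    (2, "users/1"): "etag-c",
}
shared = {k: v for k, v in shared.items() if k[0] != 2}

# After: one structure per index. Deleting an index is just
# dropping its structure wholesale, regardless of how big it is.
per_index = {
    1: {"users/1": "etag-a", "users/2": "etag-b"},
    2: {"users/1": "etag-c"},
}
del per_index[2]
```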
One of the design decisions we made in the run-up to RavenDB 4.0 was to remove multi-threaded behavior inside Voron. This raised an interesting problem, since everything was in the same Voron storage: we wouldn't be able to index and accept new documents at the same time (I'll have another post about this design decision).
The single-threaded nature of Voron and the problems with index deletion have led us to an interesting decision. A RavenDB database isn't actually composed of a single Voron storage. It is composed of multiple Voron storages, each operating independently of the others.
The first one, obviously, is for the documents. But each of the indexes now has its own Voron storage. This means that they are totally independent from one another, which leads to a few interesting implications:
- Deleting an index is as simple as shutting down the indexing for this index and then deleting the Voron directory from the file system.
- Each index has its own independent data structures, so having multiple big indexes isn’t going to cause us to pay the price of all of them together.
- Because each index has a dedicated thread, we aren’t going to see any complex coordination between multiple actors needing to use the same Voron storage.
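The first implication can be sketched in a few lines; the directory layout and file names here are illustrative, not RavenDB's actual ones:

```python
import shutil
import tempfile
from pathlib import Path

# Hypothetical on-disk layout: the document storage plus one
# Voron directory per index.
db_root = Path(tempfile.mkdtemp())
(db_root / "Documents").mkdir()
for name in ("Orders_ByTotal", "Users_ByName"):
    index_dir = db_root / "Indexes" / name
    index_dir.mkdir(parents=True)
    (index_dir / "data.voron").write_bytes(b"...")

def delete_index(root: Path, name: str) -> None:
    # Shut down the index's dedicated thread first (elided here),
    # then drop the whole directory -- no per-entry cleanup and
    # no coordination with other indexes or with the documents.
    shutil.rmtree(root / "Indexes" / name)

delete_index(db_root, "Orders_ByTotal")
```

The deletion cost no longer depends on how many entries the index accumulated, only on how fast the file system can drop the directory.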
This is important because, in RavenDB 4.0, we are also storing the actual Lucene index inside the Voron storage, so the amount of work we now require Voron to handle is much higher. By splitting it along index lines, we have saved ourselves a whole bunch of headaches in managing them properly.
As a reminder, we have the RavenDB Conference in Texas shortly, which would be an excellent opportunity to discuss RavenDB 4.0 and see what we already have done.