Rob’s Sprint: Query optimizer jumped a grade
RavenDB's query optimizer is pretty smart: it knows how to find the appropriate index for your queries, and will even create a new index to match your query if one doesn't exist. But that was the limit of its abilities. A human could still go into the database and say, look at those:
Those all operate on Posts, and you should be able to merge them all into a single index. Reducing the number of indexes is a good thing, because it reduces the amount of IO on the system, which is typically our limiting factor.
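The merge itself can be pictured very simply. The following is an illustrative sketch, not RavenDB's actual implementation: each auto-index is modeled as the set of document properties it covers, and indexes over the same collection are folded into one wider index. The index names and fields are hypothetical.

```python
def merge_indexes(indexes):
    """Fold (collection, fields) pairs into one field set per collection."""
    merged = {}
    for collection, fields in indexes:
        merged.setdefault(collection, set()).update(fields)
    return merged

# Three hypothetical auto-indexes that all operate on Posts:
auto_indexes = [
    ("Posts", {"Title"}),
    ("Posts", {"Title", "Author"}),
    ("Posts", {"Tags"}),
]

merged = merge_indexes(auto_indexes)
# A single merged index over Posts can now serve every query
# the three original indexes served.
```

Because the merged index is a superset of the originals, any query that one of the old indexes could answer is also answerable by the merged one.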
Now, there was no real reason why we couldn't teach the query optimizer to be smart enough that when it creates a new index, it uses all of the properties that have previously been indexed.
However, until now doing so would have made no difference to us, because we didn't have a way to stop an index. With the new index idling feature, the query optimizer can create a new merged index, and after a while the database will simply mark the now-redundant indexes as idle.
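The idling decision can be sketched as a simple policy: an index that has not been queried for some threshold period is marked idle and stops consuming indexing IO. This is an assumption-laden illustration; the threshold value and index names below are made up, not RavenDB's actual configuration.

```python
IDLE_THRESHOLD = 1800.0  # seconds without a query; hypothetical value

def classify_indexes(indexes, now):
    """Split (name, last_queried_time) pairs into active and idle lists."""
    active, idle = [], []
    for name, last_queried in indexes:
        if now - last_queried > IDLE_THRESHOLD:
            idle.append(name)    # unused for too long: stop indexing it
        else:
            active.append(name)  # still being queried: keep it hot
    return active, idle

# The merged index keeps getting queries; the old narrow index does not.
active, idle = classify_indexes(
    [("Posts/Merged", 1900.0), ("Posts/ByTitle", 0.0)],
    now=2000.0,
)
```

Once the old index lands in the idle list, it no longer adds IO on writes, which is exactly what makes the merge worthwhile.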
Almost. There is still another issue that we have to resolve: what happens when we have a big database and we introduce a new (and wider) index? By default, all matching queries would hit the new index, not the previously existing ones. That is great, except that the new index is stale, and might remain stale for a few minutes. During that time, we have a perfectly serviceable index that is just sitting there.
The query optimizer can now also take the staleness level of an index into account when selecting it, so from the point of view of running queries there is no interruption. The new index is introduced, works its way through all the documents, and then takes over as the serving index for all queries. The old index withers away and dies.
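The selection rule described above can be sketched as: among the indexes that cover the query's fields, prefer a non-stale one; only once the new, wider index has caught up does it win. Again, this is a hedged illustration with invented index names, not the optimizer's real code.

```python
def pick_index(candidates, wanted_fields):
    """candidates: list of (name, fields, is_stale).

    Prefer non-stale indexes that cover the query; among those,
    the widest index wins. Returns the chosen index name, or None.
    """
    matches = [c for c in candidates if wanted_fields <= c[1]]
    non_stale = [c for c in matches if not c[2]]
    pool = non_stale or matches  # fall back to stale only if we must
    if not pool:
        return None
    return max(pool, key=lambda c: len(c[1]))[0]

chosen = pick_index(
    [
        ("Posts/ByTitle", {"Title"}, False),        # existing, up to date
        ("Posts/Merged", {"Title", "Tags"}, True),  # new and wider, still stale
    ],
    wanted_fields={"Title"},
)
# While the merged index is stale, queries keep hitting the old index;
# once it is no longer stale, the same rule selects the wider index.
```

Flipping the merged index's `is_stale` flag to `False` in the example makes `pick_index` return it instead, which mirrors the takeover the post describes.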
Published at DZone with permission of Oren Eini, DZone MVB. See the original article here.