We spent some time recently looking into a lot of our old design decisions. Some of them make very little sense today (JSON vs. blittable being a good example), but they made perfect sense at the time and were essential to actually getting the product out.
Some of those design decisions, however, are still something that I very firmly believe in. This series of posts is going to explore those decisions, their background and how they played out in the real world. So, without further ado, let us talk about unbounded result sets.
The design of RavenDB was heavily influenced by my experience as That NHibernate Guy (I got started with NHibernate over a decade ago, if you can believe that), where I saw the same types of errors repeated over and over again. Then I read Release It! and suddenly discovered that I wasn’t alone in fighting those kinds of demons. When I designed RavenDB, I set out explicitly to prevent as many of them as I possibly could.
One of the major issues that I wanted to address was Unbounded Result Sets. Simply put, this is when you have:
SELECT * FROM OrderLines WHERE OrderID = 1555
And you don’t realize that this order has 3 million line items (or, what is worse, that most of your orders have a few thousand line items, so you are generating a lot of load on the database only to throw most of them away).
In order to prevent this type of issue, RavenDB has the notion of mandatory page sizes.
- On the client side, if you don’t specify a limit, we’ll implicitly add one (128 by default).
- On the server side, there is a database-wide maximum page size (1024 by default). The server will trim any larger requested page size down to that maximum.
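The interplay between the two rules can be sketched like this (a plain Python sketch with hypothetical names and the default values from the post; the real RavenDB implementation obviously differs):

```python
CLIENT_DEFAULT_PAGE_SIZE = 128   # implicit limit the client adds when you don't specify one
SERVER_MAX_PAGE_SIZE = 1024      # database-wide cap on the server (configurable)

def effective_page_size(requested=None):
    """Return the page size the server will actually use.

    If the client did not specify a limit, the client library fills in
    its default; the server then trims anything above its own maximum.
    """
    if requested is None:
        requested = CLIENT_DEFAULT_PAGE_SIZE
    return min(requested, SERVER_MAX_PAGE_SIZE)

print(effective_page_size())      # 128  (implicit client default)
print(effective_page_size(500))   # 500  (within the server cap)
print(effective_page_size(5000))  # 1024 (trimmed to the server max)
```

The point is that neither side can accidentally issue an unbounded request: a missing limit becomes a small one, and an absurd limit becomes a sane one.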
I think that this is one of the more controversial decisions in RavenDB design, and one that got a lot of heated discussion. But I still think that this is a good idea because I have seen what happens when you don’t do that.
The arguments were mostly along the lines of “RavenDB should trust developers to know what they are doing.” A particularly irate guy once called me while I was out shopping to complain that I had broken the sacred contract of Linq, in which “queries should return everything by default, even if that means 10 billion results.” I pointed out that this is actually configurable, and that he could set the default to any size he wanted, but apparently it is supposed to be a “shoot my own foot first, then think” kind of deal.
Even though I still think that this is a really good idea, we have added some features over the years to make it easy for people to access the entire dataset when they need it. Streaming has been around since 2.5 or so, giving you a dedicated API to stream unbounded results. Streams were built to make it efficient to process large sets of data, and they allow both client and server to work in parallel, instead of batching huge responses on the server and then consuming ridiculous amounts of memory on the client before you ever see the full result set. Instead, you get each result as soon as it arrives from the server, process it, and pass it along.
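The batching vs. streaming distinction can be illustrated with a small generator-based sketch (plain Python standing in for the concept, not the actual RavenDB API; `fetch_page` is a hypothetical data source):

```python
def query_all_batched(fetch_page):
    """Batching: materialize every result before returning anything.

    Memory usage grows with the full result set before the caller
    can process even the first item.
    """
    results, start = [], 0
    while True:
        page = fetch_page(start, 1024)
        if not page:
            return results
        results.extend(page)
        start += len(page)

def stream_results(fetch_page):
    """Streaming: yield each result as soon as it arrives, so the
    consumer can process item N while the producer fetches item N+1."""
    start = 0
    while True:
        page = fetch_page(start, 1024)
        if not page:
            return
        yield from page
        start += len(page)

# A fake data source standing in for the server: 3,000 "order lines".
def fake_fetch(start, size):
    return list(range(3000)[start:start + size])

# The stream yields the same results without ever holding them all at once.
assert list(stream_results(fake_fetch)) == query_all_batched(fake_fetch)
```

The streaming version never holds more than one page in memory at a time, which is the whole point: the cost is proportional to the page size, not to the size of the result set.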
In 4.0, we are going to change the behavior of the paging limits as follows:
- If you don’t specify a limit, we’ll supply a limit clause of 25 items. If there are more than 25 items, we’ll throw an exception (unless you asked otherwise in the conventions).
- If you supply a limit explicitly, it will work as expected and page through the data.
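Here is a sketch of that 4.0 behavior in plain Python (hypothetical names; `throw_on_unbounded` stands in for the convention mentioned above, and this is not the real client API):

```python
DEFAULT_LIMIT = 25  # new implicit limit in 4.0

class UnboundedResultSetError(Exception):
    """Raised when an unlimited query matches more than the implicit limit."""

def run_query(all_results, limit=None, throw_on_unbounded=True):
    """Sketch of the 4.0 paging behavior.

    - Explicit limit: plain paging, no surprises.
    - No limit: fetch up to DEFAULT_LIMIT + 1 items; if more exist,
      raise (unless the conventions say otherwise).
    """
    if limit is not None:
        return all_results[:limit]
    page = all_results[:DEFAULT_LIMIT + 1]
    if len(page) > DEFAULT_LIMIT and throw_on_unbounded:
        raise UnboundedResultSetError(
            f"Query matched more than {DEFAULT_LIMIT} results; "
            "specify a limit or change the convention.")
    return page[:DEFAULT_LIMIT]

# An explicit limit always works; a missing one blows up loudly
# instead of silently dragging thousands of documents over the wire.
big = list(range(1000))
assert run_query(big, limit=10) == list(range(10))
```

Note that the check fetches one item past the limit: that extra item is how the server can tell the difference between "exactly 25 results" (fine) and "more than 25" (an error).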
The idea is to reduce the surprise for users and to surface these problems early, while they still have the context to deal with them. Another thing we’ll do is make sure the operations folks can change this setting as well, likely via an environment variable or something similar. If you need to modify the conventions on the fly, it is usually because deploying a new version is hard and immediate action is needed.
In this manner, we can help users avoid expensive requests to the server, and they can be explicit about what they actually need.