
Transactions, Request Processing and Convoys, Oh My!


We take a look at how the RavenDB team set up tests to simulate how low-grade hardware handles heavy traffic loads. Read on for the details.


You might have noticed that we have been testing, quite heavily, what RavenDB can do. Part of that testing focused on running on barely adequate hardware at ridiculous loads.

Because we set up the production cluster like drunk monkeys, a single high-load database was able to affect all the other databases. To be fair, I explicitly gave the booze to the monkeys and then looked away while they made a hash of it, because we wanted to see how RavenDB behaves when set up by someone who has little interest in doing things properly and just wants to Get Things Done.

The most interesting part of this experiment was the wide variety of mixed workloads on the system. Some databases are primarily read-heavy, some are used infrequently, some are write-heavy, and some are both write-heavy and indexing-heavy. Oh, and all the nodes in the cluster were set up identically, with each write and index going to all the nodes.

That got really interesting when we started very heavy indexing loads and then pushed a lot of writes. Remember, we intentionally under-provisioned: these machines have 2 cores and 4GB of RAM, and they were doing heavy indexing while processing a lot of reads and writes. The works.

Here is the traffic summary from one of the instances:

[Image: traffic summary]

A wrinkle that I didn’t mention is that this setup has a property we had never tested: the I/O is fast, but the number of tasks waiting for the two cores is high enough that no individual task gets much CPU time. The net effect is that the I/O looks faster than the CPU.

A core concept of RavenDB performance is that we can send work to the disk, and while it runs we complete more operations in memory, then send that batch to the disk. Rinse, repeat, etc.

That wasn’t the case here. By the time we had finished preparing a single operation, the previous disk operation had already completed, so we proceeded one operation at a time. That killed our performance, and it meant that the transaction merger queue would grow and grow.
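The merge-and-flush loop, and the degenerate case that killed it, can be sketched with a toy model. This is an illustration of the idea, not RavenDB's actual code, and all the numbers are made up:

```python
def count_disk_writes(total_ops, arrive_per_flush):
    """While one batch is flushing to disk, up to `arrive_per_flush` new
    operations queue up; the next flush commits that whole backlog as a
    single merged transaction (at least one op per flush)."""
    done, writes, backlog = 0, 0, 1       # the first op triggers the first flush
    while done < total_ops:
        done += backlog                   # flush the whole backlog as one tx
        writes += 1
        backlog = max(1, min(arrive_per_flush, total_ops - done))
    return writes

# Healthy case: 9 ops arrive during each flush, so 1,000 ops need
# only ~112 disk transactions.
merged = count_disk_writes(1000, 9)
# Degenerate case from the article: the disk finishes before any new op
# is ready, so every operation pays for its own flush.
one_by_one = count_disk_writes(1000, 0)
```

The whole benefit of merging comes from work piling up while the disk is busy; once the disk outruns the starved CPU, the backlog is always empty and every operation costs a full flush.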

We fixed things so that we take the load on the system into account when this happens, and we gave the transaction merging thread higher priority than normal request processing or indexing. A write request can’t complete before its transaction has been committed, so obviously we don’t want to process further requests at the expense of the requests currently in progress.
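One way to picture the fix is a work queue where merger tasks always outrank request and indexing tasks when the CPU is contended. The task kinds, names, and priority values below are illustrative, not RavenDB's internals:

```python
import heapq

MERGER, REQUEST, INDEXING = 0, 1, 2     # lower value = runs first

def run_contended(tasks):
    """tasks: list of (kind, name) in arrival order.
    Returns the order the tasks actually execute in."""
    q = []
    for seq, (kind, name) in enumerate(tasks):
        heapq.heappush(q, (kind, seq, name))  # seq keeps FIFO within a kind
    order = []
    while q:
        _, _, name = heapq.heappop(q)
        order.append(name)
    return order

order = run_contended([
    (REQUEST, "read-1"),
    (MERGER, "commit-batch-1"),
    (INDEXING, "index-docs"),
    (MERGER, "commit-batch-2"),
])
# merger work jumps the queue: commits run before reads and indexing
```

The design choice is the one the article states: nothing a write request does matters until its transaction commits, so starving the merger to serve more incoming requests only makes every request slower.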

This problem can only occur when we are competing heavily for CPU time, something that we don’t typically see. We are usually constrained a lot more by network or disk. With enough CPU capacity, there is never an issue of the requests and the transaction merger competing for the same core and interrupting each other, so we didn’t consider this scenario.

Another problem was the assumptions we had made about processing power. Because we tested on higher-end machines, we tested with some ridiculous performance numbers: hundreds of thousands of writes per second, indexing, mixed read/write loads, etc. But on high-end hardware, requests complete fairly quickly, and that led to a particular pattern of resource utilization. Because we reuse buffers between requests, it is typically better to grab a single big buffer and keep using it, rather than having to enlarge it between requests.

If your requests are short, the number of requests you have in flight is small, so you get great locality of reference and don’t have to allocate memory from the OS so often. But if that isn’t the case…

We had a lot of requests in flight, many of them waiting for the transaction merger to complete its work while it fought the incoming requests for CPU time. So we had a lot of in-flight requests, and each had intentionally grabbed a bigger buffer than it actually needed (pre-allocating). You can continue down this line of thought, but I’ll give you a hint: it ends with KABOOM.
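The back-of-the-envelope arithmetic goes something like this (the buffer size and request counts are invented for illustration, not measured values):

```python
BUFFER_BYTES = 1 * 1024 * 1024            # assume each request pre-grabs 1MB

def buffer_memory_gb(in_flight_requests):
    """Memory pinned by request buffers alone, in GB."""
    return in_flight_requests * BUFFER_BYTES / 1024**3

fast_machine = buffer_memory_gb(32)       # short requests, few in flight
starved_machine = buffer_memory_gb(8192)  # requests stuck behind the merger
# 32 requests pin ~0.03GB, which is nothing; 8,192 stalled requests pin
# 8GB of buffers on a machine with 4GB of RAM. KABOOM.
```

The pre-allocation strategy itself wasn't wrong; it was sized for a world where requests finish fast and the in-flight count stays small.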

All in all, I think it was a very profitable experiment. This is going to be very relevant for users on the low end, especially those running Docker instances and the like. But it should also help if you are running production-worthy systems and can benefit from higher utilization of system resources.



Published at DZone with permission of
