How do they DO this?
As mentioned, we are doing some more performance work in Voron, and we got some really surprising results there. Voron is writing at a really good rate (better than anything else we tested against), just not a good enough rate.
To be fair, if we hadn't seen the Esent benchmark with close to 750K writes / second, we might have been happy, but obviously it is possible to be much faster than we are now. So I decided to figure it out.
To start with, I ran Voron through a profiler and verified that the actual cost there was purely in calling FlushFileBuffers (the Windows equivalent of fsync). In fact, in our tests, about 75% of the time was spent just calling this function. The test in question does 1 million inserts, using 10,000 transactions of 100 items each. But Esent can basically do so many it doesn't even count. So how do they do that?
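As a rough illustration of the test shape (not Voron's actual code), here is a minimal Python sketch of "N transactions, each committing a batch and then forcing a flush to disk". os.fsync stands in for the Windows FlushFileBuffers call, and the file name and record format are invented for the example:

```python
import os
import tempfile

def run_benchmark(num_transactions, items_per_tx):
    """Hypothetical sketch of the test shape: each transaction appends a
    batch of records, then the commit forces the data to stable storage
    (os.fsync here; FlushFileBuffers on Windows)."""
    syncs = 0
    path = os.path.join(tempfile.mkdtemp(), "journal.bin")
    with open(path, "wb") as f:
        for tx in range(num_transactions):
            for item in range(items_per_tx):
                f.write(b"record %d/%d\n" % (tx, item))
            f.flush()                 # push user-space buffers to the OS
            os.fsync(f.fileno())      # durability: wait for the disk
            syncs += 1
    return path, syncs

# Scaled-down run: 10 transactions of 100 items instead of 10,000 x 100.
path, syncs = run_benchmark(10, 100)
```

With one synchronous flush per commit, the flush count grows with the transaction count, which is why the flush call can come to dominate the profile.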
I'm going to dedicate this post to discussing the process of finding it out, then spend the next one or two discussing the implications. At this level, we can't really use something like a profiler to figure out what is wrong; we need a more dedicated tool. In this case, we are talking about Process Monitor, which gives you the ability to see what system calls are being made on your system.
Here is what it looks like when we are committing a transaction with Voron:
And here is what it looks like when we are committing a transaction with Esent:
I was curious to test SQL Server too, and here is what it looks like when SQL Server is committing a transaction:
And if I'm already doing this, here is the SQL CE transaction commit:
No, this isn't a mistake. It didn't do anything. By default, SQL CE only flushes to memory. You have to force it to flush to disk by using tx.Commit(CommitMode.Immediate); if you do that, the transaction commit looks like this:
Not a mistake, you still get nothing. It appears that even with Immediate, it is only writing to disk when it feels like it. At a guess, it is using memory mapped files and doing FlushViewOfFile instead of calling FlushFileBuffers, but I am not really sure.
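To illustrate that guess, here is a hedged Python sketch using the POSIX counterparts of the two Windows calls: mmap.flush() (msync, roughly analogous to FlushViewOfFile) flushes only the mapped view, while os.fsync (roughly analogous to FlushFileBuffers) flushes the whole file handle. The file name and data are invented for the example:

```python
import mmap
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "data.bin")
size = 4096
with open(path, "wb") as f:
    f.truncate(size)                 # the map needs a pre-sized file

with open(path, "r+b") as f:
    m = mmap.mmap(f.fileno(), size)
    m[:6] = b"commit"                # "write" a record through the map
    m.flush()                        # msync: flush the mapped view only
    m.close()
    os.fsync(f.fileno())             # fsync: flush the file handle itself

with open(path, "rb") as f:
    data = f.read(6)
```

The key difference is scope: the view flush covers only the mapped pages, while the handle flush covers the file, metadata included, which is what full durability requires.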
Since I ran the benchmarks without Immediate, I decided to try running the SQL CE tests there again. Here are the numbers:
This brings to mind an interesting question: what the hell is it doing that takes so long if it doesn't even flush to disk?
Anyway, let us look at the SQLite version:
And… I don't really know how to comment on that, to tell you the truth. I can't figure out what it is doing, and I probably don't really want to.
Now, let us look at LMDB:
I am not really sure how to explain the amount of work done here. I think that is because it uses manual file I/O. When I use the writemap option, I get:
Which is more reasonable and expected.
I would have shown LevelDB as well, but I can't run it on Windows.
I think that this is enough for now. I'll discuss the implications of the difference in behavior in my next post. In the meantime, I would love to know what you think about this.
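A rough sketch of the difference the writemap option makes, using Python's stdlib rather than LMDB itself (page size, file layout, and helper names are all invented for illustration): without writemap, each dirty page goes out through an explicit write system call, while with writemap the pages are modified directly through the memory map and a single flush covers them all.

```python
import mmap
import os
import tempfile

PAGE = 4096

path = os.path.join(tempfile.mkdtemp(), "db.bin")
with open(path, "wb") as f:
    f.truncate(4 * PAGE)             # pre-sized "database" file

# Without writemap (manual file I/O): one pwrite per dirty page.
def commit_with_writes(f, dirty_pages):
    for page_no, payload in dirty_pages:
        os.pwrite(f.fileno(), payload, page_no * PAGE)
    os.fsync(f.fileno())             # then a flush to end the commit

# With writemap: store directly into the map, flush once.
def commit_with_writemap(m, dirty_pages):
    for page_no, payload in dirty_pages:
        m[page_no * PAGE:page_no * PAGE + len(payload)] = payload
    m.flush()                        # single msync covers the dirty pages

with open(path, "r+b") as f:
    commit_with_writes(f, [(0, b"page0")])
    m = mmap.mmap(f.fileno(), 4 * PAGE)
    commit_with_writemap(m, [(1, b"page1")])
    m.close()

with open(path, "rb") as f:
    first = f.read(5)
    f.seek(PAGE)
    second = f.read(5)
```

The visible effect in a system-call trace is fewer explicit write calls per commit in the writemap case, which is consistent with the quieter trace above.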
Published at DZone with permission of Oren Eini, DZone MVB. See the original article here.