Excerpts from the RavenDB Performance team report: Optimizing Memory Compare/Copy Costs
Note: this post was written by Federico. Where I had notes or things to extend, I explicitly marked them as such.
TL;DR: optimizing at this level is really hard. To achieve gains of 20%+ for compare, and from 6% up to 200% in copy (depending on the workload), we will need to dig very deep, down to the IL level.
Another area we looked deeply into is how we move and access memory. This type of optimization work is especially relevant if you are using Voron to handle big workloads. With small databases the improvements can be felt, but where they shine is when dealing with multi-gigabyte databases or high-throughput key/value retrieve and put workloads (did anyone say bulk inserts?).
Using freedb, as in this previous post, we built an experiment we could use to pinpoint the pain points, making sure we could also reproduce it every single time (no indexes, no extra calls around). Under the microscope, two pain points were evident when dealing with memory: comparing and moving memory around.
We usually compare memory straight from the memory-mapped unmanaged memory when looking for a particular document in Voron trees, and we copy from and to Voron pages when storing and retrieving documents. These are very core operations for any storage engine; Voron is no special case. Before we started the optimization effort we already had a pretty optimized routine.
What this method does is:
- If the memory blocks have zero size, there is no doubt they are equal.
- If the memory blocks are bigger than the size of a word (32 bits), we do a pre-compare over the aligned memory blocks (for performance) in order to rule out all the equal blocks quickly.
- As we cannot use words to calculate the output (handling the endianness would cost us), we do a byte-by-byte comparison for the final check.
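The three steps above can be sketched roughly as follows. This is not Voron's actual routine (which is C# over unmanaged pointers, with alignment handling we are glossing over here); it is just a minimal C illustration of the strategy: a word-sized pre-pass that disposes of equal blocks cheaply, then a byte-by-byte pass to determine the ordering once a mismatch is found.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Sketch of the compare strategy described above: zero-size fast path,
 * word-sized pre-compare for equality, byte-by-byte final verdict. */
static int compare_memory(const void *p1, const void *p2, size_t size)
{
    const unsigned char *a = p1;
    const unsigned char *b = p2;

    /* Step 1: zero-sized blocks are trivially equal. */
    if (size == 0)
        return 0;

    /* Step 2: word-sized pre-compare to rule out the equal blocks quickly.
     * This only tells us *whether* a word differs, not the ordering. */
    while (size >= sizeof(uint32_t)) {
        uint32_t wa, wb;
        memcpy(&wa, a, sizeof(wa));  /* memcpy sidesteps unaligned loads */
        memcpy(&wb, b, sizeof(wb));
        if (wa != wb)
            break;                   /* mismatch: fall through to step 3 */
        a += sizeof(uint32_t);
        b += sizeof(uint32_t);
        size -= sizeof(uint32_t);
    }

    /* Step 3: byte-by-byte comparison for the final check, because on a
     * little-endian machine comparing whole words would order the bytes
     * backwards relative to lexicographic byte order. */
    while (size-- > 0) {
        int diff = *a++ - *b++;
        if (diff != 0)
            return diff;
    }
    return 0;
}
```

Note that when the word pre-pass hits a mismatch, it does not narrow down which byte differs; it simply hands the remaining (still word-sized or larger) tail to the byte loop, which produces the correctly signed result.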
For our insert workload we averaged roughly 97.5 nanoseconds per memory compare. To put that in context: if each assembler instruction could be executed in exactly 1 cycle (which is usually not true), then on a 3 GHz processor 3 instructions fit in a nanosecond, and our average instruction budget per compare is roughly 290 instructions. Remember this idealized processor; we will use this same comparison later for more complex analysis.
Memory compares can be of different sizes, which is why controlling the environment is very important for this type of optimization work.
To deal with that, we were using many tricks from the optimization book: ensuring that memory alignment is optimal, batching compares with bigger primitive sizes, and pointer arithmetic. At first sight this is the kind of method you wouldn't optimize at all; it is pretty damn tight.
Ayende’s note: we have already done an optimization step on memory comparisons. We initially just shelled out to the native memcmp method, but the cost of doing a P/Invoke call ended up being noticeable, so we wrote the previously optimized version (and went through several rounds of that) to alleviate that cost.
However, we took on the challenge because the payoff can be huge. For a very small bulk insert of 50,000 documents into an empty database, we are talking about something in the ballpark of 5 million compares (yes, you read that right). Even if we only manage to squeeze 1% off, the sheer volume of calls will make it worthwhile. To achieve that we had to do the unthinkable: we had to dig into the MSIL that method was generating. Armed with ILSpy, we found we might have a way to shave off some inefficiencies.
Here is what this looks like when we start actually putting the analysis into action. You can see the method code (after decompilation, so we can be closer to the IL) as well as the issues that were discovered in the process.
Because of the size of the method, the fastest way was to resort to a C# decompile, even though we then matched it against the generated IL. The trick of using the C# decompiled version requires a decompiler that is not too smart when dealing with the code. If the decompiler had understood the original code's intention and acted upon it, we would never have spotted some of the optimizations at this level. For example, the last loop decompiled with JetBrains dotPeek would look like this:
Always keep around an old version of a decompiler, just in case you may need it.
Ayende’s note: in most cases, you can select the level of detail a decompiler gives you. With Reflector, for example, you can select how deeply it will decompile things, but even so, doing a stupid decompilation can be very beneficial by showing us what is actually going on.
Understanding where the inefficiencies may be is one thing; being able to prove them is another matter. We will tackle all of them in future posts.
We will also leave the memcpy analysis for later, because it builds on the optimizations used in memcmp and also requires a deep analysis of the Buffer.Memcpy method already available in the .NET Framework (for internal use, of course).
If what we did to the poor etags was evil, you are now arriving at the gates of the underworld.
Ayende’s note: this is a pretty complex topic, and it goes on for quite a while. In order to maintain interest, and to avoid having people get lost in the details, I broke it apart into several posts. In the meantime, given the details in this post, how would you suggest improving this?
Published at DZone with permission of Oren Eini, DZone MVB. See the original article here.