Raven's Storage: Memtables are Tough
Memtables are conceptually a very simple thing: you have the list of values that you were provided, as well as a skip list for searches. In practice, though, several requirements make them tough:
- Memtables are meant to be used concurrently.
- We are going to have to hold all of our values in memory, and I am really not sure that I want to be doing that.
- When we switch between memtables (and under heavy write load, we might be doing that a lot), I want to immediately reclaim the used memory, not wait for the GC to kick in.
The first thing to do was to port the actual skip list from the LevelDB codebase. That isn't really hard, but I had to make sure that assumptions made for the C++ memory model are valid for the CLR memory model. In particular, .NET doesn't have AtomicPointer, but Volatile.Read / Volatile.Write seem to be a good replacement. I decided to port the one from LevelDB because I don't know what assumptions other skip list implementations have made. That was the first step in creating a memtable. The second was to decide where to actually store the data.
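As a rough illustration of that porting concern, here is a minimal C# sketch of a skip list node whose forward pointers are published with Volatile.Write and read with Volatile.Read, standing in for LevelDB's AtomicPointer. The Node shape and names are my own assumptions, not the actual RavenDB code:

```csharp
using System;
using System.Threading;

// Sketch only: a skip list node with one forward pointer per level.
public class Node
{
    public readonly byte[] Key;
    private readonly Node[] _next;

    public Node(byte[] key, int height)
    {
        Key = key;
        _next = new Node[height];
    }

    // Readers may run concurrently with a single writer; the volatile read
    // guarantees that once we observe the pointer, we also observe the
    // fully constructed node it points at.
    public Node Next(int level)
    {
        return Volatile.Read(ref _next[level]);
    }

    // The volatile write publishes the node: all writes that initialized it
    // happen-before any reader that sees the new pointer.
    public void SetNext(int level, Node node)
    {
        Volatile.Write(ref _next[level], node);
    }
}
```

This mirrors the pattern LevelDB's C++ skip list uses, where a release-store on the pointer is what makes lock-free readers safe.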
Here is the most important method for that part:

public void Add(ulong seq, ItemType type, Slice key, Stream val)
The problem is that we cannot just hold on to those references; we have to copy the values into memory that we control. Why is that? Because the user is free to change the stream contents, or the slice's underlying array, as soon as we return from this method. By the same token, we can't just batch this stuff into large managed buffers, because of the LOH (any buffer of 85,000 bytes or more lands on the Large Object Heap, which is collected rarely and expensively). The way this is handled in LevelDB never made much sense to me, so I am going to drastically change that behavior.
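To make the aliasing hazard concrete, here is a hedged sketch — the Slice type below is a stand-in I made up, not the real one — showing why the memtable must defensively copy the caller's buffer:

```csharp
using System;

// Stand-in Slice: wraps a byte array the *caller* owns.
public struct Slice
{
    public readonly byte[] Array;

    public Slice(byte[] array) { Array = array; }

    // Defensive copy: the result is a buffer the memtable owns outright,
    // so later caller mutations cannot corrupt stored keys.
    public byte[] Clone()
    {
        var copy = new byte[Array.Length];
        Buffer.BlockCopy(Array, 0, copy, 0, Array.Length);
        return copy;
    }
}

// Usage: the caller is free to reuse its buffer after Add() returns.
//   var buffer = new byte[] { 1, 2, 3 };
//   var stored = new Slice(buffer).Clone();
//   buffer[0] = 99;   // caller reuses its array...
//   // ...but stored[0] is still 1, because we copied.
```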
In my implementation, I decided to do the following:
- Copy the keys to our own buffer, and keep them inside the skip list. This is what we will use for actually doing searches.
- Change the skip list to keep track of the values, as well as the keys.
- Keep the actual values in unmanaged memory, instead of managed memory. That avoids the whole LOH issue, and gives me immediate control over when the memory is released.
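A minimal sketch of that last bullet, under assumed names (this is not the shipped implementation): each value is copied into a block obtained from Marshal.AllocHGlobal, the skip list would record only an (offset, length) pair next to the key, and Dispose frees the memory deterministically:

```csharp
using System;
using System.IO;
using System.Runtime.InteropServices;

// Sketch: copies values into unmanaged memory, bypassing the managed heap
// (and therefore the LOH) entirely.
public sealed class UnmanagedValueStore : IDisposable
{
    private IntPtr _buffer;
    private readonly int _capacity;
    private int _used;

    public UnmanagedValueStore(int capacity)
    {
        _capacity = capacity;
        _buffer = Marshal.AllocHGlobal(capacity);
    }

    // Copies the stream's bytes into memory we own; returns the offset so
    // the skip list can store (offset, length) alongside the key.
    public int Write(Stream value)
    {
        var length = (int)value.Length;
        if (_used + length > _capacity)
            throw new InvalidOperationException("Store is full; switch memtables");

        var temp = new byte[length];
        var read = 0;
        while (read < length)
            read += value.Read(temp, read, length - read);
        Marshal.Copy(temp, 0, _buffer + _used, length);

        var offset = _used;
        _used += length;
        return offset;
    }

    public byte[] Read(int offset, int length)
    {
        var result = new byte[length];
        Marshal.Copy(_buffer + offset, result, 0, length);
        return result;
    }

    // Deterministic release: no waiting for the GC to kick in.
    public void Dispose()
    {
        if (_buffer != IntPtr.Zero)
        {
            Marshal.FreeHGlobal(_buffer);
            _buffer = IntPtr.Zero;
        }
    }
}
```

The price of this design is exactly the one mentioned below: nothing frees this memory except our own Dispose call, so a leaked store is leaked forever.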
This took some careful coding, because I want to explicitly give up on the GC for this. That means that I need to make damn sure that I don't have bugs that would generate memory leaks.
Each memtable allocates 4 MB of unmanaged memory and writes the values to it. Note that we can go over the 4 MB limit (for example, by writing a very large value, or a value whose length exceeds the remaining space). At that point, we allocate more unmanaged memory, and hand the memtable over to compaction.
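That switch-and-compact policy can be sketched like this — the arena shape, BlockSize, and ShouldCompact are my assumptions, not the actual code:

```csharp
using System;
using System.Collections.Generic;
using System.Runtime.InteropServices;

// Sketch: a memtable arena that starts with one 4 MB unmanaged block; a
// write that does not fit allocates one more block and marks the memtable
// as ready to be handed over to compaction.
public sealed class MemTableArena : IDisposable
{
    private const int BlockSize = 4 * 1024 * 1024; // 4 MB per block
    private readonly List<IntPtr> _blocks = new List<IntPtr>();
    private int _usedInCurrentBlock;

    public bool ShouldCompact { get; private set; }

    public MemTableArena()
    {
        _blocks.Add(Marshal.AllocHGlobal(BlockSize));
    }

    // Reserves space for a value. Overflowing the current block allocates
    // another (sized up for oversized values) and signals compaction.
    public IntPtr Allocate(int size)
    {
        if (_usedInCurrentBlock + size > BlockSize)
        {
            _blocks.Add(Marshal.AllocHGlobal(Math.Max(size, BlockSize)));
            _usedInCurrentBlock = 0;
            ShouldCompact = true; // time to switch memtables
        }
        var ptr = _blocks[_blocks.Count - 1] + _usedInCurrentBlock;
        _usedInCurrentBlock += size;
        return ptr;
    }

    // Frees every block immediately, without waiting for the GC.
    public void Dispose()
    {
        foreach (var block in _blocks)
            Marshal.FreeHGlobal(block);
        _blocks.Clear();
    }
}
```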
The whole thing is pretty neat, even if I say so myself.
Published at DZone with permission of Oren Eini, DZone MVB. See the original article here.