
Reviewing LevelDB, Part III: WriteBatch Isn't What You Think It Is


One of the key external components of leveldb is the idea of WriteBatch. It allows you to batch multiple operations into a single atomic write.

It looks like this, from an API point of view:

    leveldb::WriteBatch batch;
    batch.Delete(key1);
    batch.Put(key2, value);
    s = db->Write(leveldb::WriteOptions(), &batch);

As we learned in the previous post, WriteBatch is how leveldb handles all writes. Internally, any call to Put or Delete is translated into a single WriteBatch; there is also some batching across multiple WriteBatch instances, but that is beside the point right now.

I dove into the code for WriteBatch, and immediately I realized that this isn’t really what I bargained for. In my mind, WriteBatch was supposed to be something like this:

        public class WriteBatch
        {
           List<Operation> Operations;
        }

This would hold the in-memory operations until they get written down to disk, or something along those lines.

Instead, it appears that leveldb took quite a different route. The entire data is stored in the following format:

        // WriteBatch::rep_ :=
        //    sequence: fixed64
        //    count: fixed32
        //    data: record[count]
        // record :=
        //    kTypeValue varstring varstring         |
        //    kTypeDeletion varstring
        // varstring :=
        //    len: varint32
        //    data: uint8[len]

This is the in-memory format, mind you. So we are already storing all the operations in a single serialized buffer. I am not really sure why this is the case, to be honest.

WriteBatch is pretty much a write-only data structure, with one major exception:

        // Support for iterating over the contents of a batch.
        class Handler {
         public:
          virtual ~Handler();
          virtual void Put(const Slice& key, const Slice& value) = 0;
          virtual void Delete(const Slice& key) = 0;
        };

        Status Iterate(Handler* handler) const;

You can iterate over the batch. The problem is that we now have this implementation for Iterate:

        Status WriteBatch::Iterate(Handler* handler) const {
          Slice input(rep_);
          if (input.size() < kHeader) {
            return Status::Corruption("malformed WriteBatch (too small)");
          }

          input.remove_prefix(kHeader);
          Slice key, value;
          int found = 0;
          while (!input.empty()) {
            found++;
            char tag = input[0];
            input.remove_prefix(1);
            switch (tag) {
              case kTypeValue:
                if (GetLengthPrefixedSlice(&input, &key) &&
                    GetLengthPrefixedSlice(&input, &value)) {
                  handler->Put(key, value);
                } else {
                  return Status::Corruption("bad WriteBatch Put");
                }
                break;
              case kTypeDeletion:
                if (GetLengthPrefixedSlice(&input, &key)) {
                  handler->Delete(key);
                } else {
                  return Status::Corruption("bad WriteBatch Delete");
                }
                break;
              default:
                return Status::Corruption("unknown WriteBatch tag");
            }
          }
          if (found != WriteBatchInternal::Count(this)) {
            return Status::Corruption("WriteBatch has wrong count");
          } else {
            return Status::OK();
          }
        }

So we write it directly to a buffer, then read from that buffer. The interesting bit is that the actual writing to leveldb itself is done in a similar way, see:

    class MemTableInserter : public WriteBatch::Handler {
     public:
      SequenceNumber sequence_;
      MemTable* mem_;

      virtual void Put(const Slice& key, const Slice& value) {
        mem_->Add(sequence_, kTypeValue, key, value);
        sequence_++;
      }
      virtual void Delete(const Slice& key) {
        mem_->Add(sequence_, kTypeDeletion, key, Slice());
        sequence_++;
      }
    };

    Status WriteBatchInternal::InsertInto(const WriteBatch* b,
                                          MemTable* memtable) {
      MemTableInserter inserter;
      inserter.sequence_ = WriteBatchInternal::Sequence(b);
      inserter.mem_ = memtable;
      return b->Iterate(&inserter);
    }
As best I can figure it so far, we have the following steps:

  • WriteBatch.Put / WriteBatch.Delete gets called, and the values we were sent are copied into our buffer.
  • We actually save the WriteBatch, at which point we unpack the values out of the buffer and into the memtable.

It took me a while to figure it out, but I think that I finally got it. The reason this is the case is that leveldb is a C++ application. As such, memory management is something that it needs to worry about explicitly.

In particular, you can't just rely on the memory you were passed being kept around; the user may release that memory right after calling Put. This means, in turn, that you must copy it into memory that leveldb allocated, so leveldb can manage its lifetime itself. This is a foreign concept to me, because it is such a strange thing to do in .NET land, where memory cannot just disappear from underneath you.

In my next post, I'll deal a bit more with this aspect: buffer management and memory handling in general.




Published at DZone with permission.
