
With Malice Aforethought, We Can Try Even Better



Continuing with the same theme from our last post: how can we improve the speed at which we write to disk? In particular, I am currently focused on my worst-case scenario:

    fill rnd buff 10,000 tx            :    161,812 ms      6,180 ops / sec

This is 10,000 transactions running one after another, and they are taking far too long to do their thing. Now, we made some improvements and got it all the way to 6,340 ops per sec, but I think you'll agree that even with this optimization, it is still bad. We spent more time there on micro-optimizations, and got all the way up to 8,078 ops per sec.

That is the point where I decided that I would really like to look at the raw numbers that I can get from this system. So I wrote the following code:

var key = Guid.NewGuid().ToByteArray();
var buffer = new byte[100];
var random = new Random();
random.NextBytes(buffer);

using (var fs = new FileStream("test.bin", FileMode.Create, FileAccess.ReadWrite))
{
    fs.SetLength(1024 * 1024 * 768);

    var sp = Stopwatch.StartNew();

    for (int i = 0; i < 10 * 1000; i++)
    {
        for (int j = 0; j < 100; j++)
        {
            fs.Write(key, 0, 16);
            fs.Write(buffer, 0, 100);
        }
        fs.Flush(true); // force the writes to disk, simulating a transaction commit
    }

    Console.WriteLine("{0:#,#} ms for {1:#,#} ops / sec", sp.ElapsedMilliseconds, (1000 * 1000) / sp.Elapsed.TotalSeconds);
}

This code mimics the absolute best scenario we could hope for: zero cost for managing the data, pure sequential writes. Note that we call Flush(true) after every batch of 100 writes to simulate 10,000 transactions. This code gives me 147,201 ops per sec.
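To put that number in perspective, here is a quick back-of-the-envelope calculation (sketched in Python for brevity, since it is just arithmetic). It derives the total data volume and the implied cost per Flush(true) call from the figures above, assuming the flushes dominate the runtime:

```python
# The benchmark: 10,000 "transactions", each writing 100 records
# of a 16-byte key plus a 100-byte value, with one Flush(true) per transaction.
transactions = 10_000
writes_per_tx = 100
record_size = 16 + 100  # bytes

total_bytes = transactions * writes_per_tx * record_size
total_ops = transactions * writes_per_tx
ops_per_sec = 147_201  # measured above

total_seconds = total_ops / ops_per_sec
ms_per_flush = total_seconds / transactions * 1000

print(total_bytes)              # 116,000,000 bytes, about 110 MB of payload
print(round(total_seconds, 2))  # ~6.79 seconds for the whole run
print(round(ms_per_flush, 3))   # ~0.679 ms per Flush(true) batch
```

So even with a durable flush per transaction, sequential writes cost well under a millisecond per commit.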

This is interesting, mostly because I thought the reason our random writes with 10,000 transactions were bad was the calls to Flush(), but it appears that this part is actually working very well. I then tested this with some random writes, by adding the following lines just before the inner loop:

var next = random.Next(0, 1024 * 1024 * 512);
fs.Position = next - next % 4096;
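The `next - next % 4096` expression rounds the random offset down to a 4,096-byte page boundary, so every write batch starts at the beginning of a page. A quick illustration of that arithmetic (in Python, for brevity):

```python
PAGE_SIZE = 4096

def align_down(offset: int, page: int = PAGE_SIZE) -> int:
    # Mirrors `next - next % 4096` in the C# snippet above:
    # rounds an offset down to the start of its containing page.
    return offset - offset % page

print(align_down(10_000))  # 8192  (10,000 falls inside the page starting at 8192)
print(align_down(4096))    # 4096  (already aligned, unchanged)
print(align_down(4095))    # 0     (still inside the first page)
```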

I then decided to try it with memory mapped files, and I wrote:

using (var fs = new FileStream("test.bin", FileMode.Create, FileAccess.ReadWrite))
{
    fs.SetLength(1024 * 1024 * 768);

    var memoryMappedFile = MemoryMappedFile.CreateFromFile(fs,
                                    "test", fs.Length, MemoryMappedFileAccess.ReadWrite,
                                    null, HandleInheritability.None, true);
    var memoryMappedViewAccessor = memoryMappedFile.CreateViewAccessor();

    byte* p = null;
    memoryMappedViewAccessor.SafeMemoryMappedViewHandle.AcquirePointer(ref p);

    var sp = Stopwatch.StartNew();

    for (int i = 0; i < 10 * 1000; i++)
    {
        var next = random.Next(0, 1024 * 1024 * 512);
        byte* basePtr = p + next;
        using (var ums = new UnmanagedMemoryStream(basePtr, 12 * 1024, 12 * 1024, FileAccess.ReadWrite))
        {
            for (int j = 0; j < 100; j++)
            {
                ums.Write(key, 0, 16);
                ums.Write(buffer, 0, 100);
            }
        }
    }
    Console.WriteLine("{0:#,#} ms for {1:#,#} ops / sec", sp.ElapsedMilliseconds, (1000 * 1000) / sp.Elapsed.TotalSeconds);
}

You’ll note that I am not doing any flushing here. That is intentional for now. Using this, I am getting 5 million+ ops per second. But since I am not flushing, this is pretty much a test of how fast I can write to memory.

Adding a single flush cost us 1.8 seconds for a 768 MB file. And what about doing the right thing? Adding the following right after the inner using block means that we are actually flushing the buffers:

FlushViewOfFile(basePtr, new IntPtr(12 * 1024));

Note that this does not flush to disk; we still need to do that separately. But for now, let's try just this. This single line took the code from 5 million+ ops down to 170,988 ops per sec. And that does NOT include the actual flush to disk. When we add that, too, we get a truly ridiculous number: 20,547 ops per sec. And that explains quite a lot, I think.
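Working backwards from that last figure (again a rough Python sketch, assuming the per-transaction flush pair dominates the runtime), we can estimate what each FlushViewOfFile + flush-to-disk pair is costing us:

```python
ops_per_sec = 20_547   # measured with both flushes enabled
total_ops = 1_000_000  # 10,000 transactions x 100 writes each
transactions = 10_000

total_seconds = total_ops / ops_per_sec
ms_per_tx = total_seconds / transactions * 1000

print(round(total_seconds, 1))  # ~48.7 seconds for the whole run
print(round(ms_per_tx, 2))      # ~4.87 ms per transaction
```

Roughly 5 ms per transaction just on flushing, versus well under a millisecond in the sequential FileStream case, which is why random memory-mapped writes with durable commits look so bad here.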

For reference, here is the full code:

using System;
using System.Diagnostics;
using System.IO;
using System.IO.MemoryMappedFiles;
using System.Runtime.InteropServices;

unsafe class Program
{
    [DllImport("kernel32.dll", SetLastError = true)]
    [return: MarshalAs(UnmanagedType.Bool)]
    extern static bool FlushViewOfFile(byte* lpBaseAddress, IntPtr dwNumberOfBytesToFlush);

    static void Main(string[] args)
    {
        var key = Guid.NewGuid().ToByteArray();
        var buffer = new byte[100];
        var random = new Random();
        random.NextBytes(buffer);

        using (var fs = new FileStream("test.bin", FileMode.Create, FileAccess.ReadWrite))
        {
            fs.SetLength(1024 * 1024 * 768);

            var memoryMappedFile = MemoryMappedFile.CreateFromFile(fs,
                                            "test", fs.Length, MemoryMappedFileAccess.ReadWrite,
                                            null, HandleInheritability.None, true);
            var memoryMappedViewAccessor = memoryMappedFile.CreateViewAccessor();

            byte* p = null;
            memoryMappedViewAccessor.SafeMemoryMappedViewHandle.AcquirePointer(ref p);

            var sp = Stopwatch.StartNew();

            for (int i = 0; i < 10 * 1000; i++)
            {
                var next = random.Next(0, 1024 * 1024 * 512);
                byte* basePtr = p + next;
                using (var ums = new UnmanagedMemoryStream(basePtr, 12 * 1024, 12 * 1024, FileAccess.ReadWrite))
                {
                    for (int j = 0; j < 100; j++)
                    {
                        ums.Write(key, 0, 16);
                        ums.Write(buffer, 0, 100);
                    }
                }
                FlushViewOfFile(basePtr, new IntPtr(12 * 1024)); // write the dirty pages out to the OS
                fs.Flush(true); // and force them all the way to disk
            }
            Console.WriteLine("{0:#,#} ms for {1:#,#} ops / sec", sp.ElapsedMilliseconds, (1000 * 1000) / sp.Elapsed.TotalSeconds);
        }
    }
}

This is about as efficient as you can get for writing to disk with memory mapped files in a transactional manner. And this is pretty much the absolute best case scenario: we know exactly what we wrote and where we wrote it, we always write a single entry of a fixed size, etc. In Voron’s case, we might write to multiple pages in the same transaction (in fact, we are pretty much guaranteed to do just that).

This means that I need to think about other ways of doing that.





Published at DZone with permission.

