More on MiniDumper: Getting the Right Memory Pages for .NET Analysis

MiniDumper can help you capture just the right memory pages for .NET analysis. Sometimes you have to do the work yourself, but why not let ClrMD do it for you?


In my previous post on MiniDumper, I promised to explain in more detail how it figures out which memory ranges are required for .NET heap analysis. This is an interesting story, actually, because I tried a couple of approaches that failed before coming up with the final idea. Basically, I knew that a dump with full memory contains way more information than is necessary for .NET dump analysis. Even if you need the entire .NET heap available, you typically don’t need a bunch of other memory ranges: executable code, Win32 heaps, unused regions of thread stacks, and so much more.

My first attempt was to filter the memory ranges that are put in the dump by using a VM range callback with the MiniDumpWriteDump method. With this approach, every memory range that's being written into the dump is first passed to the callback, which gets a chance to filter it out. So I figured I could filter out executable code (modules), Win32 heaps, and anything else that's not part of the .NET heap. The extent of the .NET heap and the other necessary CLR regions could come from the ClrMD API, notably ClrRuntime.EnumerateMemoryRegions.
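For reference, enumerating those CLR regions with ClrMD looks roughly like this. This is a minimal sketch against the ClrMD v1 API surface; the program name and the assumption that the target process id arrives in `args[0]` are mine:

```csharp
using System;
using Microsoft.Diagnostics.Runtime; // ClrMD NuGet package

class RegionLister
{
    static void Main(string[] args)
    {
        int pid = int.Parse(args[0]); // assumed: target process id on the command line

        // Attach non-invasively so the target keeps running.
        using (DataTarget target = DataTarget.AttachToProcess(pid, 5000, AttachFlag.NonInvasive))
        {
            ClrRuntime runtime = target.ClrVersions[0].CreateRuntime();

            // CLR-owned regions: GC segments, module ranges, JIT code heaps, etc.
            foreach (ClrMemoryRegion region in runtime.EnumerateMemoryRegions())
            {
                Console.WriteLine($"{region.Address:x16} {region.Size,12} {region.Type}");
            }
        }
    }
}
```

As the post goes on to explain, this enumeration turns out to be incomplete for dump-analysis purposes, which is exactly why the first attempt failed.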

Unfortunately, this attempt failed: the generated dumps couldn't be analyzed by SOS and were practically useless. I tried to debug through the memory read errors emitted by SOS (and ClrMD) and discovered that the .NET debugging interfaces need access to a bunch of memory regions that aren't reported by EnumerateMemoryRegions. Specifically, there are some regions in every .NET module that need to be included, so I tried including all the module memory. But then I discovered that some global data structures that are not on the CLR heap are also required for dump analysis to work. Reverse engineering exactly which structures are required is incredibly complex, so I decided against it.

But then I came up with a different idea. Why not let ClrMD do the work for me and figure out which memory regions are required for successful dump analysis? When ClrMD enumerates call stacks, heap objects, blocking objects, modules, and all other kinds of debugging goodies, it uses a data reader interface to read data from the target process’ memory (or a dump file). The IDataReader interface is pretty simple, too:

public interface IDataReader
{
  // ... a bunch of methods omitted for clarity
  bool ReadMemory(ulong address, byte[] buffer,
                  int bytesRequested, out int bytesRead);
  bool ReadMemory(ulong address, IntPtr buffer,
                  int bytesRequested, out int bytesRead);
}

By injecting a proxy between the ClrMD API and the standard live memory data reader, I was able to monitor all memory regions required by the ClrMD API. These are the memory ranges required when creating the dump file, too!
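The proxy itself can be a thin wrapper. Here is a minimal sketch; the class name and structure are my illustration, not necessarily MiniDumper's actual implementation, and the many remaining IDataReader members would have to forward to the inner reader as well:

```csharp
using System;
using System.Collections.Generic;
using Microsoft.Diagnostics.Runtime;

// Hypothetical recording proxy: forwards every read to the real data
// reader and remembers each (address, size) range that ClrMD touched.
class RecordingDataReader /* : IDataReader */
{
    private readonly IDataReader _inner;

    public List<Tuple<ulong, int>> Ranges { get; } = new List<Tuple<ulong, int>>();

    public RecordingDataReader(IDataReader inner)
    {
        _inner = inner;
    }

    public bool ReadMemory(ulong address, byte[] buffer,
                           int bytesRequested, out int bytesRead)
    {
        Ranges.Add(Tuple.Create(address, bytesRequested)); // record the range
        return _inner.ReadMemory(address, buffer, bytesRequested, out bytesRead);
    }

    public bool ReadMemory(ulong address, IntPtr buffer,
                           int bytesRequested, out int bytesRead)
    {
        Ranges.Add(Tuple.Create(address, bytesRequested)); // record the range
        return _inner.ReadMemory(address, buffer, bytesRequested, out bytesRead);
    }

    // All other IDataReader members simply delegate to _inner;
    // omitted here for brevity.
}
```

After running the usual ClrMD traversals (heap, stacks, modules) over a runtime backed by this reader, `Ranges` holds every region the debugging interfaces actually needed.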

The end result is the following: prior to letting MiniDumpWriteDump write the dump to a file, I suspend the target process and run a bunch of ClrMD operations to make sure all the required memory regions are touched and recorded by my data reader proxy. Then, I use MiniDumpWriteDump's memory callbacks to make sure these memory regions are included in the dump; everything else is discarded.
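The memory-callback side might be sketched like this. This is a hedged sketch only: the MINIDUMP_CALLBACK_OUTPUT interop struct and the MiniDumpWriteDump P/Invoke declarations are omitted, and dbghelp's exact convention for ending the sequence should be checked against the MSDN documentation:

```csharp
using System;
using System.Collections.Generic;

// dbghelp invokes the MemoryCallback repeatedly, once per extra memory
// range to include in the dump. We hand back the recorded ranges one at
// a time; a zero base/size signals that there are no more ranges.
Queue<Tuple<ulong, int>> pending = new Queue<Tuple<ulong, int>>(reader.Ranges);

bool HandleMemoryCallback(ref MINIDUMP_CALLBACK_OUTPUT output)
{
    if (pending.Count == 0)
    {
        output.MemoryBase = 0; // no more ranges to include
        output.MemorySize = 0;
        return false;
    }

    Tuple<ulong, int> range = pending.Dequeue();
    output.MemoryBase = range.Item1;
    output.MemorySize = (uint)range.Item2;
    return true; // dbghelp will call back for the next range
}
```

Because the dump type does not request full memory, only these explicitly returned ranges (plus the standard minidump streams) end up in the file.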

The resulting dump file is considerably smaller than a dump with full memory, but it still contains the full .NET heap and makes leak analysis, type queries, and object inspection entirely possible.


Published at DZone with permission of Sasha Goldshtein, DZone MVB. See the original article here.
