I discussed the possibilities of CPU sampling and instrumentation data collection in the previous articles, and now it is time to benchmark the application performance indicators that target memory.
To test this feature out, I created another sample application. This time it works with files, and I tried to simulate a memory-consuming process.
It is a simple C# Console Application with only one method – Main. All the code is executed inside that method and no calls to external libraries are made. Here is what it looks like:
static void Main(string[] args)
{
    string[] fileList = Directory.GetFiles(@"D:\Temporary");

    foreach (string file in fileList)
    {
        Console.WriteLine("Getting bytes for " + file + "...");
        Console.WriteLine("Bytes for " + file + ": " + File.ReadAllBytes(file).Length);
    }
}
This code gets the file paths from a given source folder and then reads each file's contents into a byte array. For large files, this will allocate quite a bit of memory, which makes it a perfect way to demonstrate the capabilities of the built-in profiling tools when it comes to memory allocation benchmarking.
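Before reaching for the profiler, you can get a rough feel for this in code. The sketch below is my own addition, not part of the sample; it relies on the `GC.GetTotalAllocatedBytes` API (available since .NET Core 3.0) and stands in for `File.ReadAllBytes` with a plain array allocation of a similar size:

```csharp
using System;

class AllocationCheck
{
    static void Main()
    {
        // Bytes allocated so far, in precise mode.
        long before = GC.GetTotalAllocatedBytes(precise: true);

        // Simulates reading a ~1 MB file into memory: one large byte array,
        // which in .NET lands on the Large Object Heap.
        byte[] buffer = new byte[1_000_000];

        long after = GC.GetTotalAllocatedBytes(precise: true);
        Console.WriteLine(after - before); // at least 1,000,000 bytes

        GC.KeepAlive(buffer); // keep the array alive past the measurement
    }
}
```

The delta is never smaller than the array itself, which is exactly the kind of allocation the memory profiler will attribute to ReadAllBytes in the sections below.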
As you can see from the code, I am referencing a path that points to a folder called Temporary. To test it out, I copied a set of small and not-so-small files there (a bunch of large texture files and a movie). That is pretty much everything needed to simulate intensive memory consumption.
Trying it out – getting and analyzing the results
To start the process, go to Analyze > Launch Performance Wizard and select .NET Memory Allocation (Sampling):
I am going to use the default sampling settings, so I am not changing any values in the wizard. Once the process starts, the application will run and might slow down your computer if very large files are processed.
Once done, let’s take a look at the results. As with the previous methods, there is a graph that shows the CPU load during the execution.
The CPU is under a fairly heavy load for the process I started, and there are some critical spikes as well. But this is not the metric I am looking for.
Functions Allocating Most Memory
This is the first indicator of interest for this profiling session. The only caveat is that it is a bit biased. The ReadAllBytes method of course consumes a lot of memory, but not 100% of it. So why is the percentage so high? Because reading the large file consumed so much memory that the other method calls are considered insignificant memory-wise. To demonstrate this, I deleted the large file from the folder and ran the profiling process again. Here is what I got:
This looks more realistic: although the amounts are really insignificant, they aren't actually equal to zero.
If you click on one of the methods, you will be able to review the data in a more detailed view.
First of all, you can see, directly in the code snippet, the amount of memory allocated for specific calls. The most expensive call is highlighted in red. Now change the view to Inclusive Allocations:
Now the code is highlighted a bit differently, with two additional lines marked:
Based on the sampled data, the call that requested the largest amount of allocations is highlighted in red, the call with the next-largest number of allocations is highlighted in bright yellow, and the third is highlighted in a lighter shade of yellow. This is an effective way to visually understand which parts of the code might require optimization.
Types With Most Memory Allocated
This indicator shows the data types that account for the most allocated memory in this session. Let's take a look at what it shows for the sample application (with the large file present in the testing folder at the time of profiling):
Obviously the byte array accounts for most of the memory used by the application, since that's what I am using to store the file contents. But as with the previous indicator, the data here doesn't quite correspond to reality. Other data types do allocate some memory, and if I run the application again without the large file, here is what I get:
Although very small, the amounts of memory are once again different from zero. Sometimes this rounding can be inconvenient (when you need to know the exact amounts of allocated memory), but in the general case it saves the developer from seeing values like 0.0001 (given the proportions relative to other data types).
However, even when the value shown is 0.00, the developer can still review the actual numbers, no matter how small they are. To do this, click on a listed type.
The table that shows up describes in detail the number of exclusive and inclusive allocations (and bytes) for each data type, even for those that weren't in the initial list. For string (listed as 0.00% in the initial list), we can see that there were 17,802 inclusive (and exclusive) bytes.
NOTE: These values are the same because during memory sampling the values are aggregated into a per-type total within a single method; in my case that was Main, and I didn't have any external calls or assignments.
For comparison, there are 780,715,158 bytes used by the byte array, which puts the percentage for string at roughly 0.0022%. The reason for rounding the value down to zero is clear here.
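The rounding itself is easy to reproduce. Using the two byte counts from the report above, this small sketch shows why a two-decimal display ends up printing 0.00 for string:

```csharp
using System;
using System.Globalization;

class RoundingDemo
{
    static void Main()
    {
        long stringBytes    = 17_802;      // bytes attributed to string
        long byteArrayBytes = 780_715_158; // bytes attributed to byte[]

        double share = 100.0 * stringBytes / byteArrayBytes;

        // The raw value is roughly 0.0023%, but rounding to
        // two decimal places makes it disappear entirely.
        Console.WriteLine(share.ToString("F4", CultureInfo.InvariantCulture)); // "0.0023"
        Console.WriteLine(share.ToString("F2", CultureInfo.InvariantCulture)); // "0.00"
    }
}
```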
Types With Most Instances
If the previous indicator was based on the number of bytes, this one is based on the number of allocations. Let’s take a look at what’s recorded for the current session:
This indicator is fairly stable and doesn't change visibly when I alter the application's running conditions. As an example of a call that generates multiple string instances influencing this indicator, I can name Console.WriteLine.
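To see where those instances come from, recall that strings are immutable in .NET, so every + concatenation allocates a brand-new string object. The sketch below is my own illustration with a hypothetical file path; it shows the kind of temporaries each loop iteration in the sample creates before Console.WriteLine is even called:

```csharp
using System;

class StringInstanceDemo
{
    static void Main()
    {
        // Hypothetical path standing in for one entry from Directory.GetFiles.
        string file = @"D:\Temporary\sample.txt";

        // Two runtime concatenations with identical contents still produce
        // two distinct string objects; with N files, the sample's loop
        // creates at least 2*N such temporaries, which is what drives the
        // string count in "Types With Most Instances".
        string a = "Getting bytes for " + file + "...";
        string b = "Getting bytes for " + file + "...";

        Console.WriteLine(a == b);                    // True: equal contents
        Console.WriteLine(ReferenceEquals(a, b));     // False: separate instances
    }
}
```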
Memory allocation should be carefully tracked because, chances are, larger projects contain code that consumes memory the wrong way, for example by storing too many instances of an object. Finding these issues as early as possible will help you avoid freezes and crashes later on, when the application is used in a different environment.