This is the next article in the app profiling series. For a quick overview of what profiling is and how to start using it in Visual Studio, please refer to this article.
Instrumentation profiling lets you inspect the time and generalized CPU load costs for every function used in a specific application. It is very useful if you want to see which functions take the most time to execute, relative to the overall application execution time.
NOTE: As a reference application, I am using the simple console project described in the previous article from the app profiling series.
Launching the instrumentation profiling session
To launch an instrumentation profiling session, all you have to do is open the Analyze menu and click on Launch Performance Wizard. Once you do this, a familiar dialog will appear asking which method you would like to use to track performance. If you are following this article, you should obviously select Instrumentation.
NOTE: When profiling via the Instrumentation method, you are also able to profile Dynamic Link Libraries (DLL files):
Once you start the session, you will see the Output window pop up, describing the steps that are currently being performed:
There is one new thing you might notice – the Info VSP3049 message, which informs you that small functions will be excluded from instrumentation. But what does VSInstr consider a small function?
Functions are considered small when they don't have much internal activity (don't perform many operations) and don't execute external function calls – in other words, they are completely standalone. Visual Studio assumes that such functions don't have much impact on application performance, so there is no need to include them in the profiling sample.
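The exact heuristic is internal to VSInstr, but as a rough sketch (the class and its members here are hypothetical), compare the two methods below:

```csharp
using System;

// Hypothetical example - the exact size threshold is internal to VSInstr.
class Rectangle
{
    private int _width;
    private int _height;

    // Likely considered "small": just a few instructions,
    // no calls to other functions.
    public int GetWidth()
    {
        return _width;
    }

    // Likely instrumented: it performs work and calls an external function.
    public void PrintArea()
    {
        int area = _width * _height;
        Console.WriteLine("Area: " + area);
    }
}
```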
However, Visual Studio is also very flexible when it comes to letting developers decide what is best for profiling their application, and for this there is the Performance Explorer.
If you right-click on the application name (the one you are currently profiling), you will find a Properties option; the dialog it opens lets you customize the profiling process.
To change whether small functions are included in the profiling session, go to Instrumentation and you will see the Exclude small functions from instrumentation option. You can adjust it according to your current profiling needs, but I would generally leave it checked.
The first indicator that shows up is the one that describes the CPU load while the application was running:
It is useful for seeing at what point in time the application reached its peak CPU load. This graph is directly connected to the next report indicator, Hot Path. As with CPU sampling, it indicates the most expensive call path, but this time it analyzes function execution time.
You don’t see the function calls separately, but rather the function paths. This means that even if you had more than one call to the Get method, all of these calls are combined into a single indicator that sums the time for every call present in the app. Once you click on one of the functions, you will see the detailed view. It is the same one you’ve seen for the CPU sampling process; however, the code view won’t help you much this time – it is there mostly for reference purposes.
What is worth paying attention to is the detailed graph that visually shows the time proportions for various function calls:
You can switch the performance metric by using the list above the graphic, but for easier review, you can switch to a structured table view.
To do this, change the current view to Caller / Callee:
You will see a table that displays the same information as the above-mentioned graphic, but the values are easier to compare.
Another set of data that might interest you is Functions With Most Individual Work. It displays the functions that spend the most time executing their own code, excluding time spent in the functions they call:
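To make the distinction concrete: individual work covers only the time spent in a function's own body, not in its callees. Here is a hypothetical sketch (the ProcessFile method and the file name are made up; XmlDocument.Load is the call used in the sample application):

```csharp
using System;
using System.IO;
using System.Xml;

class Program
{
    static void ProcessFile()
    {
        // Time spent building the path counts as ProcessFile's individual work.
        string path = Path.Combine(Environment.CurrentDirectory, "data.xml");

        XmlDocument document = new XmlDocument();

        // Time spent inside Load() counts as XmlDocument.Load's individual
        // work, even though ProcessFile is on its call path (ProcessFile's
        // total/inclusive time still includes it).
        document.Load(path);
    }
}
```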
INTERESTING NOTE: If you leave the sample console application running for a while with Console.Read() ending the Main method, the waiting time will still be counted against that function. This skews the indicator of time-consuming methods, so for profiling purposes I would recommend ending the application as soon as it finishes its main work – in other words, remove the Console.Read() call at the end.
This advice does not apply to threaded applications, though: if Main returns immediately, the background threads are terminated automatically and no valid indicators will be displayed.
Here is an illustration of the example I mentioned:
As you can see, the Read() method takes up most of the execution time, because it kept running the whole time the user didn't provide any input to terminate the application. This impacts the indicators that come after this specific call: only 10.75% of the time is attributed to the XmlDocument.Load() method. The application was running far longer than it needed to, so this indicator is far from valid.
Another way to avoid this without actually removing the Read() call from your code is to close the console window by clicking the Close button. In that case, the method won't be included in the profiling results and won't be listed; however, the time frame (used to display the CPU load) will still cover the whole period the application was executing, even though the most time-consuming method is not on the list.
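Putting the note into code: assuming the sample application's Main looks roughly like the sketch below (the exact contents come from the previous article, and the file name here is made up), the fix is simply to drop the trailing Console.Read() before profiling:

```csharp
using System;
using System.Xml;

class Program
{
    static void Main(string[] args)
    {
        // The actual work we want to profile.
        XmlDocument document = new XmlDocument();
        document.Load("data.xml");

        // Keeps the console window open during normal use, but while profiling
        // it inflates the time attributed to Console.Read() - remove it (or
        // close the window manually) so the report reflects only the real work.
        // Console.Read();
    }
}
```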