In our quest to solve every memory leak in the Java world we get in touch with many teams who struggle with performance issues in their applications. Every developer tries to solve them differently: some begin with vmstat or top, others with a CPU profiler or a database load monitoring tool. Many turn to APM products such as CA APM, AppDynamics, or dynaTrace.
However, if you ask the application properly, it will normally give you all the information you need for performance tuning. And you can get this information without relying on indirect metrics of the operating system or the JVM.
In our opinion, the simplest and most useful performance metric for any application of considerable size is the usage data generated by the application itself. Let's assume that for every class and every method of your application you have these two pieces of information:
- The total number of runs, and
- The cumulative (total) run time of all runs of the method
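To make this concrete, here is a minimal sketch of how these two numbers could be collected per method. The `MethodStats` registry is our own hypothetical illustration, not part of any specific framework:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Hypothetical registry keeping, for every method, the total number of
// runs and the cumulative run time in nanoseconds.
public final class MethodStats {

    static final class Entry {
        final LongAdder runs = new LongAdder();
        final LongAdder totalNanos = new LongAdder();
    }

    private static final ConcurrentHashMap<String, Entry> STATS =
            new ConcurrentHashMap<>();

    // Called once per method invocation with the measured elapsed time.
    public static void record(String methodName, long elapsedNanos) {
        Entry e = STATS.computeIfAbsent(methodName, k -> new Entry());
        e.runs.increment();
        e.totalNanos.add(elapsedNanos);
    }

    public static long runs(String methodName) {
        Entry e = STATS.get(methodName);
        return e == null ? 0 : e.runs.sum();
    }

    public static long totalNanos(String methodName) {
        Entry e = STATS.get(methodName);
        return e == null ? 0 : e.totalNanos.sum();
    }
}
```

`LongAdder` is chosen over `AtomicLong` because it scales better under the heavy write contention such a registry would see in production.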
If you sort this data by the total run time, you get something like this (of course, in a much longer table):
| Method name | Runs       | Total time |
|-------------|------------|------------|
| MethodA     | 10,000     | 120 s      |
| MethodB     | 10,000,000 | 80 s       |
| MethodC     | 50         | 2 s        |
The top 10 in that table gives you the methods that contribute the most to your application's run time. By Amdahl's law, you should devote your time to optimizing these methods, not some arbitrary algorithm that you merely suspect to be a bottleneck.
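Amdahl's law makes this point concrete: the overall speedup you can gain from optimizing one method is capped by the fraction of total run time that method accounts for. A quick sketch with illustrative numbers of our own choosing:

```java
// Amdahl's law: if a fraction p of the total run time is made s times
// faster, the overall speedup is 1 / ((1 - p) + p / s).
public final class Amdahl {

    public static double speedup(double p, double s) {
        return 1.0 / ((1.0 - p) + p / s);
    }

    public static void main(String[] args) {
        // A method responsible for 40% of run time, made 4x faster:
        System.out.println(speedup(0.40, 4.0));   // ~1.43x overall
        // A method responsible for 2% of run time, made 100x faster:
        System.out.println(speedup(0.02, 100.0)); // ~1.02x overall
    }
}
```

Even a heroic 100x optimization of a minor method barely moves the needle, while a modest 4x win on a top-of-the-table method pays off visibly.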
The example above contains two methods that require attention. MethodB, although very fast in itself, is called extremely often, so one should look for ways to reduce the number of calls. There is little point in optimizing the method body itself, as it is already fast enough. MethodA, on the other hand, can be optimized in both ways: by reducing the number of calls and by speeding up the method itself. MethodC, compared to methods A and B, is not worth touching at all.
You should take two things into account:
- When method1 calls method2, which calls method3, which in turn calls method4, the cumulative time for method1 in the table above includes the cumulative times of method2, method3, and method4. So the winner of the chart is not always the method that calls for optimization. As a rule of thumb, though, the top 10 is almost always the right place to look.
- Optimize one thing at a time, and after every change re-run your application and gather the usage data again. Changing the runtime characteristics of one method can reshuffle the whole top list.
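The first caveat can be illustrated with hypothetical numbers: when a method's cumulative time is dominated by its callees, the time spent in its own code (its "self" time) is only the difference, and that is the part you could actually improve by editing the method itself:

```java
// Hypothetical figures: method1's cumulative time includes everything
// spent in method2, method3 and method4 underneath it.
public final class SelfTime {

    public static long selfTimeMs(long cumulativeMs, long calleesMs) {
        return cumulativeMs - calleesMs;
    }

    public static void main(String[] args) {
        long method1Cumulative = 100_000; // ms, tops the table
        long calleesCumulative =  90_000; // ms spent in method2..method4
        // Only 10 s are spent in method1's own code:
        System.out.println(selfTimeMs(method1Cumulative, calleesCumulative));
    }
}
```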
Now, the last big question: how do you get the input data? We really do NOT recommend using CPU profilers for this purpose: they introduce far too much overhead to be used in a production environment.
Our preferred way to gather such data is to augment every method in the application with a stop-watch: measure the time at method entry and at return, and log the difference along with the method name. Such monitoring code can be inserted into the application in different ways: bytecode manipulation, AOP frameworks such as Spring AOP or AspectJ, or a hand-written dynamic proxy.
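As one illustration of the dynamic-proxy route, here is a minimal stop-watch sketch built on `java.lang.reflect.Proxy`. The class name `TimingProxy` and the log format are our own; a real setup would accumulate into the per-method runs/total-time table instead of printing:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// Wraps any interface-typed object so that every call is timed.
public final class TimingProxy implements InvocationHandler {

    private final Object target;

    private TimingProxy(Object target) {
        this.target = target;
    }

    @SuppressWarnings("unchecked")
    public static <T> T wrap(Object target, Class<T> iface) {
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(),
                new Class<?>[] { iface },
                new TimingProxy(target));
    }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args)
            throws Throwable {
        long start = System.nanoTime();
        try {
            return method.invoke(target, args);
        } catch (InvocationTargetException ite) {
            throw ite.getCause(); // rethrow the target's real exception
        } finally {
            long elapsed = System.nanoTime() - start;
            // In production, feed this into the runs/total-time stats
            // rather than logging each call.
            System.out.printf("%s took %d ns%n", method.getName(), elapsed);
        }
    }
}
```

Usage is a one-liner, e.g. `List<String> timed = TimingProxy.wrap(new ArrayList<String>(), List.class);` after which every `List` call on `timed` is measured. The proxy approach only covers interface methods, which is why bytecode manipulation or AspectJ is preferable for blanket coverage.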
After adding the stop-watches, let your application run for a couple of days or weeks. You will get usage data that adequately represents the usage patterns of your customers.