What garbage collector are you using?
Our research labs are humming at full speed. With the recent injection of capital, we can promise that the pace at which we innovate will only increase. Part of the research we conduct is related to GC optimizations, and while working on problems in this interesting domain, we thought we would share some insights into GC algorithm usage.
For this, we conducted a study of how often each particular GC algorithm is used. The results are somewhat surprising. Let me start with the background of the data: we had access to data from 84,936 sessions representing 2,670 different environments. 13% of the environments had explicitly specified a GC algorithm; the rest left the decision to the JVM. Among the 11,062 sessions with an explicitly specified GC algorithm, we were able to distinguish six different GC algorithms:
Before digging into the details of GC algorithm usage, we should stop for a minute and ask why 87% of the runs are missing from the pie chart above. In our opinion, this is a symptom of two different underlying reasons:
- The first, and good, reason is that the JVM has gotten so good at picking reasonable defaults that developers simply do not need to dig under the hood anymore. If your application's throughput and latency are sufficient, then indeed: why bother?
- The second likely cause for the missing GC algorithm is that application performance has not been a priority for the team. As our case study from last year demonstrated, significant improvements in throughput and latency can be just one configuration tweak away.
So, we have close to 83,000 JVMs running with the default GC selection. But what is the default? The answer is both simple and complex at the same time. If you are considered to be running on a client-class JVM, the default applied is Serial GC (-XX:+UseSerialGC). On server-class machines, the default is Parallel GC (-XX:+UseParallelGC). Whether you are running on a server- or client-class machine is determined based on the following decision table:
| Architecture | Hardware | OS | Class |
| --- | --- | --- | --- |
| 32-bit SPARC | 2+ cores and > 2 GB RAM | Solaris | server |
| 32-bit SPARC | 1 core or < 2 GB RAM | Solaris | client |
| i586 | 2+ cores and > 2 GB RAM | Linux or Solaris | server |
| i586 | 1 core or < 2 GB RAM | Linux or Solaris | client |
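If you want to see which collectors your own JVM ended up with, you do not need to guess from the table: the standard management API exposes the active collectors at runtime. A minimal sketch (the class name `GcInspector` is ours; the printed collector names vary by JVM version and selected algorithm):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Prints the garbage collectors the running JVM actually selected,
// whether they were chosen explicitly via -XX flags or by default.
public class GcInspector {
    public static void main(String[] args) {
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            // Typical names: "PS Scavenge" / "PS MarkSweep" for Parallel GC,
            // "ParNew" / "ConcurrentMarkSweep" for CMS, "G1 Young Generation" for G1
            System.out.println(gc.getName());
        }
    }
}
```

Running this with different -XX:+Use…GC flags is a quick way to confirm which default your platform falls into.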
But let's go back to the 13% who have explicitly specified the GC algorithm in their configuration. It starts with good old serial mode which, unsurprisingly, has such a small user base that it is barely visible in the diagram above. Indeed, just 31 environments were sure this is the best GC algorithm and specified it explicitly. Considering that most platforms today run on multi-core machines, this should not come as a surprise: when you have several cores at your disposal, switching away from serial mode is almost always recommended.
The rest of the configurations can be divided into three groups: parallel, concurrent, and G1. The winner is clear: the concurrent mark-and-sweep algorithms combined represent more than two thirds of the samples. But let us look at the results in more depth.
Parallel and ParallelOld modes are roughly in the same neighbourhood, with 1,422 and 1,193 samples respectively. This should not be a surprise: if you have decided that parallel mode is suitable for your young generation, then more often than not the same algorithm also performs well for the old generation. Another interesting aspect of the parallel modes is that, as seen above, parallel mode is the default on the most common platforms, so the lack of an explicit specification does not imply it is less popular than the alternatives.
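For reference, the two parallel variants are selected with the following flags (a minimal sketch; `app.jar` is a placeholder for your application):

```shell
# Parallel collection for the young generation only
java -XX:+UseParallelGC -jar app.jar

# Parallel collection for the old generation as well
# (on most JVM versions this also implies -XX:+UseParallelGC)
java -XX:+UseParallelOldGC -jar app.jar
```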
Our expectations for CMS usage were different, though. Namely, incremental mode was switched on in only 935 cases, compared to classic CMS with its 6,655 configurations. As a reminder: during a concurrent phase, the garbage collector thread uses one or more processors. Incremental mode reduces the impact of long concurrent phases by periodically pausing the concurrent work to yield the processor back to the application. This results in shorter pause times, especially on machines with few processors. Whether these environments all had a bazillion cores, or the people responsible for the configuration are simply not aware of the benefits of incremental mode, is currently unclear.
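The two CMS variants discussed above are selected like this (a sketch; `app.jar` is a placeholder, and note that incremental mode has since been deprecated in newer JVM releases):

```shell
# Classic concurrent mark-and-sweep
java -XX:+UseConcMarkSweepGC -jar app.jar

# CMS with incremental mode: the concurrent phase periodically yields
# the CPU back to application threads (helpful on 1-2 core machines)
java -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode -jar app.jar
```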
But our biggest surprise was related to the G1 adoption rate: 826 environments were running with G1 as the garbage collection algorithm. Based on our experience, independent of whether you are after throughput or latency, G1 tends to perform worse than CMS. Maybe the selection of test cases we had access to was poor, but at the moment we think G1 needs more time before it actually delivers on its promises. So if you are a happy G1 user, maybe you can share your experience with us?
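For completeness, G1 is enabled with a single flag, optionally combined with a pause-time goal (the 200 ms value below is illustrative, not a recommendation):

```shell
# Enable G1 and give it a soft pause-time target
java -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -jar app.jar
```

Note that MaxGCPauseMillis is a goal the collector tries to meet, not a guarantee.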
Hopefully we were able to give you some food for thought. I can only promise that more on the topic is coming, so stay tuned and subscribe to the RSS feed!
Published at DZone with permission of Nikita Salnikov-Tarnovski, DZone MVB. See the original article here.