Garbage Collection in Java (Part 4)
g1: garbage first
the g1 collector is the latest collector to be implemented in the hotspot jvm. it's been a supported collector ever since java 7 update 4, and the oracle gc team has publicly stated that their hope for low pause gc is a fully realised g1. this post follows on from my previous garbage collection blog posts.
the problem: large heaps mean large pause times
the concurrent mark and sweep (cms) collector is the currently recommended low pause collector, but unfortunately its pause times scale with the amount of live objects in its tenured region. this means that whilst it's relatively easy to get short gc pauses with smaller heaps, once you start using heaps in the 10s or 100s of gigabytes the pause times start to ramp up.
cms also doesn't "defrag" its heap, so at some point you'll get a concurrent mode failure (cmf), triggering a full gc. once you get into this full gc scenario you can expect a pause of roughly 1 second per gigabyte of live objects. with cms, your 100gb heap can be a 1.5 minute gc pause ticking time bomb waiting to happen...
good gc tuning can address this problem, but sometimes it just pushes the problem down the road. a concurrent mode failure and therefore a full gc is inevitable on a long enough timeline unless you're in the tiny niche of people who deliberately avoid filling their tenured space.
g1 heap layout
the g1 collector tries to separate the pause time of an individual collection from the overall size of the heap by splitting up the heap into different regions. each region is of a fixed size, between 1mb and 32mb, and the jvm aims to create about 2000 regions in total.
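to make the sizing concrete, here's a small sketch of that heuristic as described above - aim for roughly 2000 regions, rounded to a power of two and clamped between 1mb and 32mb. the class and method names are mine, and the real hotspot ergonomics differ in detail:

```java
// Illustrative sketch of G1's region-sizing ergonomic, not HotSpot internals:
// target ~2048 regions, with the region size a power of two in [1 MB, 32 MB].
public class RegionSizer {
    static final long MIN_REGION = 1L << 20;   // 1 MB
    static final long MAX_REGION = 32L << 20;  // 32 MB
    static final long TARGET_REGION_COUNT = 2048;

    static long regionSize(long heapBytes) {
        long size = heapBytes / TARGET_REGION_COUNT;
        // round down to a power of two, then clamp to the allowed range
        long pow2 = Long.highestOneBit(Math.max(size, 1));
        return Math.min(Math.max(pow2, MIN_REGION), MAX_REGION);
    }

    public static void main(String[] args) {
        // a 4 GB heap yields 2 MB regions (4096 MB / 2048)
        System.out.println(regionSize(4L << 30) >> 20); // prints 2
        // a 100 GB heap hits the 32 MB ceiling
        System.out.println(regionSize(100L << 30) >> 20); // prints 32
    }
}
```

so a 4gb heap gets 2mb regions, while a 100gb heap hits the 32mb ceiling and simply ends up with more regions.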
you may recall from previous articles that the other collectors split the heap up into eden, survivor space and tenured memory pools. g1 retains the same categories of pools, but instead of these being contiguous blocks of memory, each region is logically categorised into one of these pools.
there is also another type of region - the humongous region. these are designed to store objects which are bigger in size than most objects - for example a very long array. any object which is bigger than 50% of the size of a region is stored in a humongous region. they work by taking multiple normal regions which are contiguously located in memory and treating them as a single logical region.
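a minimal sketch of the humongous rule as stated above - anything over half a region is humongous, and spans however many contiguous regions it needs. the names here are mine, not hotspot internals:

```java
// Illustrative sketch: an object is humongous when it exceeds half a region,
// and then occupies ceil(size / regionSize) contiguous regions.
public class Humongous {
    static boolean isHumongous(long objectBytes, long regionBytes) {
        return objectBytes > regionBytes / 2;
    }

    static long regionsNeeded(long objectBytes, long regionBytes) {
        return (objectBytes + regionBytes - 1) / regionBytes; // ceiling division
    }

    public static void main(String[] args) {
        long region = 1L << 20; // 1 MB regions
        // a 600 KB array is humongous in a 1 MB region
        System.out.println(isHumongous(600 * 1024, region));       // prints true
        // a 2.5 MB array spans 3 contiguous regions
        System.out.println(regionsNeeded(5 * region / 2, region)); // prints 3
    }
}
```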
of course there's little point in splitting the heap into regions if you are going to have to scan the entire heap to figure out which objects are marked live. the first step in achieving this is breaking down regions into 512 byte segments called cards. each card has a 1 byte entry in the card marking table.
each region has an associated remembered set, or rset - the set of cards that have been written to. a card is in a region's remembered set if an object stored within the card, in another region, points to an object within this region.
whenever the mutator writes to an object reference, a write barrier is used to update the remembered set. under the hood the remembered set is split up into different collections so that different threads can operate without contention, but conceptually all collections are part of the same remembered set.
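putting the last few paragraphs together, here's a simplified sketch of cards, the card table and a remembered-set-updating write barrier. real g1 uses per-thread log buffers and concurrent refinement; addresses, region lookup and the data structures are heavily simplified here and the names are mine:

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch: 512-byte cards, a byte-per-card mark table, and a
// write barrier that records cross-region references in the target
// region's remembered set.
public class CardMarking {
    static final int CARD_SHIFT = 9; // 2^9 = 512-byte cards
    final byte[] cardTable;          // one byte per card
    final long regionBytes;
    final Set<Long>[] rememberedSets; // per region: set of dirty card indices

    @SuppressWarnings("unchecked")
    CardMarking(long heapBytes, long regionBytes) {
        this.cardTable = new byte[(int) (heapBytes >> CARD_SHIFT)];
        this.regionBytes = regionBytes;
        int regions = (int) (heapBytes / regionBytes);
        this.rememberedSets = new Set[regions];
        for (int i = 0; i < regions; i++) rememberedSets[i] = new HashSet<>();
    }

    long cardOf(long address)  { return address >> CARD_SHIFT; }
    int regionOf(long address) { return (int) (address / regionBytes); }

    // called on every reference store: field at fromAddr now points to toAddr
    void writeBarrier(long fromAddr, long toAddr) {
        if (regionOf(fromAddr) != regionOf(toAddr)) { // only cross-region writes matter
            long card = cardOf(fromAddr);
            cardTable[(int) card] = 1;                    // dirty the card
            rememberedSets[regionOf(toAddr)].add(card);   // remember who points in
        }
    }
}
```

note how a write within a single region records nothing - the remembered set only has to track references coming in from other regions, which is what lets a single region be collected without scanning the whole heap.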
in order to identify which heap objects are live g1 performs a mostly concurrent mark of live objects.
- marking phase: the goal of the marking phase is to figure out which objects within the heap are live. in order to store this, g1 uses a marking bitmap, which stores a single bit for every 64 bits on the heap. all objects are traced from their roots, and areas with live objects are marked in the bitmap. this is mostly concurrent, but there is an initial marking pause, similar to cms, where the application is paused and the first level of children from the root objects are traced. after this completes the mutator threads restart. g1 needs to keep an up-to-date understanding of what is live in the heap, since the heap isn't cleaned up in the same pause as the marking phase.
- remarking phase: the goal of the remarking phase is to bring the information about live objects from the marking phase up to date. the first thing to decide is when to remark: it's triggered by a percentage of the heap being full, calculated from the marking phase information plus the number of allocations since then, which tells g1 whether it's over the required percentage. g1 uses the aforementioned write barrier to take note of changes to the heap and store them in a series of change buffers. the objects in the change buffers are marked in the marking bitmap concurrently. when the fill percentage is reached, the mutator threads are paused again and the remaining change buffers are processed, marking their objects live.
- cleanup phase: at this point g1 knows which objects are live. since g1 focusses on regions which have the most free space available, its next step is to work out the free space in a given region by counting the live objects. this is calculated from the marking bitmap, and regions are sorted according to which are most likely to be beneficial to collect. regions which are to be collected are stored in what's known as a collection set, or cset.
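the marking and remarking machinery above can be sketched roughly as follows - a bitmap with one bit per 64 bits (8 bytes) of heap, plus a change buffer that the write barrier fills and the remark pause drains. per-thread buffers and the real bitmap layout are glossed over, and the names are illustrative:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sketch of G1's marking data structures, not HotSpot code.
public class Marking {
    final long[] bitmap;                         // 1 bit per 8 heap bytes
    final Deque<Long> changeBuffer = new ArrayDeque<>();

    Marking(long heapBytes) {
        // heapBytes / 8 mark bits, packed 64 to a long word
        this.bitmap = new long[(int) (heapBytes / 8 / 64)];
    }

    void mark(long heapOffset) {
        long bit = heapOffset >> 3;              // 8-byte-aligned object starts
        bitmap[(int) (bit >> 6)] |= 1L << (bit & 63);
    }

    boolean isMarked(long heapOffset) {
        long bit = heapOffset >> 3;
        return (bitmap[(int) (bit >> 6)] & (1L << (bit & 63))) != 0;
    }

    // write barrier logs references changed since the concurrent mark
    void logWrite(long heapOffset) { changeBuffer.add(heapOffset); }

    // remark pause: drain the buffer, marking the logged objects live
    void remark() {
        while (!changeBuffer.isEmpty()) mark(changeBuffer.poll());
    }
}
```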
similar to the approach taken by the hemispheric young generation in the parallel gc and cms collectors, dead objects aren't collected. instead live objects are evacuated from a region and the entire region is then considered free.
g1 is intelligent about how it evacuates live objects - it doesn't try to evacuate every live object in a given cycle. it targets the regions likely to reclaim as much space as possible and only evacuates those. it works out its target regions by calculating the proportion of live objects within each region and picking the regions with the lowest proportion of live objects.
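as a sketch of that selection policy, sorting candidate regions by live proportion and taking the emptiest ones gives the 'garbage first' ordering. real g1 also weighs predicted evacuation cost against a pause-time goal, which is omitted here, and the names are mine:

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

// Illustrative sketch: build a collection set from the regions with the
// lowest live proportion, i.e. the most reclaimable garbage.
public class CollectionSet {
    record Region(String name, long liveBytes, long regionBytes) {
        double liveRatio() { return (double) liveBytes / regionBytes; }
    }

    static List<Region> choose(List<Region> regions, int count) {
        return regions.stream()
                .sorted(Comparator.comparingDouble(Region::liveRatio))
                .limit(count)
                .toList();
    }

    public static void main(String[] args) {
        long mb = 1L << 20;
        List<Region> cset = choose(Arrays.asList(
                new Region("a", 900 * 1024, mb),   // ~88% live: expensive to evacuate
                new Region("b", 100 * 1024, mb),   // ~10% live: cheap, big win
                new Region("c", 512 * 1024, mb)),  // 50% live
                2);
        System.out.println(cset.get(0).name()); // prints b
        System.out.println(cset.get(1).name()); // prints c
    }
}
```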
objects are evacuated into free regions, from multiple other regions. this means that g1 compacts the data when performing gc. this is operated on in parallel by multiple threads. the traditional 'parallel gc' does this but cms doesn't.
similar to cms and parallel gc there is a concept of tenuring: young objects become 'old' if they survive enough collections. this number is called the tenuring threshold. if a young generation region survives the tenuring threshold and retains enough live objects to avoid being evacuated, then the region is promoted - first to a survivor region and eventually to a tenured region - and is never evacuated.
unfortunately, g1 can still encounter a scenario similar to a concurrent mode failure, in which it falls back to a stop-the-world full gc. this is called an evacuation failure and happens when there aren't any free regions - no free regions means nowhere to evacuate objects to.
theoretically evacuation failures are less likely to happen in g1 than concurrent mode failures are in cms. this is because g1 compacts its regions on the fly rather than just waiting for a failure for compaction to occur.
despite the compaction and efforts at low pauses, g1 isn't a guaranteed win, and any attempt to adopt it should be accompanied by objective, measurable performance targets and gc log analysis. the methodology required is out of the scope of this blog post, but hopefully i will cover it in a future post.
algorithmically there are overheads that g1 encounters that other hotspot collectors don't. notably the cost of maintaining remembered sets. parallel gc is still the recommended throughput collector, and in many circumstances cms copes better than g1.
it's too early to tell if g1 will be a big win over the cms collector, but in some situations it's already providing benefits for developers who use it. over time we'll see whether the performance limitations of g1 are inherent, or whether the development team just needs more engineering effort to solve the problems that are there.
Published at DZone with permission of Richard Warburton, DZone MVB. See the original article here.