What is Allocation Rate?

This post will try to remove the magic from the phrases "unsustainable allocation rate" and "keeping your allocation rates low."

By Nikita Salnikov-Tarnovski · Sep. 21, 15 · Tutorial

Phrases such as “unsustainable allocation rate” and “you need to keep your allocation rates low” seem to belong to the vocabulary of Java Champions alone. Complex, scary, and surrounded by a magic aura.

As often happens, the magic disappears in a puff of smoke when you look at the concepts more closely. This post will try to remove the magic from these terms.

What is Allocation Rate and Why Should I Care?

Allocation rate is measured in the amount of memory allocated per time unit. Often it is expressed in MB/sec, but you can use PB per light-year if you feel like it. So that is all there is: no magic, just the amount of memory you allocate in your Java code, measured over a period of time.

Knowing this fact alone is not too beneficial, though. If you can bear with me, I will walk you through the practical use of the concept.
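
For a concrete starting point, here is a minimal, hypothetical allocation-heavy program; the class name and sizes are my own choices, not something from the original article, and any code that creates lots of short-lived objects in a loop would serve equally well. Running it with the GC logging flags described in the next section should produce output similar to the log snippet shown there.

public class AllocationDemo {

    public static void main(String[] args) {
        long blackhole = 0;
        // Allocate 1 KB arrays in a tight loop; each array becomes garbage almost
        // immediately, which gives a steady, easily observable allocation rate.
        for (int i = 0; i < 10_000_000; i++) {
            byte[] chunk = new byte[1024];
            blackhole += chunk.length; // crude attempt to keep the JIT from optimizing the work away
        }
        System.out.println("Done, checksum: " + blackhole);
    }
}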

Facing a high allocation rate can mean trouble for your application's performance. From the practical standpoint, the impact surfaces as Garbage Collection becoming a bottleneck. From the hardware standpoint, even commodity hardware can sustain several GB/sec of allocations per core, so as long as your rates do not exceed roughly 1 GB/sec per core, you can be rather confident that your hardware will not actually be the bottleneck.

So, focusing on the GC, we can start with an analogy that also holds in the real world: if you create a lot of stuff, you tend to face a lot of cleaning up afterwards. Knowing that the JVM is built with a garbage collection mechanism, we need to look into how the allocation rate changes the frequency or duration of GC pauses.

Measuring Allocation Rate

Let us start with measuring the allocation rate. For this, let's turn on GC logging by specifying the -XX:+PrintGCDetails -XX:+PrintGCTimeStamps flags for the JVM. The JVM then starts logging the GC pauses similar to the following snippet:

0.291: [GC (Allocation Failure) [PSYoungGen: 33280K->5088K(38400K)] 33280K->24360K(125952K), 0.0365286 secs] [Times: user=0.11 sys=0.02, real=0.04 secs] 
0.446: [GC (Allocation Failure) [PSYoungGen: 38368K->5120K(71680K)] 57640K->46240K(159232K), 0.0456796 secs] [Times: user=0.15 sys=0.02, real=0.04 secs] 
0.829: [GC (Allocation Failure) [PSYoungGen: 71680K->5120K(71680K)] 112800K->81912K(159232K), 0.0861795 secs] [Times: user=0.23 sys=0.03, real=0.09 secs] 

From the GC log above, we can calculate the allocation rate as the difference between the Young Generation occupancy after the last collection completed and before the next one started, divided by the time between the two events. Using the example above, we can extract the following information:

  • At 291ms after the JVM was launched, 33,280KB of objects had been created. The first minor GC event cleaned the Young Generation, after which 5,088KB of objects were left in it.
  • At 446ms after launch, the Young Generation occupancy had grown to 38,368KB, triggering the next GC, which reduced the occupancy to 5,120KB.
  • At 829ms after launch, the Young Generation occupancy was 71,680KB and the GC reduced it again to 5,120KB.

This data can then be expressed in the following table, which calculates the allocation rate from the deltas of the Young Generation occupancy:

Event     Time     Young before    Young after    Allocated during    Allocation rate
1st GC    291ms    33,280KB        5,088KB        33,280KB            114MB/sec
2nd GC    446ms    38,368KB        5,120KB        33,280KB            215MB/sec
3rd GC    829ms    71,680KB        5,120KB        66,560KB            174MB/sec
Total     829ms    N/A             N/A            133,120KB           161MB/sec

Having this information allows us to say that this particular piece of software had an allocation rate of 161 MB/sec during the period of measurement.
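
To make the arithmetic explicit, here is a small Java sketch (my own helper, not a tool referenced by the article) that reproduces the numbers in the table from the GC log timestamps and Young Generation occupancies. Note that the table rounds with 1,000 KB per MB, which is precise enough for this kind of back-of-the-envelope estimate.

public class AllocationRateMath {

    // Memory allocated between two minor GCs is "Young occupancy before this GC"
    // minus "Young occupancy after the previous GC", divided by the wall-clock
    // time between the two events.
    static double rateInMBPerSec(long youngAfterPreviousKb, long youngBeforeThisKb,
                                 double previousGcTimestampSec, double thisGcTimestampSec) {
        long allocatedKb = youngBeforeThisKb - youngAfterPreviousKb;
        return (allocatedKb / 1000.0) / (thisGcTimestampSec - previousGcTimestampSec);
    }

    public static void main(String[] args) {
        // Numbers taken from the GC log snippet above (timestamps in seconds, sizes in KB).
        System.out.printf("1st GC: %.0f MB/sec%n", rateInMBPerSec(0,     33_280, 0.0,   0.291));
        System.out.printf("2nd GC: %.0f MB/sec%n", rateInMBPerSec(5_088, 38_368, 0.291, 0.446));
        System.out.printf("3rd GC: %.0f MB/sec%n", rateInMBPerSec(5_120, 71_680, 0.446, 0.829));
    }
}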

Analyzing the Impact

Now, equipped with this information, we can understand how changes in the allocation rate affect application throughput by increasing or reducing the frequency of GC pauses. First and foremost, you should notice that only minor GC pauses cleaning the Young Generation are affected. Neither the frequency nor the duration of the GC pauses cleaning the Old Generation is directly impacted by the allocation rate; instead, they depend on the promotion rate, a term that we will cover in our next post.

Knowing that we can focus only on minor GC pauses, we should next look into the different memory pools inside the Young Generation. As allocation takes place in Eden, we can immediately look into how sizing Eden can impact the allocation rate. This gives us the hypothesis that increasing the size of Eden will reduce the frequency of minor GC pauses and thus allow the application to sustain a faster allocation rate.

And indeed, when running the same example with different Eden sizes via the -XX:NewSize, -XX:MaxNewSize, and -XX:SurvivorRatio parameters, we can see a two-fold difference in allocation rates (possible launch commands are sketched after this list):

  • Running the above example with 100M of Eden reduces the allocation rate to below 100MB/sec.
  • Increasing the Eden size to 1GB increases the allocation rate to just below 200MB/sec.
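
As an illustration of what such runs might look like, here are possible launch commands; the class name AllocationDemo and the exact flag values are my own assumptions, not the author's settings. With -XX:SurvivorRatio=8, Eden occupies 8/10 of the Young Generation, so Young Generation sizes of 125 MB and 1,250 MB give roughly 100 MB and 1 GB of Eden, respectively.

# Illustrative values only: roughly 100 MB of Eden
java -XX:+PrintGCDetails -XX:+PrintGCTimeStamps \
     -XX:NewSize=125m -XX:MaxNewSize=125m -XX:SurvivorRatio=8 AllocationDemo

# The same program with roughly 1 GB of Eden
java -XX:+PrintGCDetails -XX:+PrintGCTimeStamps \
     -XX:NewSize=1250m -XX:MaxNewSize=1250m -XX:SurvivorRatio=8 AllocationDemo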

If you are still wondering why this is true: if you stop your application threads for GC less frequently, they can do more useful work. More useful work also happens to create more objects, thus supporting the increased allocation rate.

Now, before you jump to the conclusion that “bigger Eden is better,” you should notice that the allocation rate may not, and probably does not, directly correlate with the actual throughput of your application. It is a technical measurement that contributes to throughput. The allocation rate can and will have an impact on how frequently minor GC pauses stop your application threads, but to see the overall impact, you also need to take major GC pauses into account and measure throughput not in MB/sec but in the business operations your application provides.


Published at DZone with permission of Nikita Salnikov-Tarnovski, DZone MVB. See the original article here.

Opinions expressed by DZone contributors are their own.
