
Small Heaps For Small Latency


Just because a program supports large heaps doesn't mean that you need large heaps to make it work well.

“Windows NT addresses 2 Gigabytes of RAM, which is more than any application will ever need.” – Microsoft on the development of Windows NT, 1992

While the above quote is clearly no longer true, one of the misconceptions I often come across when talking to people about Azul’s Zing JVM concerns how much memory it needs.

Zing uses the Continuously Concurrent Compacting Collector (C4), which, as the name suggests, can compact the heap whilst application threads are active. This sets Zing apart from other commercial JVM garbage collectors, which fall back to a full compacting stop-the-world collection of the old generation when required. The real problem with compacting stop-the-world collections is that the time taken to complete the collection is proportional to the size of the heap, not the amount of live data: the bigger your heap, the longer your application pauses will be. Not so with Zing, which supports very large heap sizes (currently up to 2 TB) without extremely long GC pauses. The problem is that people seem to conclude that, because Zing supports very large heaps, that’s what you need to make it work well.
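If you want a rough sense of how much time a collector is costing you on whatever JVM you run today, the standard management API is enough to get started. The sketch below is my own illustration, not anything from the benchmark: it simply polls the cumulative collection time once a second. Bear in mind that for concurrent collectors some of that time is spent alongside the application rather than in a pause, so treat it as an indicator, not a pause measurement.

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcTimeWatcher {
    public static void main(String[] args) throws InterruptedException {
        long previousTotal = 0;
        // Deliberately runs forever as a background monitor.
        while (true) {
            long total = 0;
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                long t = gc.getCollectionTime(); // cumulative milliseconds, -1 if unsupported
                if (t > 0) {
                    total += t;
                }
            }
            long delta = total - previousTotal;
            if (delta > 0) {
                System.out.printf("GC spent %d ms in the last second%n", delta);
            }
            previousTotal = total;
            Thread.sleep(1_000);
        }
    }
}
```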

Let’s look at a good example of how that is not necessarily the case. 

Our friends over at Hazelcast recently published some benchmark results from running their software on a two-node cluster, first with a 1 GB heap and then with a 2 GB heap, which are certainly not big heap sizes in today’s world. They used this configuration to compare performance on Oracle’s HotSpot JVM and Azul’s Zing JVM.
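The benchmark harness itself isn’t reproduced in this post, but if you haven’t used Hazelcast before, the workload boils down to put/get traffic against a distributed map. Here is a minimal, illustrative sketch of that idea using the public Hazelcast API; the map name, payload size, and iteration counts are my own and bear no relation to the actual benchmark configuration.

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

import java.util.Map;

public class MinimalHazelcastLoad {
    public static void main(String[] args) {
        // Start one embedded cluster member; a real benchmark runs several
        // members on separate JVMs/hosts and drives them from client processes.
        HazelcastInstance member = Hazelcast.newHazelcastInstance();
        Map<Integer, byte[]> map = member.getMap("benchmark-map"); // map name is illustrative

        byte[] value = new byte[1024]; // 1 KB payload, an arbitrary choice
        long start = System.nanoTime();
        for (int i = 0; i < 100_000; i++) {
            map.put(i % 10_000, value);
            map.get(i % 10_000);
        }
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("100k put/get pairs took " + elapsedMs + " ms");

        member.shutdown();
    }
}
```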

The full results are in the article, but it’s interesting to look at the highlights. I’ve written about jHiccup before and how useful it is for measuring the true performance of the platform an application is running on, since it reports response time as a real-world client would perceive it.
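The core idea behind jHiccup is simple enough to sketch in a few lines: a thread asks to sleep for a short, fixed interval, and any extra time it spends waiting beyond that is a “hiccup” that the whole JVM, and therefore any in-flight client request, had to sit through. The toy version below (my own simplification, not jHiccup’s actual code) only tracks the worst value seen; the real tool records every sample into an HdrHistogram and produces the percentile plots you’ll see in the graphs that follow.

```java
public class TinyHiccupMeter {
    public static void main(String[] args) throws InterruptedException {
        final long intervalMs = 1; // requested sleep per sample
        long maxHiccupMs = 0;

        while (true) {
            long before = System.nanoTime();
            Thread.sleep(intervalMs);
            long observedMs = (System.nanoTime() - before) / 1_000_000;

            // Anything beyond the requested interval is a stall the whole JVM
            // experienced: GC pause, OS scheduling delay, page fault, and so on.
            long hiccupMs = Math.max(0, observedMs - intervalMs);
            if (hiccupMs > maxHiccupMs) {
                maxHiccupMs = hiccupMs;
                System.out.println("New worst hiccup: " + maxHiccupMs + " ms");
            }
        }
    }
}
```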

To make the comparison a little easier, I’ve selected the worst-performing result of each pair of nodes for both heap sizes and shown HotSpot side by side with Zing.

Here are the graphs for the 1 GB heap, with the HotSpot results on the left and Zing on the right.

[Figure: jHiccup results for the 1 GB heap, HotSpot (left) and Zing (right)]

At first glance, this looks like HotSpot is giving better performance than Zing. However, if you look a little closer, you’ll see that the scales of the graphs are not the same. If we normalize the y-axis to give a fair comparison, the results look rather different:

[Figure: the 1 GB heap results with a normalized y-axis]

The picture is similar for the 2 GB heap (HotSpot results on the left, Zing on the right):

[Figure: jHiccup results for the 2 GB heap, HotSpot (left) and Zing (right)]

Again, to make a valid comparison, we normalize the y-axis:

[Figure: the 2 GB heap results with a normalized y-axis]

Putting the key figures into a table:

[Table: key hiccup figures for HotSpot and Zing at both heap sizes]

What we see here is that Zing massively outperforms HotSpot in this situation, even with these small heap sizes.

Although you could argue that the values in the table are worst-case times and reflect only 0.1% of the application’s runtime, numbers like these are still very significant when you have SLAs to meet. Let’s compare average hiccup times:

[Figure: average hiccup times for HotSpot and Zing]
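Averages alone can hide exactly the tail behaviour the earlier graphs expose, which is worth keeping in mind when you produce your own numbers. HdrHistogram, the library jHiccup records into, makes it easy to report the mean and the tail side by side. The sketch below feeds in purely synthetic latencies of my own choosing to show how a small fraction of long stalls barely moves the mean while dominating the 99.9th percentile.

```java
import org.HdrHistogram.Histogram;

public class MeanVersusTail {
    public static void main(String[] args) {
        // Track values up to one hour in microseconds, 3 significant digits.
        Histogram histogram = new Histogram(3_600_000_000L, 3);

        // Synthetic data: mostly fast responses with 0.2% long stalls.
        for (int i = 0; i < 100_000; i++) {
            histogram.recordValue(i % 500 == 0 ? 500_000 : 200); // microseconds
        }

        System.out.printf("mean     = %.0f us%n", histogram.getMean());
        System.out.printf("99.9%%ile = %d us%n", histogram.getValueAtPercentile(99.9));
        System.out.printf("max      = %d us%n", histogram.getMaxValue());
    }
}
```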

The conclusion is pretty obvious. If you’re using Hazelcast and are concerned about the latency effects of the JVM, the Zing JVM can help.
