Infinispan Memory Overhead
Have you ever wondered how much Java heap memory is actually consumed
when data is stored in Infinispan cache? Let's look at some numbers
obtained through real measurement.
The strategy was the following:
1) Start the Infinispan server in local mode (only one server instance, eviction disabled).
2) Keep calling full garbage collection (via JMX, or directly via System.gc() when Infinispan is deployed as a library) until the difference in memory consumed by the running server drops below 100 kB between two consecutive GC runs (see the sketch after this list).
3) Load the cache with 100 MB of data via the respective client (or store the data directly in the cache when Infinispan is deployed as a library).
4) Keep calling GC until the used memory stabilizes.
5) Measure the difference between the final values of consumed memory after the first and second cycle of GC runs.
6) Repeat steps 3), 4), and 5) four times to get an average value (the first iteration is ignored).
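In library mode, steps 2) through 4) can be sketched roughly as follows. This is a minimal illustration, not the actual test harness: the article reads consumed memory from the verbose GC log described below, while the Runtime-based usedMemory() helper here is a simpler stand-in, and names like stabilizeHeap() are made up for the example.

    import org.infinispan.Cache;
    import org.infinispan.manager.DefaultCacheManager;

    public class MemoryOverheadSketch {

        // Illustrative stand-in for reading consumed memory from the GC log.
        static long usedMemory() {
            Runtime rt = Runtime.getRuntime();
            return rt.totalMemory() - rt.freeMemory();
        }

        // Steps 2) and 4): force full GC until two consecutive runs differ by < 100 kB.
        static long stabilizeHeap() throws InterruptedException {
            long before = usedMemory();
            while (true) {
                System.gc();
                Thread.sleep(100);
                long after = usedMemory();
                if (Math.abs(before - after) < 100 * 1024) {
                    return after;
                }
                before = after;
            }
        }

        public static void main(String[] args) throws InterruptedException {
            DefaultCacheManager cm = new DefaultCacheManager();
            Cache<String, byte[]> cache = cm.getCache();

            long baseline = stabilizeHeap();

            // Step 3): ~100 MB of data, 10-char keys, byte-array values.
            int valueSize = 8 * 1024;                        // example value size
            int numEntries = 100 * 1024 * 1024 / valueSize;
            for (int i = 0; i < numEntries; i++) {
                cache.put(String.format("key%07d", i), new byte[valueSize]);
            }

            long loaded = stabilizeHeap();
            System.out.println("Consumed: " + (loaded - baseline) + " bytes");
            cm.stop();
        }
    }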
The amount of consumed memory was obtained from a verbose GC log (related JVM options: -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/tmp/gc.log).
The test output looks like this: https://gist.github.com/4512589
The operating system (Ubuntu) as well as the JVM (Oracle JDK 1.6) were 64-bit. The Infinispan version was 5.2.0.Beta6. Keys were kept intentionally small (10-character Strings) and values are byte arrays. The target entry size is the sum of the key size and the value size.
Memory overhead of Infinispan accessed through clients
So how much additional memory is consumed on top of each entry? The overhead was measured through three remote clients (a HotRod loading sketch follows below):
HotRod client
MemCached client (text protocol, SpyMemcached client)
REST client (Content-Type: application/octet-stream)
In all three cases, the memory overhead for individual entries seems to be more or less constant across different cache entry sizes.
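For illustration, loading the data via the HotRod Java client looks roughly like this. The server address, value size, and key format are assumptions made for the sketch, and the constructor form shown follows the older HotRod client API:

    import org.infinispan.client.hotrod.RemoteCache;
    import org.infinispan.client.hotrod.RemoteCacheManager;

    public class HotRodLoader {
        public static void main(String[] args) {
            // Assumes a local Infinispan server on the default HotRod port.
            RemoteCacheManager rcm = new RemoteCacheManager("localhost", 11222);
            RemoteCache<String, byte[]> cache = rcm.getCache();

            int valueSize = 8 * 1024;                        // example value size
            int numEntries = 100 * 1024 * 1024 / valueSize;  // ~100 MB of payload
            for (int i = 0; i < numEntries; i++) {
                cache.put(String.format("key%07d", i), new byte[valueSize]);
            }
            rcm.stop();
        }
    }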
Memory overhead of Infinispan deployed as a library
Infinispan was deployed to JBoss Application Server 7 using Arquillian. There was almost no difference in overall consumed memory whether or not the storeAsBinary attribute was used (see the sketch below). The overhead per entry is again constant across different entry sizes, at roughly 151 bytes.
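Enabling storeAsBinary programmatically in library mode can be sketched as follows. The fluent builder calls shown match the Infinispan 5.x configuration style; treat the exact method names as an assumption:

    import org.infinispan.configuration.cache.Configuration;
    import org.infinispan.configuration.cache.ConfigurationBuilder;
    import org.infinispan.manager.DefaultCacheManager;

    public class StoreAsBinaryExample {
        public static void main(String[] args) {
            // storeAsBinary keeps entries in serialized (byte[]) form rather
            // than as live object references.
            Configuration cfg = new ConfigurationBuilder()
                    .storeAsBinary().enable()
                    .build();

            DefaultCacheManager cm = new DefaultCacheManager(cfg);
            cm.getCache().put("key0000001", new byte[8 * 1024]);
            cm.stop();
        }
    }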
Conclusion
The memory overhead is slightly more than 150 bytes per entry when storing data into the cache locally. When the cache is accessed via remote clients, the memory overhead is a little higher, ranging from ~170 to ~250 bytes depending on the remote client type and the cache entry size. If we ignore the statistics for 1 MB entries, which could be skewed by the small number of entries (only 100) stored in the cache, the range is even narrower.
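The per-entry figures fall out of a simple division: the extra heap beyond the 100 MB payload divided by the number of entries, where the entry count itself depends on the entry size. A quick worked illustration (the measured delta below is a hypothetical number, not from the test):

    public class OverheadMath {
        public static void main(String[] args) {
            long payload = 100L * 1024 * 1024;               // 100 MB of key+value data

            // Entry counts backing each statistic:
            System.out.println(payload / (1024 * 1024));     // 1 MB entries ->     100
            System.out.println(payload / 1024);              // 1 kB entries -> 102,400

            // Per-entry overhead = (stabilized heap delta - payload) / entry count.
            long entries = payload / 1024;                   // 102,400 entries of 1 kB
            long delta = payload + entries * 205;            // hypothetical measurement
            System.out.println((delta - payload) / entries); // ~205 bytes per entry
        }
    }

With only 100 samples behind the 1 MB figure, its per-entry statistic is naturally the noisiest, which is why excluding it narrows the range.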