
Put your fat Collections on a diet!


Background

Every Java program uses some sort of data structure, be it a trivial array, a Fibonacci heap, or something even more exotic that only a Google search knows about. In most cases, developers do not write their own implementations of these structures but use the ones provided either by the Java core APIs or by some third-party library, such as Apache Commons Collections or Google Guava. In my 10+ years of Java development, not a day has passed without me using some data structure from the Java Collections API. These Lists, Sets, and Maps are so natural to me that I don't hesitate for a second before writing

Map<Integer, String> map = new HashMap<Integer, String>();

And everything was fine until recently…

One of the classes inside our Plumbr Java agent needs to store a bunch of integers as one of its fields. The semi-formal requirements are as follows:

  • We need a data structure for storing integers.
  • No duplicates.
  • Order is unimportant.
  • We need to be able to add new elements to this structure.
  • We need to be able to look up whether some element exists in this structure.
  • The number of distinct elements is limited to a couple of hundred at most.
  • Memory consumption is more important than speed.
  • Nevertheless, performance must be decent, so memory-mapped files, databases, etc. are out of the question.

The natural choice for these requirements, at least considering my experience so far, is java.util.HashSet<Integer>. So, without thinking twice, I gave it a try. That was a disaster!

Experiment

Well, in order to illustrate my point, we need some way to measure the memory usage of different data structures. For this blog post, I used the following procedure:

  1. Write a Java class with a main method, which holds the data structure in question as a local variable (see the sketch after this list).
  2. Add an infinite loop to the end of this main method so that the thread does not die too quickly.
  3. Using Eclipse Memory Analyzer, take a heap dump and find the size of the retained heap for the local variable of interest.
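A minimal harness following this procedure might look like the code below, shown here with the HashSet variant plugged in. This is only a sketch; the class and variable names are mine, not the original benchmark code:

import java.util.HashSet;
import java.util.Set;

// Measurement harness (illustrative sketch): the structure under test lives in
// a local variable of main(), and the endless loop keeps the thread alive so a
// heap dump can be taken and inspected in Eclipse Memory Analyzer.
public class MemoryExperiment {

    public static void main(String[] args) throws InterruptedException {
        int COUNT = 10;
        Set<Integer> set = new HashSet<Integer>();
        for (int i = 0; i < COUNT; i++) {
            set.add(i);
        }
        // Keep the process alive so the dump can be taken at leisure.
        while (true) {
            Thread.sleep(1000);
        }
    }
}

One convenient way to obtain the dump from the running JVM is jmap -dump:format=b,file=heap.hprof <pid>; the resulting .hprof file can then be opened in Eclipse Memory Analyzer.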

As a baseline I used the fact that in Java an int takes 4 bytes, so for COUNT integers we need 4*COUNT bytes. We can then calculate the overhead of a given data structure as follows:

Overhead = structure's retained heap / (4 * COUNT)
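For example, with COUNT = 10 the baseline is 4 * 10 = 40 bytes; a retained heap of 720 bytes (the HashSet measurement below) therefore corresponds to an overhead of 720 / 40 = 18.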

Please note that Java distinguishes between primitives and objects, and the Collections API operates only on objects. This means the total overhead consists of the overhead of the collection data structure itself plus the overhead of using Integer objects instead of primitives.

Results


So, having settled that, let us measure how big java.util.HashSet really is. To do that, I used the following code:

Set<Integer> set = new HashSet<Integer>();
int COUNT = 10;
for (int i = 0; i < COUNT; i++) {
  set.add(i);
}


Let us look at the results:
COUNT      Retained heap (bytes)   Overhead
10         720                     18
100        6 960                   17.4
1000       85 424                  21.36
1000000    88 774 256              22.19

Wow! Just wow! Storing integers in a java.util.HashSet takes about 20(!) times as much memory as the information we are actually storing. This is a HUGE overhead, in my opinion. The explanation is that under the hood a HashSet is backed by a HashMap, so every element drags along a boxed Integer, an internal map entry object, and a reference slot in the bucket array. Taking into account our need to work in really constrained memory conditions, that was unacceptable, and we had to find some other way. We started by reviewing our requirements: what do we really need? It turns out that a plain old Java array suits our requirements just fine. Changing my code to this:

int COUNT = 10;
Integer[] array = new Integer[COUNT];
for (int i = 0; i < COUNT; i++) {
    array[i] = i;
} 
yielded the following results:
COUNT      Retained heap (bytes)   Overhead
10         344                     8.6
100        3 224                   8.06
1000       32 024                  8.006
1000000    32 000 024              8.000006

So far, so good. This reduced the overhead by a factor of 2-3 compared to the HashSet, but it is still too large for my taste: each array slot holds only a reference, while the Integer objects themselves still live separately on the heap. So I simply replaced Integer[] with int[]:
int COUNT = 10; 
int[] array = new int[COUNT];
for (int i = 0; i < COUNT; i++) {
    array[i] = i;
}

and got:
COUNT      Retained heap (bytes)   Overhead
10         64                      1.6
100        424                     1.06
1000       4 024                   1.006
1000000    4 000 024               1.000006

Now, that's much better. We have a constant overhead of 24 bytes per array (the array object's own header and length field on the JVM used for these measurements), and implementing the needed operations, such as element addition and duplicate elimination, is as easy as pie; a sketch follows below.
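For illustration, here is one possible shape for such a structure. This is a sketch of mine, not the actual Plumbr code: a plain int[] with linear scans, which is perfectly adequate for a couple of hundred elements:

import java.util.Arrays;

// Compact integer set backed by a plain int[] (illustrative sketch only).
// Lookups are linear scans; O(n) is fine for a couple of hundred elements.
public class CompactIntSet {

    private int[] elements = new int[16];
    private int size = 0;

    // Returns true if the value is already stored.
    public boolean contains(int value) {
        for (int i = 0; i < size; i++) {
            if (elements[i] == value) {
                return true;
            }
        }
        return false;
    }

    // Adds the value unless it is already present (no duplicates).
    public void add(int value) {
        if (contains(value)) {
            return;
        }
        if (size == elements.length) {
            // Grow the backing array when it is full.
            elements = Arrays.copyOf(elements, elements.length * 2);
        }
        elements[size++] = value;
    }
}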

Conclusion


The first conclusion is quite obvious: an array of primitives is the most compact data structure. That is not surprising :) What was a really big surprise for me was the magnitude of the overhead that the Java Collections API carries. I hope to keep that in mind the next time I choose a structure for my data.

Of course, the Java Collections API will not become deprecated as a result of this post :) And I will certainly use it again and again, as it provides a very easy-to-use API. But in those rare cases when every byte really matters, it is much better to be aware of this overhead and design your piece of software accordingly.

Another case where this overhead may be important is storing a large amount of data in, for example, a HashSet. With a couple of million elements, the overhead alone grows, in absolute numbers, to a couple of hundred megabytes and requires a significant increase in your heap size.
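To put a number on it: at the roughly 22x overhead measured above, two million Integer elements in a HashSet need about 2 000 000 * 4 * 22 ≈ 176 MB of heap, compared to the 8 MB that the raw int values themselves would occupy.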
