New Contributors to HugeCollections


The objectives of HugeCollections for the first release are fairly ambitious.

  • Scale to massive collections, i.e. sizes much larger than 2 billion elements, without significant heap footprint or GC impact, by using off-heap ("heap-less") memory.
  • Be faster and more efficient than using plain JavaBeans with an ArrayList or Map, or a vector or unordered_map in C++.
  • Support durability (transparently saved to and loaded from disk).
  • Support thread safety (with no overhead if not required) and use multiple threads implicitly, i.e. large operations are automatically distributed.
  • Be faster and more efficient than using a database. Transactions will NOT be in this release.
  • Support in one application what might otherwise have to be distributed.
  • Dynamic code generation as required (no need to pre-generate code in the build).
A prototype has been built which shows these objectives are achievable; however, turning this library into a usable release will take some help.
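To illustrate the "heap-less memory" objective above, here is a minimal sketch of the general off-heap technique: values are stored in a direct ByteBuffer, which lives outside the Java heap and is therefore not scanned by the garbage collector. This is only an illustration of the idea; the class name and API are hypothetical, not HugeCollections' actual implementation.

```java
import java.nio.ByteBuffer;

// Hypothetical sketch of an off-heap long array. A single direct
// ByteBuffer is limited to about 2 GB; a real "huge" collection would
// chain many such buffers (or use other native memory) to pass the
// 2 billion element mark.
public class OffHeapLongArray {
    private final ByteBuffer buffer; // allocated outside the Java heap
    private final long length;

    public OffHeapLongArray(long length) {
        this.buffer = ByteBuffer.allocateDirect((int) (length * 8));
        this.length = length;
    }

    public void set(long index, long value) {
        buffer.putLong((int) (index * 8), value);
    }

    public long get(long index) {
        return buffer.getLong((int) (index * 8));
    }

    public long length() {
        return length;
    }

    public static void main(String[] args) {
        OffHeapLongArray array = new OffHeapLongArray(1_000_000);
        array.set(42, 12345L); // stored off-heap, invisible to the GC
        System.out.println(array.get(42)); // prints 12345
    }
}
```

Because the data never lives on the heap, GC pause times stay flat no matter how large the collection grows; the trade-off is that objects must be serialized into and out of the buffer.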


Peter Lawrey - I have been working with Java for 12 years and on high-performance systems for 15 years. I have worked at investment banks, a proprietary trading firm and Sun Microsystems.

Two new contributors

Rob Austin is currently contracting for a large investment bank, working on a low-latency pricing and trading platform. He has over 10 years of Java experience.

Costantino Cerbo is a certified Java developer (SCJP and SCBCD) and software consultant with more than 6 years' experience. He has worked for one of Italy's largest banks. He is an Italian native speaker, fluent in German and strong in English (TOEFL: 607 points).

More contributors welcome

I am looking for additional contributors to:
- Document the high-level approach used and its advantages.
- Produce proper documentation and a peer review of the design.
- Code review the hand-coded collections for list and hash map (these are templates for the auto-generated classes).
- Build a tool to assist the conversion of the templates into code generation.
- Write test cases for correct functionality.
- Write performance and scalability tests.
- Write comparison tests for performance and scalability.
- Compare the features with those of similar products like Ehcache BigMemory, Javolution and Trove4j (and find other products worth comparing), as well as C++.
- Document and blog the comparison.
- Examine thread-safety support and tests.
- Add auto multi-threading for the filter() and visit() methods.
- Examine integration and examples of use in JVM languages like Scala, Groovy, Jython and JRuby, and see what support can be given, i.e. whether there are simple things which can be done to make the library simpler/more natural to use.
- Produce a JTable GUI demo with one billion rows.
- Add queue/dispatcher support.
- Add sorted index support (a la TreeMap, or map in C++).
- Add non-unique indexes.
- Add simple remote support (RMI/RPC).
- Add distributed copies of data and partitioning.
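The "auto multi-threading for filter() and visit()" item above can be sketched as follows: the collection splits its index range across cores itself, so the caller never manages threads. The method name filterCount and the use of parallel streams are my assumptions for illustration, not HugeCollections' actual API.

```java
import java.util.function.LongPredicate;
import java.util.stream.LongStream;

// Hypothetical sketch: a filter-style operation that is implicitly
// distributed across all available cores, so "large operations are
// automatically distributed" from the caller's point of view.
public class AutoParallelFilter {

    // Count entries in [0, size) matching the predicate; the scan is
    // split across cores by the parallel stream machinery.
    static long filterCount(long size, LongPredicate predicate) {
        return LongStream.range(0, size)
                .parallel()   // work is partitioned across threads implicitly
                .filter(predicate)
                .count();
    }

    public static void main(String[] args) {
        // Count multiples of 7 below ten million without any explicit threading.
        long matches = filterCount(10_000_000, i -> i % 7 == 0);
        System.out.println(matches);
    }
}
```

The key design point is that parallelism is opt-out rather than opt-in: a visit() or filter() call on a billion-element collection should saturate the machine by default, with a single-threaded mode available when determinism matters.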

Any suggestions welcome.

My email

You can contact me as peter.lawrey on gmail.


From http://vanillajava.blogspot.com/2011/09/new-contributors-to-hugecollections.html


