New JPA Performance Benchmark
This article presents a new open source database performance benchmark for JPA that covers Hibernate, EclipseLink, DataNucleus, OpenJPA and ObjectDB.
My name is Ilan Kirsh and I am the founder of ObjectDB Software, the provider of ObjectDB, a high-performance commercial Java object database that natively supports both JPA and JDO.
I have created a comprehensive new benchmark for measuring JPA provider performance. As part of this, I collected results for many different combinations of JPA implementations (Hibernate, EclipseLink, DataNucleus, OpenJPA) and leading RDBMS products (MySQL, PostgreSQL, Derby, H2, HSQLDB, SQLite), as well as ObjectDB.
You can see the results at jpab.org. Try out the site: you can compare products and rank them by their results.
Why a Benchmark Focused on JPA?
The Java Persistence API (JPA) is the new standard for working with databases in Java. Unlike older persistence solutions (JDBC, EJB CMP), JPA supports direct persistence and retrieval of POJOs (Plain Old Java Objects), which increases development productivity.
There are many open source benchmark programs for testing database performance. However, most of the Java benchmarks are based on JDBC and cannot be used to compare Java Persistence API (JPA) providers, such as Hibernate, EclipseLink and ObjectDB.
The few JPA benchmarks published so far have been very limited in scope, restricted to a single RDBMS and to a few very specific database operations.
A good JPA benchmark must make it practical to compare many combinations of JPA providers and databases, because a JPA provider that works well with one DBMS may become very slow with another (as the benchmark results indeed show).
Why do I care? I wanted a transparent, repeatable set of tests to compare ObjectDB with the other products out there. I also wanted something that people could directly critique and try for themselves.
In addition to ObjectDB, four ORM-based JPA implementations have been tested (Hibernate, EclipseLink, OpenJPA, DataNucleus) with six different RDBMS products (MySQL, PostgreSQL, Derby, H2, HSQLDB, SQLite), some of them in both client-server and embedded modes.
The results can be used in different ways. First, you can browse the results and use filters to focus on a specific test.
For example, the following chart presents the results of a specific test, persisting simple entity objects in batches of 5,000 per transaction:
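For readers unfamiliar with how such a test looks in code, a batch-persist test of this shape can be sketched in standard JPA roughly as follows. This is a minimal sketch, not the benchmark's actual code: the entity class, its fields, and the persistence-unit name "benchmark" are all illustrative, and it assumes a JPA 2 provider and a configured persistence unit on the classpath.

```java
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Persistence;

public class BatchPersistSketch {

    // Hypothetical minimal entity; the benchmark's real entity classes differ.
    @Entity
    public static class Point {
        @Id @GeneratedValue private long id;
        private int x, y;
        protected Point() {}                      // JPA requires a no-arg constructor
        public Point(int x, int y) { this.x = x; this.y = y; }
    }

    // Number of transactions needed to persist 'total' objects in batches.
    static int batchCount(int total, int batchSize) {
        return (total + batchSize - 1) / batchSize;
    }

    public static void main(String[] args) {
        // "benchmark" is an assumed persistence-unit name defined in persistence.xml.
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("benchmark");
        EntityManager em = emf.createEntityManager();
        final int total = 100000, batchSize = 5000;
        for (int b = 0; b < batchCount(total, batchSize); b++) {
            em.getTransaction().begin();          // one transaction per batch of 5,000
            for (int i = 0; i < batchSize; i++) {
                em.persist(new Point(i, i * 2));
            }
            em.getTransaction().commit();
            em.clear();                           // detach objects so the context stays small
        }
        em.close();
        emf.close();
    }
}
```

Because only standard JPA calls appear here, the same loop can be pointed at any of the tested provider/DBMS combinations purely by changing the persistence-unit configuration.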
Second, you can view a result summary for:
- every tested JPA provider, e.g. Hibernate;
- every tested database, e.g. MySQL;
- every tested combination of JPA provider and database, e.g. EclipseLink and H2.
For example, the following chart presents the results of SQLite in the benchmark (with Hibernate and EclipseLink):
Finally, you can produce a head-to-head comparison of any two tested combinations, e.g. Hibernate with Derby in server mode vs. embedded mode:
The Benchmark Program
Let me explain a little about the main criteria I used in creating the benchmark. First, it should work with any JPA 2 implementation. To prevent implementation-specific code from influencing the results, exactly the same standard JPA code must be used with every tested implementation.
Second, the benchmark should work with any database that has JPA support, and it should be able to generate separate results for every tested JPA/DBMS combination, e.g. Hibernate/MySQL, Hibernate/PostgreSQL, EclipseLink/MySQL, etc.
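Because only standard JPA code is used, switching the tested implementation comes down to configuration. As a rough sketch (the persistence-unit name, JDBC URL and driver below are illustrative, not the benchmark's actual settings), changing the provider is a matter of swapping one element in persistence.xml:

```xml
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">
  <persistence-unit name="benchmark">
    <!-- Swap this single line to test another implementation, e.g.
         org.eclipse.persistence.jpa.PersistenceProvider for EclipseLink -->
    <provider>org.hibernate.ejb.HibernatePersistence</provider>
    <properties>
      <!-- JDBC settings are illustrative; each tested DBMS gets its own URL and driver -->
      <property name="javax.persistence.jdbc.driver" value="com.mysql.jdbc.Driver"/>
      <property name="javax.persistence.jdbc.url"
                value="jdbc:mysql://localhost/benchmark"/>
    </properties>
  </persistence-unit>
</persistence>
```

The `javax.persistence.jdbc.*` properties are standard in JPA 2, so the same file structure works across providers; only the provider class and the JDBC settings vary per combination.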
Third, the tests should be easy to run and configure, and it should also be easy to add new tests.
In addition, the benchmark has to be open source. On the jpab.org site you can download the full source of all the tests, try them yourself, and view and critique the code.
One of the clear outcomes of this testing is that ObjectDB outperforms the other JPA/DBMS combinations often by an order of magnitude. These results initially startled me.
"Haha", I hear you mutter. "He would say that". It's a reasonable response; after all, I created the tests.
The key point is that the tests are open and transparent, and the source is available. Please run the tests yourself and verify the results. There is no magic here; I want as much feedback as possible so I can make the benchmark as fair as possible.
When I started selling ObjectDB in 2003, I didn't attach too much importance to performance. The original purpose of developing ObjectDB was merely to provide an easy-to-use database that can store objects (and graphs of objects) with no need for object-relational mapping.
However, over the years, feedback from customers told me that performance is actually the main attraction of ObjectDB. Many ObjectDB users have conducted their own benchmarks, and many of them have sent me the results. Oddly enough, I actually learned about the performance gap between ObjectDB and other products from users.

So why is ObjectDB faster? I put it down to the fact that far fewer layers are involved. With a JPA ORM and an RDBMS, the provider must convert the objects, process the metadata, and so on, and then generate the SQL. With ObjectDB, the process is far more direct. What I didn't realize is that this directness seems to improve performance by 10x.
So, frankly, one of my main motivations for developing a transparent JPA benchmark was to present the performance capabilities of ObjectDB. My aim is to be completely transparent, and in that spirit I am making all the results and test code available. The FAQ explains the limitations of the benchmark and the situations in which the results are relevant and those in which they are not. In addition, performance is not critical in every application, so in many cases using a slower JPA/DBMS combination is not an issue.
Note that the benchmark should also be useful to anyone who is interested only in RDBMS- and ORM-based JPA implementations and not in an object database. In that case it is even more objective, since I have no preference for any particular ORM or RDBMS product.
Call for Help in Tuning
All the results currently presented on the benchmark website reflect the default configuration of all the tested products (with one exception, explained in the FAQ). Because the tests are relatively simple, the default configuration is expected to work well, and it also seems fair to use the default configuration for all the products.
However, I would like to run a second phase in which all the products are run with tuning optimized for this benchmark. We have had some preliminary success with OpenJPA: by adjusting parameters, we were able to get results much closer to the high end of the ORM range and a much higher test pass ratio (initial results are presented on http://temp.jpab.org). This next phase will require expert help from the community. If you can help in tuning one or more of the tested JPA/DBMS combinations, please send an email to feedback at jpab dot org.