ActiveObjects: An Easier Java ORM
Java is absolutely full of Object Relational Mapping frameworks. Obviously Hibernate is the first which comes to mind, but also high on the list would be the likes of iBatis, Beankeeper, etc. In short, there is no shortage of ORMs. However, they all seem to suffer from at least one of a few fundamental flaws:
- They're complicated and hard to use
- They're slow and unsuitable for high-load deployment
- They're only compatible with a small handful of databases
Most ORMs fall afoul of the first problem: they're just too darn complicated. Think about it: when was the last time you tried to configure a Hibernate project from scratch? You have to write more XML than code! iBatis makes things a bit easier at times, but you still have to run through a ton of manual configuration and headaches. It would be nice if the framework could just automate most of this annoyance away. However, it seems very few frameworks these days put the proper emphasis on "ease of use" and a low learning curve.
The Example to Follow
One major exception to this "ease of use drought" is the ActiveRecord framework. For those of you who don't know, ActiveRecord is the ORM upon which Ruby on Rails is based. In fact, without ActiveRecord, Rails really wouldn't be anywhere near as powerful a framework. AR provides an extremely intuitive way to work with databases. For example, let's assume you have the following (fairly simple) schema:
[Schema diagram: a companies table (id, name, ticker symbol) with a one-to-many relationship to a people table (id, first name, last name, age, and a foreign key back to companies).]
This is a fairly common scenario. We have a simple table which has a one-to-many relationship with another table. Each table has a few values and a primary key (id). Now in Hibernate, we would map to this schema using a pair of Java beans and mountains of mapping XML. With ActiveRecord, all we need to do is write the following Ruby code:
class Person < ActiveRecord::Base
  belongs_to :company
end

class Company < ActiveRecord::Base
  has_many :people
end

p = Person.find(1)
p.age = 27
p.save

c = Company.find(1)
c.people.each do |person|
  puts "#{person.first_name} #{person.last_name}"
end
As you can see, ActiveRecord puts a very high emphasis on minimalism and simplicity. In fact, thanks to Ruby's dynamic meta-programming underpinnings, we don't even need to write methods to access the data. All we need to do is define the empty classes, add a little mapping DSL goodness, and ActiveRecord figures out the rest (including pluralization of the table names). ActiveRecord will dynamically generate and execute all the SQL required to run the driver code at the bottom of the example. In fact, the only logic missing from this example is the code required to make the actual connection to the database.
ActiveRecord is an example of "convention over configuration". Notice all the assumptions it's making. To start with, it assumes that your table names are pluralized. When we get to the mapping logic, it makes the additional assumption that the people table contains a company_id field with the appropriate foreign key configuration. In short, ActiveRecord is assuming that your needs are the same as those of 90% of developers in this situation, and that you have configured your environment accordingly. As such, it can do some pretty amazing things with little-to-no intervention on your part.
Lessons Learned, Applied to Java
So ActiveRecord is interesting, but how does this help us? Well, for one thing, we can infer the following axioms:
- Conventions can be assumed, and conclusions can be drawn from these conventions
- Simplicity, simplicity, simplicity, simplicity
- Less code is more
- It's ok if the API is mildly atypical and DSL-like, as long as it makes sense and remains internally consistent
So, in theory we could design a brand new ORM, based on these principles, entirely in Java. We may not be able to take advantage of cool Ruby features like method_missing, but Java has its own mechanisms for code reduction which, while not as powerful as Ruby's, still allow for some interesting stuff. The first principle we can focus on is "less code is more".
Almost every Java ORM currently available is POJO based. This means that you write Java beans, with fields, accessors and mutators, then pass around instances of these beans. The ORM then reflectively interrogates these beans, extracting values and using mapping logic (usually defined in XML) to persist (or retrieve) these values in the database. This requires a lot of excess boilerplate code. Think about it: defining a simple bean for our companies table would require something like this:
public class Company {
    private int id;
    private String name;
    private String tickerSymbol;

    public Company() {}

    public int getId() {
        return id;
    }

    public void setId(int id) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public String getTickerSymbol() {
        return tickerSymbol;
    }

    public void setTickerSymbol(String tickerSymbol) {
        this.tickerSymbol = tickerSymbol;
    }
}
While this is nice and readable, it's rather long. Compare this to the three lines required for the ActiveRecord Company class, and we haven't even dealt with relations yet!
At some minimal level, the Java compiler still needs to be satisfied. So even in a theoretically least-possible-code situation, we still need to provide something like the following:
public class Company {
    public int getID();
    public void setID(int id);

    public String getName();
    public void setName(String name);

    public String getTickerSymbol();
    public void setTickerSymbol(String tickerSymbol);
}
All the same methods are still there, but we haven't actually implemented anything. Looking closely at this, we seem to have written an interface definition. In fact, by changing "class" to "interface" in the above code sample, we can make it compile. Of course, it doesn't do anything yet, but we can handle that later. The important part is that we've established the minimum and most intuitive code required to map the companies table.
Java has a little-known reflective backwater called "dynamic proxies". Basically, this mechanism allows developers to implement interfaces dynamically, without having to actually write every method explicitly. All the developer has to do is provide an implementation of InvocationHandler. This implementation has a single method, invoke, which handles all method invocations called against the dynamic interface instance. It's about the closest thing Java has to Ruby's method_missing. Fortunately, it's good enough for our purposes.
If we define an InvocationHandler which handles the method invocations for our sample interface above, we could conceivably manage the entire entity state within our framework, eliminating the need for the end-developer to write this boilerplate themselves. We could even write a super-interface which defines the getID() and setID(int) methods. Extending this idea even further, we could also define a save() method, to allow API-users to call a method on the entity instance itself to persist values, rather than relying on a persistence manager.
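To make this concrete, here is a minimal sketch of the idea, assuming a hypothetical Entity super-interface and an EntityProxy invocation handler (illustrative names only, not the actual ActiveObjects implementation). The handler keeps entity state in a map rather than in hand-written fields:

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.HashMap;
import java.util.Map;

// The super-interface every entity would extend (sketch).
interface Entity {
    int getID();
    void setID(int id);
    void save();
}

// A hypothetical handler backing any entity interface: state lives in a map
// instead of hand-written fields.
class EntityProxy implements InvocationHandler {
    private final Map<String, Object> values = new HashMap<String, Object>();

    @SuppressWarnings("unchecked")
    public static <T extends Entity> T newInstance(Class<T> type) {
        return (T) Proxy.newProxyInstance(type.getClassLoader(),
                new Class<?>[] { type }, new EntityProxy());
    }

    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        String name = method.getName();

        if (name.startsWith("get")) {
            // a real handler would lazily load the value from the database here
            return values.get(name.substring(3));
        } else if (name.startsWith("set") && args != null && args.length == 1) {
            // a real handler would mark the field dirty for the next save()
            values.put(name.substring(3), args[0]);
            return null;
        } else if (name.equals("save")) {
            // a real handler would flush the dirty values back to the database
            return null;
        }

        throw new UnsupportedOperationException("Unhandled method: " + name);
    }
}

With something like this in place, a call such as EntityProxy.newInstance(Company.class) hands back a working Company instance, and calling setName("DZone") on it just works, without Company containing a single line of implementation code.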
Fleshing Out the Implementation
A few random interfaces does not an ORM make. There is far more to database access than just getters and setters. But as before, we can write concept code to guide our efforts in implementing the API. Given what we know about the framework so far (based on interfaces and dynamic proxies), we should be able to envision some plausible code:
public interface Company extends Entity {
    public String getName();
    public void setName(String name);

    public String getTickerSymbol();
    public void setTickerSymbol(String tickerSymbol);

    @OneToMany
    public Person[] getPeople();
}

// assume Person is defined similarly

EntityManager manager = new EntityManager("jdbc:mysql://localhost/test", "user", "password");

Company[] companies = manager.find(Company.class);
for (Company c : companies) {
    System.out.println(c.getName());

    for (Person p : c.getPeople()) {
        p.setFirstName("Joe");
        p.setAge(8);
        p.save();
    }
}

Company dzone = manager.find(Company.class, "name = DeveloperZone")[0];

Person me = manager.create(Person.class);
me.setFirstName("Daniel");
me.setLastName("Spiewak");
me.setCompany(dzone);
me.save();
You'll notice that we have added to the Company interface slightly: we're now handling relations. However, the bulk of the interesting prototyping is being done farther along in the sample. We've now introduced the EntityManager class.
EntityManager is the focal point for all of the interesting activity in our new ORM. This is where the database connection is established, and where the dynamic proxies are created and wrapped into instances of the relevant interfaces. It's also where interesting query operations like find() are handled. Unlike ActiveRecord, we can't put find() and friends on the Entity super-interface, since interfaces in Java cannot contain static methods. However, if we keep the EntityManager#find(…) syntax simple enough, it shouldn't make too much of a difference.
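To see where the proxies fit in, here is a rough sketch of the proxy-wrapping step an EntityManager has to perform, reusing the hypothetical EntityProxy from earlier (the real ActiveObjects internals are considerably more involved):

// A stripped-down stand-in for the real EntityManager (sketch only).
public class SimpleEntityManager {

    // Wrap a dynamic proxy in the requested entity interface and peer it
    // to a particular row by its primary key.
    public <T extends Entity> T get(Class<T> type, int id) {
        T entity = EntityProxy.newInstance(type);
        entity.setID(id);
        return entity;
    }
}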
Complex Queries
Of course an ORM is more than just database encapsulation and simple CRUD operations. We need some way to execute more complex queries against the database and receive entities as a result. It turns out that this is a surprisingly tough nut to crack.
Most ORMs (most notably JPA) utilize their own query language to solve the problem. So, getting all of the Person entities older than 18 would look something like this: "from Person where age > 18". As close as this may be to actual SQL, it really isn't SQL. This means that not only do you have to learn a whole new query language, but the framework itself needs to worry about string parsing and query execution, which brings with it a whole new set of issues. In short, it's an interesting idea, but in practice it turns into a bit of a mess.
Going back once again to our "hero ORM" implementation, ActiveRecord, we can see that it already solves the problem in a somewhat unique manner:
people = Person.find(:where => 'age > 18', :limit => 10)
If you ask me, this is pretty darn intuitive. It's fairly obvious what's going on: we don't need to worry about selecting the right fields for the ORM to construct the Person instances, and on top of that it translates very naturally into SQL. ActiveRecord might execute the following SQL in the above case: "SELECT * FROM people WHERE age > 18 LIMIT 10". No obscure query-language parsing; no new language to learn.
Unfortunately, taking our cue from this example is a bit challenging. ActiveRecord is making use of a Ruby feature which allows the easy creation of Hash instances as part of a method invocation. Even if Ruby didn't have this syntax, named parameters could have been used to achieve the same effect. Java, on the other hand, has no such constructs. There is no "simple" way to construct a Map instance on the fly, nor does Java support named parameters. In short, we're a bit stuck.
We could define a find method which takes a hundred parameters, varying its behavior accordingly based on which are non-null. This can be streamlined by overloading the find method to allow for different permutations of parameters. But this is problematic since a) we may have parameter type conflict, breaking the overloading, and b) it's verbose and ugly. In fact, our only recourse is to use a rather odd Java pattern which not-so-neatly gets around the lack of named parameters: method chaining.
If we define a Query class which has several methods used in defining query parameters, each returning this, we should be able to pass these to find and give it all the information it needs in a (comparatively) syntactically simple manner:
Person[] people = manager.find(Person.class, Query.select().where("age > 18").limit(10));
By chaining the method calls on the Query class, we can simulate the use of named parameters and limit the excess code required to run a complex query. In fact, this method even allows us to make use of prepared statement parameterization while still avoiding query parsing:
Person[] drinkers = manager.find(Person.class, Query.select().where("age > ?", 21).limit(10));
Person[] adults = manager.find(Person.class, Query.select().where("age > ?", 18).limit(10));
Person[] drivers = manager.find(Person.class, Query.select().where("age > ?", 16).limit(10));
Here all we're doing is taking advantage of the Java varargs feature and passing extra parameters to the where(String, Object…) method. Most databases will take the above queries, which all resolve to the parameterized "SELECT id FROM people WHERE age > ? LIMIT 10", and compile them down to a single prepared statement. Whenever this statement is re-invoked (as in the "adults" and "drivers" queries), the database will merely pass different parameters to the already compiled statement, streamlining execution. In fact, on most databases, the second two queries will execute anywhere from 2-5 times faster than the first (based on my own micro-benchmarking).
Incidentally, this is an area where we can gain a significant performance advantage over ActiveRecord, which doesn't utilize prepared statements at all. Given the fact that we want our ORM to be as performant as possible, this is good news. :-)
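Pulling these threads together, a stripped-down sketch of such a chainable Query builder might look like the following (hypothetical code; the real ActiveObjects Query class supports far more clauses):

public class Query {
    private String whereClause;
    private Object[] whereParams = new Object[0];
    private int limit = -1;

    private Query() {}

    public static Query select() {
        return new Query();
    }

    public Query where(String clause, Object... params) {
        this.whereClause = clause;
        this.whereParams = params;
        return this;    // returning 'this' is what makes the chaining work
    }

    public Query limit(int limit) {
        this.limit = limit;
        return this;
    }

    // The EntityManager would read these back when assembling the SQL and
    // binding the prepared statement parameters.
    public String getWhereClause() { return whereClause; }
    public Object[] getWhereParams() { return whereParams; }
    public int getLimit() { return limit; }
}

Each setter returning this is what lets Query.select().where("age > ?", 21).limit(10) read almost like the named-parameter call we were missing.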
Implementing the Active Record Pattern
Martin Fowler defines the Active Record pattern as:
An object that wraps a row in a database table or view, encapsulates the database access, and adds domain logic on that data.
Even though Fowler wasn't the first to posit this pattern, his definition is considered to be among the best. In fact, it was this definition which directly led to the creation of the ActiveRecord framework within Ruby on Rails. Active Record is considered to be a fairly solid design pattern to use in ORM construction. Granted, it does have issues, but for most scenarios it's pretty useful. Since our goal is to create a useful and simple ORM for 90% of use-cases, Active Record is probably a good design pattern to follow. But at the moment, our informal spec only handles the first part of the definition: database access encapsulation. We need to have some way of adding domain logic to our entities.
This actually poses a bigger problem than it would seem at first glance. In our ORM, entities are represented by interfaces. Thus, they cannot contain any method definitions; hence no domain logic. We could go back to the idea of classes, but Java doesn't support dynamic proxies for classes, only interfaces. Of course, there are always hacks like CGLIB, but I've always considered bytecode modification to be a dubious pursuit at best. So, we can't change our entities to classes, and we can't put domain logic in our interface-entities. In short, we're stuck.
The solution is to factor the domain logic out into a separate class from the entity. This probably makes sense already to most OOD purists since it separates the model logic from the database access itself. Unfortunately, to the Active Record purists, this is probably going to look like heresy. To continue with our tradition of "designing in code", we can add domain logic to the Company entity to auto-generate the ticker symbol from the company name:
@Implementation(CompanyImpl.class)
public interface Company extends Entity {
    // ...
}

public class CompanyImpl {
    private Company company;

    public CompanyImpl(Company company) {
        this.company = company;
    }

    public void setName(String name) {
        company.setName(name);

        name = name.trim();
        if (name.length() > 4) {
            company.setTickerSymbol(name.substring(0, 4).toUpperCase());
        } else {
            company.setTickerSymbol(name.toUpperCase());
        }
    }
}
As you can see, we can use an annotation to specify an "implementation class" which can contain all of the domain logic for our entity. In that class, we will of course need some way of calling back to the original entity to set data and so on. Thus, in the implementation class constructor, we'll accept the corresponding entity instance and save it as part of the implementor's state. We don't really need the implementation classes to extend any particular superclass, since they aren't actually inheriting any functionality.
Once we have the defined implementation instance, we can reflectively interrogate it for matching method signatures and redirect method calls to the corresponding method implementations at will. If there is no matching signature, we'll just execute the database operation as per normal and move on.
In the case of Company, the only method with any implementation is the setName(String) method. This is because we only need to set the ticker symbol when the name changes. In the implementation above, we just set the ticker symbol to the first four characters of the name, converted to upper-case.
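The redirection step itself could look roughly like this (assumed helper and method names, not the actual ActiveObjects source):

import java.lang.reflect.Method;

public class ImplementationRedirector {

    // Look for a matching signature on the defined implementation and delegate
    // to it; otherwise fall through to the normal database-peered behavior.
    public Object redirect(Object implementation, Method method, Object[] args) throws Exception {
        try {
            Method target = implementation.getClass()
                    .getMethod(method.getName(), method.getParameterTypes());
            return target.invoke(implementation, args);
        } catch (NoSuchMethodException e) {
            return executeDatabaseOperation(method, args);
        }
    }

    // Stub standing in for the real persistence logic.
    private Object executeDatabaseOperation(Method method, Object[] args) {
        return null;
    }
}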
Astute observers will see an immediate problem with the provided example: it's recursive. If we're redirecting method calls on the entity with matching method signatures to the defined implementation, then our control flow as defined above will look something like this:
- Consumer code calls Company#setName(String)
- Invocation handler sees setName(String) method signature within CompanyImpl and passes it the method call, skipping the database-peered implementation
- Defined implementation calls company.setName(name)
- Invocation handler sees setName(String) method signature within CompanyImpl and passes it the method call, skipping the database-peered implementation
- ...
Minor issues we hath.
The obvious way to avoid this issue is to make sure that any method invocations coming from the defined implementation are never fed back to the implementation itself. This breaks the infinite recursion and still allows the defined implementations to actually access the entity safely. The trick is in actually detecting the source of the method call.
Fortunately, Java does provide a mechanism - albeit obfuscated and little-known - which allows developers to get access to the call stack dynamically. Actually, it defines two mechanisms for such access, but one would require starting the application in debug mode, which is something we can't mandate of a framework's users. The preferred mechanism is to take advantage of the way Java constructs exceptions.
In creating a generic RuntimeException, Java actually copies out the metadata for the current stack trace and sticks it in the exception. This is so that printStackTrace() has something to print. Java even provides an API (Throwable#getStackTrace():StackTraceElement[]) to access this stack trace within the exception dynamically (and without string parsing). We can use this to check for the defined implementation one step up on the stack. If we find that it initiated the method call, we'll skip the re-invocation of the defined implementation and actually execute the method call normally. Thus, any calls to an entity from its defined implementation will skip the implementation logic, avoiding recursion.
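A minimal sketch of that check, as a helper the invocation handler might call before redirecting (illustrative only, not the actual ActiveObjects code):

// Returns true if the defined implementation class appears on the current call
// stack, meaning the entity method was invoked from within the domain logic and
// should be executed normally rather than redirected again.  For simplicity this
// scans the whole stack; checking only the frame immediately above the handler
// is closer to the "one step up" behavior described above.
private boolean calledFromImplementation(Class<?> implementationClass) {
    // Creating the exception (without throwing it) captures the current stack trace.
    StackTraceElement[] trace = new Exception().getStackTrace();

    for (StackTraceElement frame : trace) {
        if (frame.getClassName().equals(implementationClass.getName())) {
            return true;
        }
    }
    return false;
}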
It Lives!
The product of all of this brainstorming and prototyping has already been created. What I've been describing in the code samples provided is not just theoretical prototype code, but actual working code which can be run against the ActiveObjects ORM.
ActiveObjects is designed from the ground up to be as absolutely simple as possible. Just about all that is required of you (the API user) is to create the interface mappings and place the appropriate JDBC driver in the classpath. All of the details of the database access layer are completely handled by ActiveObjects. You can even enable connection pooling simply by placing a pooling library in the classpath (DBCP, C3P0 and Proxool are currently supported). ActiveObjects allows you to work with the database as naturally as you would work with a standard object-oriented class hierarchy, avoiding all of the complications associated with multi-database deployment and SQL.
ActiveObjects can even generate the database schema for you, just based on your entity interface definitions. Alternatively (and a more common scenario), if the schema already exists in some form, but you've added (or removed) a field, ActiveObjects will generate the appropriate DDL to migrate the existing schema to the new version. This means that, as an object-oriented developer, you can think about the objects themselves, and never give consideration to the underlying database or how they are stored.
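For example, assuming the migrate(…) method exposed by current ActiveObjects builds (check the API docs for your version), generating or migrating the schema is essentially a one-liner:

EntityManager manager = new EntityManager("jdbc:mysql://localhost/test", "user", "password");

// Creates the backing tables if they don't exist, or issues the DDL needed to
// bring an existing schema in line with the entity interfaces.
manager.migrate(Company.class, Person.class);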
Currently, ActiveObjects fully supports the following databases:
- MySQL
- Derby
- HSQLDB
- PostgreSQL
- Oracle (migrations don't work due to a bug in the JDBC driver; raw schema generation is available, however)
- MS SQL Server (JTDS and Microsoft drivers)
Running your code against any one of these databases is as simple as placing the driver JAR in your classpath and changing the JDBC URI passed to the EntityManager constructor. There's no XML, no properties files to deal with, no configuration cruft whatsoever.
ActiveObjects even supports table name pluralization (if you really want it). All you have to do is specify a different TableNameConverter:
EntityManager manager = new EntityManager("jdbc:mysql://localhost/test", "user", "password");
manager.setTableNameConverter(new PluralizedNameConverter());

// ...
Conclusion
ActiveObjects is a powerful, efficient and highly-intuitive Java ORM. Its syntax is flexible and easy to use, and its lack of extensive configuration requirements means it can cut out dozens of XML files and hundreds of lines of code. By utilizing certain syntax-streamlining features in the Java language, ActiveObjects is able to easily accommodate almost any schema requirements without bulking up your code or introducing complexity.