How to Fail With Drools or Any Other Tool/Framework/Library
What I like most at conferences are reports of someone’s failure to do or implement something, because they are the best sources of learning. And How to Fail with Drools (in Norwegian) by C. Dannevig of Know IT at JavaZone 2011 is one of them. I’d like to summarize what they learned and extend it to the introduction of a tool, framework, or library in general, based on my own painful experiences.
They decided to switch from their homegrown rules implementation to the Drools rule management system (a.k.a. JBoss Rules) v.4 to centralize all the rules code in one place, to get something simpler and easier to understand, and to improve time to market by not requiring a redeploy when a rule is added. However, Drools turned out to be more of a burden than a help, for the following reasons:
- Too little time and too few resources were provided for learning Drools, which has a rather steep learning curve because it is based on declarative programming and rule matching (some background) – a paradigm quite alien to typical imperative/OO programmers.
- Drools’ poor support for development and operations – an IDE only for Eclipse, difficult debugging, no stack trace upon failure
- Their domain model was not well aligned with Drools and required a lot of effort to make it usable by the rules
- The users were used to and satisfied with the existing system and wanted to keep the parts facing them, such as the rules-management UI, instead of Drools’ own UI – thus decreasing the value of the software (while increasing the overall complexity, we could add)
In the end they removed Drools and refactored their code to get all the rules into one place using only plain old Java – which works pretty well for them.
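The post doesn’t show what their plain-Java solution looked like, but the “all rules in one place” approach can be sketched roughly as below. This is only an illustrative sketch, not their actual code; the names (`Rule`, `RuleEngine`, `Order`) are my own invention:

```java
import java.util.ArrayList;
import java.util.List;

// A hypothetical sketch of "rules as plain old Java": each rule is a
// condition + action pair, and all rules are registered in one place.
interface Rule<T> {
    boolean matches(T fact);
    void apply(T fact);
}

class Order {
    double total;
    double discount;
    Order(double total) { this.total = total; }
}

class RuleEngine<T> {
    private final List<Rule<T>> rules = new ArrayList<>();

    RuleEngine<T> register(Rule<T> rule) {
        rules.add(rule);
        return this;
    }

    // Fire the action of every rule whose condition matches the fact.
    void fireAll(T fact) {
        for (Rule<T> rule : rules) {
            if (rule.matches(fact)) {
                rule.apply(fact);
            }
        }
    }
}

public class PlainJavaRules {
    public static void main(String[] args) {
        // All business rules live here, in one well-known place.
        RuleEngine<Order> engine = new RuleEngine<Order>()
            .register(new Rule<Order>() {
                public boolean matches(Order o) { return o.total > 1000; }
                public void apply(Order o)     { o.discount = 0.10; }
            });

        Order order = new Order(1500);
        engine.fireAll(order);
        System.out.println(order.discount); // prints 0.1
    }
}
```

You lose Drools’ pattern matching and rule deployment without redeploy, of course – but you also lose its learning curve, its tooling requirements, and its opacity in production, which for this team was clearly a net win.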
Lessons Learned from Introducing Tools, Frameworks, and Libraries
While the Know IT team encountered some issues specific to Drools, their experience has a lot in common with many other cases where a tool, framework, or library is introduced to solve some tasks and problems but turns out to be more of a problem itself. What can we learn from these failures to deliver the expected benefits for the expected cost? (Actually, such initiatives will often be labeled as successes even though the benefits are smaller and the cost (often considerably) larger than planned.)
Always think twice – or three or four times – before introducing a [heavyweight] tool or framework, especially if it requires a new and radically different way of thinking or working. Couldn’t you solve it in a simpler way with plain old Java/Groovy/WhateverYouGot? Using an out-of-the-box solution sounds very *easy* – especially at sales meetings – but it is in fact usually pretty *complex*. And as Rich Hickey recently so well explained in his talk, we should strive to minimize complexity instead of prioritizing the relative and misleading easiness (in the sense of “easy to approach, to understand, to use”). I’m certain that many of us have experienced how an “I’ll do it all for you, be happy and relax” tool turns into a major obstacle and source of pain – at least I have, with WebSphere ESB 6.0. (It required heavy tooling that only a few mastered, it was in reality a version 1.0, a lot of the promised functionality had to be implemented manually anyway, etc.)
We should never forget that introducing a new library, framework, or tool has its cost, which we usually tend to underestimate. The cost has multiple dimensions:
- Complexity – complexity is the single worst thing in IT projects; are you sure that increasing it will pay off? Complexity of infrastructure, of internal structure, …
- Competence – the learning curve (which proved to be pretty steep for Drools), how many people know the tool, and the availability of experts who can help in case of trouble
- Development – does the tool somehow hinder development, testing, or debugging, e.g. by making it slower or more difficult, or by requiring special tooling (especially if that tooling isn’t available)? (Think of J2EE vs. Spring.)
- Operations – what’s the impact on the observability of the application in production (high for Drools, which doesn’t provide stack traces for failures), on troubleshooting, performance, the deployment process, …?
- Defects and limitations – every tool has them, even a seemingly mature one (they were already on version 4 of Drools); you usually run into the limitations quite late, as it’s difficult if not impossible to discover them up front, and it’s hard to estimate how flexible the authors have made the tool (it’s especially bad if the solution is closed source)
- Longevity – will the tool be around in 1, 5, 10 years? What about backwards compatibility and support for migration to newer versions? (The company I worked for decided to stop supporting WebSphere ESB in its infrastructure after one year and we had to migrate away from it – what a waste of resources!)
- Dependencies – what dependencies does the tool have, and do they conflict with anything else in the application or its environment? And how will that look in 10 years?
And I’m sure I missed some dimensions. So be aware that the actual cost of using something is likely a few times higher than your initial estimate.
Another lesson is that support for development is a key characteristic of any tool, framework, or library. Any slowdown it introduces must be multiplied by a large factor, because all those slowdowns, spread over the team and the lifetime of the project, add up to a lot. I have experienced that too many times – a framework that required a redeploy after every other change, an application that required us to manually click through a wizard to reach the page we wanted to test, slow execution of tests in an IDE.
The last thing to bear in mind is whether the tool and the design behind it are well aligned with your business domain and processes (including the development process itself). If there are mismatches, you will pay for them – just think of OOP versus RDBMS (don’t you know somebody who starts to shudder upon hearing “ORM”?).
Be aware that everything has its cost, make sure to account for it, and beware our tendency to be overly optimistic when estimating both benefits and costs (perhaps hire a seasoned pessimist or appoint a devil’s advocate). Always consider first using the tools you already have, however boring that might sound. I don’t mean that we should never introduce new stuff – I just want to make you more cautious about it. I’ve recently followed a few discussions on how “enterprise” applications get unnecessarily, and to their own harm, bloated with libraries and frameworks (e.g. Java isn’t fun anymore + comments, @jhannes on SOA), and I agree with them that we should be more careful and try to keep things simple. The cost dimensions above may hopefully help you expose the less obvious constituents of the cost of new tools.
PS: I hope the IBM architect who once forbade me to use XStream, thus forcing me to parse a (simple) XML document via the basic APIs, is amused by this post. I was too young and inexperienced then. (Not implying I’m not anymore.)