Staking the Singleton Pattern
Ah, things looked so bright when it seemed like all we had to do was stop using Singletons and apps would pop out of the hopper gleaming, ready to go, with nary a defect. Butterflies and puppy dogs would escort their makers everywhere they went, and perfect harmony would at last be achieved between Customer and Architect… Instead, we live in a dystopian reality where simple state and scope management is hopelessly screwed up, and no solution is forthcoming.

First off, as is so often the case, the archangels who showed up promising salvation ended up cutting deals with the devil that they later, surprise, didn't seem to want to talk about. Exhibit A: Spring. Now, you would think Spring fixed the horrible prevalence of Singletons, right? Well, in fact, for the first three or four releases, Singleton was the default Spring scope, and most Spring projects were stuffed to the gills with Singletons; it just seemed like they weren't there, because they were beans. Later, when OpenSessionInView turned into one of the great food fights of the last decade, Spring decided we didn't need conversation scope and urged us instead to see anything that wasn't either request or singleton as a workflow. [This was when I decided they'd lost their minds and switched to Seam.]
But here's the shocking reality:
- Singleton is not the horrible goblin it's been made out to be.
- Badly managed scoping has been the most deleterious defect of the EE platform in the last decade (along with the stupid memory manager that saves you no work but costs you another seat on your team to figure out why your servers fall over every couple of days).
- Oh, and #2 is not fixable.
- The newest vanguards (EE 6 with CDI, and Typesafe with their simpletonian Ruby-like 'framework' Play) leave us either using Singletons or producing code that is hideously unmaintainable. In Play, for chrissakes, all controllers are static… ? :O
Of course, as I have mentioned before, iOS has no dependency injection, but I would argue that the code you produce with it is easier to maintain and more testable. We just had a story go through our system where we had to move a component, one we had working in two other forms, onto a new form. It had a tree-based selector in it. It turned into an insane, nightmarish clambake. You can guess why, but better I just tell you: because of course the tree bean was conversation scoped, and it was creating a new conversation each time rather than joining the one the controller made. What provides this information to the unknowing? A cogent error message? Some simple detector that realizes this bean is just pissing in the wind, doing no one a lick of good? Nope, none of those things. Just rooting around and figuring out, little by little, how the whole thing is wired up. And therein lies the rub: for the convenience of making properties magically appear inside our classes (a stupid, witless pursuit to begin with), we are willing to take on untestable, undocumentable, crazy-assed nonsense.
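To make the failure mode concrete, here's a toy sketch in plain Java. This is emphatically not CDI (all the names are mine); it just models why a conversation-scoped bean that begins its own conversation, instead of joining the controller's, silently fails to share any state:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Toy model of conversation scoping: each conversation id owns a map of
// bean instances. A bean that "joins" uses the controller's active
// conversation; a buggy bean "begins" its own and gets an empty scope.
class ConversationContext {
    private final Map<String, Map<String, Object>> store = new HashMap<>();
    private String activeId;
    private int counter = 0;

    // Start a brand-new conversation and return its id.
    String beginNew() {
        String id = "conv-" + (++counter);
        store.put(id, new HashMap<>());
        return id;
    }

    // The controller's move: make a conversation the active one.
    void activate(String id) {
        activeId = id;
    }

    // Look up (or create) a bean. joinExisting=false models the bug:
    // the bean spins up its own conversation every time it is resolved.
    @SuppressWarnings("unchecked")
    <T> T get(String beanName, Supplier<T> factory, boolean joinExisting) {
        String id = joinExisting ? activeId : beginNew();
        return (T) store.get(id).computeIfAbsent(beanName, k -> factory.get());
    }

    public static void main(String[] args) {
        ConversationContext ctx = new ConversationContext();
        ctx.activate(ctx.beginNew());          // the controller's conversation
        Object a = ctx.get("tree", Object::new, true);
        Object b = ctx.get("tree", Object::new, true);
        System.out.println("joining twice, same bean?   " + (a == b));   // true
        Object c = ctx.get("tree", Object::new, false);
        Object d = ctx.get("tree", Object::new, false);
        System.out.println("beginning twice, same bean? " + (c == d));   // false
    }
}
```

Note that nothing in the buggy path errors out; the stray bean just quietly gets a fresh instance on every lookup, which is exactly the debugging experience described above.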
I had reason to reconsider some of this tonight. Here's why: I had a component that was getting information from a web page. Then I found out there was actually a web service that would return a bunch more information, meaning I might not have to call the service as often. Problem: that also means I would have to hold onto state. Hence, instead of just dialing up the component that does the call and gets me a value, I have to have access to this component from wherever. [This is in iOS, btw.] So I start thinking about making this component a Singleton. What are the problems again? Oh yeah, I am going to scatter [Classname instance] references throughout the code, which is bad, as opposed to just writing @Inject Service service and having the bean server magically ram the Singleton into place each time another component that needs it is made. Is that really better? Is it worth all the indeterminacy? I don't think so.
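For the record, the "scattered references" option is nothing more than this: a plain holder-idiom Singleton, sketched here in Java (the names and the lookup method are mine, stand-ins for the real component):

```java
// A plain, thread-safe Singleton via the initialization-on-demand holder
// idiom: the JVM guarantees Holder is initialized exactly once, lazily.
// Callers write Service.instance() wherever they need it -- the
// "scattered references" that injection is supposed to save us from.
final class Service {
    private Service() {}

    private static final class Holder {
        static final Service INSTANCE = new Service();
    }

    static Service instance() {
        return Holder.INSTANCE;
    }

    // Stand-in for the real remote lookup.
    String lookup(String key) {
        return "value-for-" + key;
    }

    public static void main(String[] args) {
        // Every call site does this: no container, no magic, fully greppable.
        System.out.println(Service.instance().lookup("price"));
        System.out.println(Service.instance() == Service.instance()); // true
    }
}
```

The call sites are verbose, yes, but every one of them is explicit and searchable, which is rather the point of the comparison.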
But then I started thinking: other possibilities are perhaps not completely crazy. For example, suppose the component just internalized the notion of being responsible for dehydrating and rehydrating its own state. It would go something like this:
- Service is instantiated.
- You ask for a value.
- Service doesn't have it, so it calls the remote service to get it, but gets back more than just what is needed.
- Service writes out the other information to some local file, or puts it into a cache.
- Service hands back what is needed.
- Next time the service is instantiated, it finds that file, loads it, and subsequent requests are handled from it.
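The steps above can be sketched in a few lines. This is a Java sketch under stated assumptions (the remote call is a placeholder that returns canned data, and the on-disk format is a Properties file; a real iOS version would obviously use something like a plist instead). The point is only the shape: the component, not some container scope, owns the lifecycle of its own state.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

// Self-hydrating service: checks its cache, calls remote only on a miss,
// writes the surplus data to disk, and reloads it on the next instantiation.
class HydratingService {
    private final Path cacheFile;
    private final Map<String, String> cache = new HashMap<>();

    HydratingService(Path cacheFile) {
        this.cacheFile = cacheFile;
        rehydrate(); // step 6: a fresh instance reloads previously saved state
    }

    String value(String key) throws IOException {
        String hit = cache.get(key);       // step 2: you ask for a value
        if (hit != null) return hit;       // served locally, no remote call
        cache.putAll(callRemote(key));     // step 3: fetch, get extra data too
        dehydrate();                       // step 4: write the surplus out
        return cache.get(key);             // step 5: hand back what was asked
    }

    // Placeholder for the real web service; returns more than was asked for.
    private Map<String, String> callRemote(String key) {
        Map<String, String> result = new HashMap<>();
        result.put(key, "remote-" + key);
        result.put(key + "-extra", "remote-" + key + "-extra");
        return result;
    }

    private void dehydrate() throws IOException {
        Properties p = new Properties();
        cache.forEach(p::setProperty);
        try (var out = Files.newOutputStream(cacheFile)) {
            p.store(out, "dehydrated state");
        }
    }

    private void rehydrate() {
        if (!Files.exists(cacheFile)) return;
        Properties p = new Properties();
        try (var in = Files.newInputStream(cacheFile)) {
            p.load(in);
            p.stringPropertyNames().forEach(k -> cache.put(k, p.getProperty(k)));
        } catch (IOException ignored) {
            // cold start: treat an unreadable cache as empty
        }
    }

    public static void main(String[] args) throws Exception {
        Path f = Files.createTempFile("hydrate", ".properties");
        Files.delete(f);
        HydratingService first = new HydratingService(f);
        System.out.println(first.value("name"));        // goes remote
        HydratingService second = new HydratingService(f);
        System.out.println(second.value("name-extra")); // served from the file
        Files.deleteIfExists(f);
    }
}
```

No container, no scope annotations; the second instance answers the "extra" request without ever touching the network, because the first instance left the state on disk.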
Sound crazy? It sounds a lot less crazy than the huge ball of mess we have now. Fifteen years on, and we still don't really have the ability to do serious integration tests. Did CDI solve that? Of course not. When Rick formed CDI Source, he and Andy and I discussed doing unit tests that could really handle scoping properly, and we decided it was probably not going to happen.
Long, long ago, John Dvorak, the PC Mag columnist, wrote a column in which he said Windows was doomed because of the Registry: because it was a huge ball of goo that anyone could reach in and mess with, the OS could never be stable. I totally agreed with that, in part because it agrees with my fundamental philosophical doctrine: the Doctrine of the Scrambled Egg. Which is the harder problem: landing a man on the moon, or unscrambling an egg? Well, CDI and Java testing are way past the Humpty-Dumpty-had-a-great-fall stage, and, btw, it's kind of fitting that the other major Java spawn of the last few years, Android, is, I believe, also Humpty-bound.
Does that mean Java is doomed? No. But other solutions are needed. What open sores has given us is a world in which the highest good is whatever means the programmer will do a little bit less typing. It's kind of what happened to the food industry for a long time. Wikipedia says there are 7,500 varieties of apple on earth. Most Americans in the '60s and '70s grew up with exactly one, the Red Delicious (not even a middling one), because it was tough, and shippers liked it because they lost less product. Until Alice Waters, there was one lettuce (the most useless one: Iceberg). Then what happened was that people rediscovered the lost stuff. I think that'll happen in development too.
Folks, in the amount of time Java EE 6 has been final, Apple has pushed out a ton of releases with craploads of new features that people are actually using. I still see clear signs that there's hardly anyone using EE 6. Not being able to refresh the blood supply is one of the quickest routes to necrosis, Humpty…