
Eager Optimization Is The Enemy

This article takes a stand against eager optimization, and explains why it should be left by the wayside. Which side are you on?

· Java Zone


I am officially declaring war on the eager optimization crowd. You may be part of it and not even know it. Let’s take this comment from a DZone article I posted a while back:

“But if you already know something is slow and how to write it correctly then that is not premature optimisation, that is smart coding. “

See, this guy thinks he isn’t part of the eager optimization crowd. Because he doesn’t do it, unless he knows that a certain method or algorithm is “slow” and needs to be written performantly. Then he’ll write it performantly from the off.

This is eager optimization, dammit.

This is what results in codebases with outrageously complicated code that can be replaced with a for-loop. Because you “know better”. I’m here to tell you, you don’t.

Measure Everything

Let’s go back to the start. When you’re writing a piece of code, whatever it is, you will have performance concerns. Perhaps it’s part of a batch job that runs overnight, in which case it can probably go as slow as you like. At the other extreme, it could be a dependency of a super low latency system. Or maybe it’s returning a response to a user on a webpage.

These are three very different requirements. The important thing is that you can attach some sort of number to each of them:

  • Total Batch needs to complete in under 7 hours

  • Super Low Latency system needs a response in under 1ms

  • Webpage needs a response in under 300ms

The beauty of this is that you have an actual number to target. This means you can empirically prove if you’re doing the job or not. When people start waffling to me about needing a complex algorithm for a piece of work to go fast enough, I can simply say one thing:

Prove it.

There are genuinely times when you need your outrageously complex algorithm from a textbook. But these are rare. Normally a bunch of for-loops and/or hashmaps will do the job perfectly well. The inbuilt Java algorithms for searching aren’t bad.
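To make the point concrete, here's a sketch of what "a bunch of for-loops will do the job" looks like in practice — the list of latency values is a made-up example, and the built-in `Arrays.binarySearch` covers the sorted case for free:

```java
import java.util.Arrays;
import java.util.List;

public class SimpleSearch {
    // A plain linear scan: O(n), obvious, and fast enough for most
    // in-memory lists. No clever index structure required.
    static Integer firstOver(List<Integer> values, int threshold) {
        for (Integer v : values) {
            if (v > threshold) {
                return v;
            }
        }
        return null; // nothing exceeded the threshold
    }

    public static void main(String[] args) {
        List<Integer> latencies = Arrays.asList(12, 48, 7, 310, 22);
        System.out.println(firstOver(latencies, 100)); // 310

        // For sorted data, the built-in search is already there:
        int[] sorted = {7, 12, 22, 48, 310};
        System.out.println(Arrays.binarySearch(sorted, 48)); // index 3
    }
}
```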

I’m a massive proponent of TDD. TDD dictates implementing the simplest, most stupid thing that gets the test to pass and then going from there. This is absolutely how you should approach any potential complexity in your code.
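As a minimal sketch of that discipline (the shopping-basket domain here is entirely hypothetical): the test comes first, and the implementation is the dumbest thing that makes it pass — a plain loop, no streams, no clever accumulation.

```java
import java.util.Arrays;
import java.util.List;

public class TddSketch {
    // The simplest thing that passes the test below: a plain loop,
    // no streams, no parallelism, no premature cleverness.
    static int total(List<Integer> pennies) {
        int sum = 0;
        for (int p : pennies) {
            sum += p;
        }
        return sum;
    }

    public static void main(String[] args) {
        // The test came first; total() above is the dumbest thing that passes it.
        if (total(Arrays.asList(199, 250, 99)) != 548) {
            throw new AssertionError("expected 548");
        }
        System.out.println("test passed");
    }
}
```

If a profiler later shows this loop is actually your bottleneck, the test is still there to keep any replacement honest.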

First, write the simplest, cleanest thing you can to make it functional. Cleanliness is the most important goal in code (along with ensuring it works). Clean, well-tested code results in long-term maintainable systems. Use the built-in Java libraries where possible, as the consumer of your code will understand these at a minimum, and as I mentioned before, they’re usually pretty good and certainly well tested.

Now, test the performance. Actually measure it against your target. In my experience, the majority of the time you’ll hit your target and can move on with your life.
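A crude sketch of what "measure it against your target" can look like — `work()` is a stand-in for whatever you actually wrote, and the 300ms budget is the webpage example from above. (For anything serious, a proper benchmarking harness such as JMH handles warmup and statistics far better than hand-rolled timing.)

```java
public class MeasureFirst {
    // Stand-in for the code under test: the simple version you wrote first.
    static long work() {
        long sum = 0;
        for (int i = 0; i < 1_000_000; i++) {
            sum += i;
        }
        return sum;
    }

    public static void main(String[] args) {
        long targetMillis = 300; // e.g. the webpage budget from above

        // Warm up so the JIT has compiled the hot path before we time it.
        for (int i = 0; i < 10; i++) {
            work();
        }

        long start = System.nanoTime();
        work();
        long elapsedMillis = (System.nanoTime() - start) / 1_000_000;
        System.out.println("took " + elapsedMillis + "ms against a "
                + targetMillis + "ms target");
        // Only if this misses the target do you earn the right to optimize.
    }
}
```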

Now I can almost guarantee the outrage in the comments. And whilst I look forward to reading them, I encourage the rest of you to give this a go. Don’t optimize anything at the beginning of your coding. Test. Improve.

The other benefit is that your tests will be even nicer. Using simple implementations helps you generate a clean API for your consumers (whether that’s yourself or a third party). If you do need to improve your code (and that’s totally OK!), you’ve got a full suite of working tests to conform to, guiding you through the process.

The biggest culprit for me is search. I’m not talking about massive datasets (which would probably live in an appropriate datastore), but in-memory data: things like lists and maps held within the application. The number of times I’ve seen people implement their own overly complex caching algorithms on top of these, hard to understand and often broken, makes me cry. Usually they can be ripped out and replaced with a for-loop with no detriment (and often with a performance gain).
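And when you genuinely do need a small cache, the JDK already has one hiding in plain sight: `LinkedHashMap` with access ordering and `removeEldestEntry` gives you LRU eviction in a handful of lines. A minimal sketch (the two-entry capacity is just for illustration):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class TinyLruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    TinyLruCache(int maxEntries) {
        // accessOrder=true: iteration order becomes least-recently-used first.
        super(16, 0.75f, true);
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // The JDK calls this after every put; returning true evicts the eldest.
        return size() > maxEntries;
    }

    public static void main(String[] args) {
        TinyLruCache<String, Integer> cache = new TinyLruCache<>(2);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.get("a");    // touch "a" so "b" becomes the eldest
        cache.put("c", 3); // evicts "b"
        System.out.println(cache.keySet()); // [a, c]
    }
}
```

That's the whole "caching algorithm" — standard library, well understood, and trivially testable.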

If you really do need a performance boost, never forget there are people out there smarter than you who have implemented this stuff and had it tested by the community. It’ll be easier to understand and a lot more likely to work. Google Collections are your friend. You don’t need to reinvent the wheel in an attempt to be more performant. No one will thank you for it.

