Spring Cache: Profiling

Check out this interesting finding from the folks at Plumbr, makers of a popular profiling tool.

At Plumbr, we're constantly working on how software can be made faster and more reliable. The promise we've made to our customers is to avoid 100 million failures and save 100 million hours for their users per year.

While we're making things better for engineers around the world, we're also improving our software. Recently, I invested time in tuning the performance of one particular part of the Plumbr codebase. It is quite a tight loop, reading data from Kafka, performing several computations, and then writing data to a file. After several rounds of optimization, an unexpected code path started appearing on the profiler output.

It was unexpected because that code path was already considered 'optimized'. Originally, the method queried the database for some relatively stable data. As that was clearly suboptimal, the method's invocation was wrapped in a cache using Spring Cache. It was therefore a surprise to see it contributing a substantial portion of the latency in the tight loop I was optimizing. This led me to investigate further, and I'm sharing the findings here.

Let us take a look at a minimal example: https://github.com/iNikem/spring-cache-jmh.
We have a trivial method there:

@Cacheable("time") // cache name is illustrative
public long annotationBased(String dummy) {
  return System.currentTimeMillis();
}
It does some work, which here is just fetching the current time, and it is annotated with Spring's @Cacheable annotation. This makes Spring wrap the method in a proxy and cache the result of the method invocation, using the method's input parameters as the cache key. A very straightforward and convenient optimization: you see a slow method, slap an annotation on it, configure your cache provider (ehcache.xml in my case), and you can pat yourself on the back.
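For reference, such a provider configuration can be as small as a single cache entry. Here is a minimal Ehcache 2.x-style ehcache.xml sketch; the cache name and sizing values are illustrative assumptions, not taken from the benchmark repository:

```xml
<ehcache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:noNamespaceSchemaLocation="ehcache.xsd">
  <!-- Backing cache for the @Cacheable method; the name must match
       the one referenced by the annotation -->
  <cache name="time"
         maxEntriesLocalHeap="1000"
         eternal="false"
         timeToLiveSeconds="600"/>
</ehcache>
```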

Compare it with code achieving the same end result, but managing the cache manually:
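A hand-rolled equivalent might look roughly like the following. This is only a sketch: the class and method names and the plain ConcurrentHashMap backing store are my own illustrative choices, not code from the benchmark repository, which wires up its cache provider directly and so carries even more setup boilerplate.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// A sketch of manual caching: explicit lookup, miss handling, and store.
public class ManualCacheExample {

    private final Map<String, Long> cache = new ConcurrentHashMap<>();

    public long manual(String dummy) {
        // Cache lookup
        Long cached = cache.get(dummy);
        if (cached != null) {
            return cached;
        }
        // Cache miss: do the actual work...
        long result = System.currentTimeMillis();
        // ...and remember the result for subsequent calls
        cache.put(dummy, result);
        return result;
    }

    public static void main(String[] args) {
        ManualCacheExample example = new ManualCacheExample();
        long first = example.manual("key");
        long second = example.manual("key");
        // The second call hits the cache and returns the same value
        System.out.println(first == second); // prints true
    }
}
```

Even in this stripped-down form, the bookkeeping (lookup, null check, put) dwarfs the single line of useful work.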

There is much more going on: the actual useful work is almost lost in the accidental complexity of infrastructure-related boilerplate. Why should anyone prefer manual work to Spring's magic? The answer lies in this JMH benchmark and its results:

Benchmark                       Mode  Cnt    Score    Error  Units
CacheBenchmark.annotationBased  avgt    5  245.960 ± 27.749  ns/op
CacheBenchmark.manual           avgt    5   16.696 ±  0.496  ns/op
CacheBenchmark.nocache          avgt    5   44.586 ±  9.091  ns/op

As you can see, the custom solution for this specific problem runs about 15 times faster than the general-purpose one. The aim of this investigation is by no means to accuse Spring of being a slow framework! The takeaway is that Spring pre-emptively undertakes some heavy lifting to support a broader range of general-purpose use cases. Note that we are still talking about a few hundred nanoseconds, which is a negligible difference in most scenarios. But in the rare cases when your actual profiling data shows that the Spring Cache abstraction adds too much overhead, don't be afraid to get your hands dirty and roll out a custom solution tailored to your needs.

A farewell note about the 'nocache' row above: in this particular case, the actual work our method does is so small that adding caching actually slows it down. A perfect, albeit synthetic, example of premature optimization: don't optimize anything until actual measurements prove the need for it. And then don't forget to measure again after you optimize. Cheers!

Cross-posted from: https://medium.com/@nikem/is-spring-cache-abstraction-fast-enough-for-you-a6a5ea1542a9

For more interesting insights, you can follow me on Twitter: @iNikem

Published at DZone with permission of Nikita Salnikov-Tarnovski, DZone MVB. See the original article here.

