
A Way Around Cache Eviction


A Zone Leader talks about leveraging an existing set of cached results without having to create another cache.


For decades, developers and designers have tried to optimize the performance of the systems and services they are providing:

  • Operating systems in search of the fastest start-up time.
  • Browsers trying to out-perform their competition.
  • APIs doing their job as quickly as possible.

Aside from throwing more processing power and memory at the solution, or optimizing the source code design, some form of cache is often introduced to give an additional boost in speed. The more caching is implemented, however, the higher the probability that those caches will need to be cleared, or evicted.

You may also like: Cache in Java With LRU Eviction Policy

In this article, I will talk about one approach that will provide that much-needed performance boost without having to worry about the cache becoming out of date.

The Font Scenario

For my example, consider yourself working in the publishing industry, creating a client that gives end-users the ability to author publishable content. To do their job, those customers rely on fonts that are controlled by the application.

Based upon the particular project, the list of available fonts will vary. In this case, let's assume that there is some cost associated with each font that is utilized.

While the list of fonts often remains static once a project is set up, roughly 20% of projects will see a change to their font list, and that change can happen at any point in the project lifecycle.

Knowing that the payload related to the Font information is essentially static, this data was an obvious choice for caching in the API.
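As a minimal sketch of that idea in plain Java (standing in for the Spring Boot/Ehcache setup used in the article; the FontService class and loadFontFromDatabase method below are illustrative names, not the article's actual code), a static payload can be memoized so the expensive lookup runs only once per ID:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class FontService {
    // Hypothetical stand-in for the Ehcache-backed "fontById" cache
    private final Map<Long, String> fontById = new ConcurrentHashMap<>();
    private int databaseHits = 0;

    public String getFontById(Long id) {
        // computeIfAbsent loads the payload once; later calls are served from the map
        return fontById.computeIfAbsent(id, this::loadFontFromDatabase);
    }

    private String loadFontFromDatabase(Long id) {
        databaseHits++; // stands in for the slow (~500 ms) repository call
        return "font-payload-" + id;
    }

    public int getDatabaseHits() {
        return databaseHits;
    }
}
```

Because the payload rarely changes, repeated requests for the same ID never touch the database again, which is exactly what makes it a good caching candidate.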

Russell's Inspiration

While discussing this situation with my colleague Russell ("I Want My Code to Be Boring"), we got to the scenario in which the cache for a given project's set of fonts would become outdated. The caching strategy we were employing in Spring Boot was Ehcache, and cache eviction was the approach I was planning to utilize.

By the time I talked with Russell, cache entries were already in place for each font payload, keyed by the unique ID associated with each font. I was also caching the actual binary font files that needed to be distributed to each client utilizing the application. In both cases, the impact on response time was quite impressive: a reduction from 500 milliseconds down to 10 milliseconds.

In early tests, the fonts-by-project API call saw a reduction from 1,700 milliseconds down to under 20 milliseconds, since the required data was served directly from a cache. The catch: that cache needed to be evicted the second the configuration for the project was updated.
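The eviction requirement can be sketched as follows, using plain Java maps in place of Ehcache (the ProjectFontCache class and its method names are illustrative, not the article's actual code). The moment a project's font configuration changes, the cached per-project result is stale and must be removed:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ProjectFontCache {
    // Cached "fonts by project" results, keyed by project ID
    private final Map<Long, List<Long>> fontsByProject = new ConcurrentHashMap<>();

    public void cacheProjectFonts(Long projectId, List<Long> fontIds) {
        fontsByProject.put(projectId, List.copyOf(fontIds));
    }

    public List<Long> getProjectFonts(Long projectId) {
        return fontsByProject.get(projectId); // null signals a cache miss
    }

    // Must run the second the project's font configuration changes;
    // otherwise a stale font list is served to the client
    public void evictOnProjectUpdate(Long projectId) {
        fontsByProject.remove(projectId);
    }
}
```

Wiring that eviction call into every code path that can change the configuration is exactly the maintenance burden the rest of the article sets out to avoid.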

Russell posed the idea: "What would happen if we did not cache the fonts-by-project API, but had that API utilize the existing caches instead?"

Utilizing an Existing Cache to Avoid Cache Management

The approach being employed is illustrated in the simplified source code example below:

Java

List<Font> fonts = new ArrayList<>();
List<Long> fontIds = fontRepository.getFontIdByProjectId(projectId);

if (CollectionUtils.isNotEmpty(fontIds)) {
  Cache cache = cacheManager.getCache("fontById");

  for (Long id : fontIds) {
    Cache.ValueWrapper valueWrapper = cache.get(id);

    if (valueWrapper != null) {
      log.debug("Using cache id={} in fontById cache", id);
      fonts.add((Font) valueWrapper.get());
    } else {
      Font font = getFontById(projectId, id); // Get font not in the cache
      log.debug("Could not locate id={} in fontById cache, adding {}", id, font);
      cache.put(id, font);  // Add the font into the existing cache
      fonts.add(font);
    }
  }
}

return fonts;
When the fonts-by-project method is invoked, a list of current font IDs associated with the project is returned as a List<Long>. Each ID is then used to check the "fontById" cache for a match. If a match is found, the cached Font is simply added to the List<Font> that will be returned to the client.

If a match is not found, the Font is retrieved using the same method that populates that cache and the resulting data is added to the cache — ready for future use. Of course, that newly retrieved Font data is added to the List<Font> that will be returned to the client, as well.

In taking this approach, the response time is not in the under-20-millisecond range but in the under-35-millisecond range, still a major improvement over the original 1,700 milliseconds (a 98% reduction).

Additionally, the following benefits are realized:

  • The need to create an additional cache is avoided.
  • There is no need to introduce an entity listener and/or custom code to evict (or clear) the cache.
  • The approach is very supportable — avoiding the use of aspect-oriented programming (AOP).

Conclusion

In my "Three Things We Do In IT That Don't Match Reality" article, I talked about how a change in your thought pattern can lead to improvements in application response time. The approach noted there was to look ahead and retrieve all the necessary information before the program logic starts. Then, using some form of collection, the data needed later in the processing cycle is already available, without the need to further query the database.
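That look-ahead pattern can be sketched as follows (the class and method names below are illustrative; findAllFonts stands in for a single up-front repository query): fetch everything into a collection first, then process without any per-iteration queries.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class LookAheadExample {
    // One up-front query retrieves every row the processing loop will need
    static Map<Long, String> findAllFonts() {
        Map<Long, String> all = new HashMap<>();
        all.put(1L, "Helvetica");
        all.put(2L, "Garamond");
        all.put(3L, "Futura");
        return all;
    }

    static String describe(List<Long> neededIds) {
        Map<Long, String> fonts = findAllFonts(); // single query before the loop
        StringBuilder sb = new StringBuilder();
        for (Long id : neededIds) {
            sb.append(fonts.get(id)).append(';'); // no per-iteration database call
        }
        return sb.toString();
    }
}
```

The trade-off is paying one larger query up front in exchange for a loop that never touches the database.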

The approach in this article seems like the exact opposite: asking for data on the fly once a list of IDs is known. Yet it also boosts performance, since in reality the majority of this information will already be in the cache, shrinking the response time from one to two seconds down to ten to twenty milliseconds.

In both cases, two approaches that seem vastly different end up leading to performance gains in the application.

I continue to appreciate my conversations with Russell, as his approach provides positive results for the API while keeping support costs for the solution at a minimum.

Have a great day!


Further Reading

Spring: Serving Multiple Requests With the Singleton Bean

Spring Cache and Integration Testing

Quick Start: How to Use Spring Cache on Redis

Topics:
api design, spring boot, caching, ehcache, java, performance, microservices

Opinions expressed by DZone contributors are their own.
