
A Way Around Cache Eviction

A Zone Leader talks about leveraging a set of cache results without having to create another.

By John Vester, DZone Core · Jan. 17, 20 · Tutorial

52.1K Views


Find your own way around!

For decades, developers and designers have tried to optimize the performance of the systems and services they are providing:

  • Operating systems in search of the fastest start-up time.
  • Browsers trying to out-perform their competition.
  • APIs doing their job as quickly as possible.

Aside from throwing more processing power and memory at the solution, or optimizing the source code design, some form of cache is often introduced to provide an additional boost in speed. The more caching is implemented, the higher the probability that those caches will need to be cleared, or evicted.

You may also like: Cache in Java With LRU Eviction Policy

In this article, I will talk about one approach that will provide that much-needed performance boost without having to worry about the cache becoming out of date.

The Font Scenario

For my example, consider yourself working in the publishing industry — creating a client that will give end-users the ability to author publishable content. For the customer to be able to do their job, they will rely on fonts that are controlled by the application.  

Based upon the particular project, the list of fonts available will vary. In this case, let's assume that there is some cost associated with each font that is utilized.  

While the list of fonts often remains static once a project is set, there is roughly a 20% chance that any given project will see a change in its font list, and that change can happen at any point in the project lifecycle.

Knowing that the payload of Font information is largely static, this data was an obvious candidate for caching in an API.
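As a framework-free illustration of why static data caches so well, consider the minimal sketch below (the class, loader, and payload format are hypothetical stand-ins, not the application's actual code): once a payload is loaded, every later request is served from memory, and because the data never changes, the cache never needs evicting.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

class StaticPayloadCache {
    static final Map<Long, String> cache = new ConcurrentHashMap<>();
    static final AtomicInteger loads = new AtomicInteger();

    // Stand-in for the expensive database/API fetch of an immutable font payload.
    static String loadPayload(long fontId) {
        loads.incrementAndGet();
        return "payload-for-" + fontId;
    }

    // Because the payload never changes, a cached entry is valid forever.
    static String getPayload(long fontId) {
        return cache.computeIfAbsent(fontId, StaticPayloadCache::loadPayload);
    }
}
```

The same load-once, serve-from-memory behavior is what Ehcache provides in the actual application, just with configurable sizing and expiry policies on top.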

Russell's Inspiration

While talking about this situation with my colleague Russell ("I Want My Code to Be Boring"), we came to the scenario in which the cache for a given project's set of fonts would become outdated. The caching strategy we were employing in Spring Boot was Ehcache, and cache eviction was the approach I was planning to use.

By the time I talked with Russell, cache entries were already in place for each font payload, keyed by the unique ID associated with each font. I was also caching the actual binary font files that needed to be distributed to each client using the application. In both cases, the impact on response time was impressive: a reduction from 500 milliseconds down to 10 milliseconds.

In early tests, the fonts-by-project API call saw a reduction from 1,700 milliseconds down to < 20 milliseconds, since the required data was served directly from a cache. That cache, however, would need to be evicted the second the project's configuration was updated.

Russell posed the idea: "What would happen if we did not cache the fonts-by-project API, but had that API utilize the existing caches instead?"

Utilizing An Existing Cache To Avoid Cache Management

The approach being employed is illustrated in the simplified source code example below:

Java

List<Font> fonts = new ArrayList<>();
List<Long> fontIds = fontRepository.getFontIdByProjectId(projectId);

if (CollectionUtils.isNotEmpty(fontIds)) {
    Cache cache = cacheManager.getCache("fontById");

    for (Long id : fontIds) {
        Cache.ValueWrapper valueWrapper = cache.get(id);

        if (valueWrapper != null) {
            log.debug("Using cache id={} in fontById cache", id);
            fonts.add((Font) valueWrapper.get());
        } else {
            Font font = getFontById(projectId, id); // Get font not in cache
            log.debug("Could not locate id={} in fontById cache, adding {}", id, font);
            cache.put(id, font); // Add font into existing cache
            fonts.add(font);
        }
    }
}

return fonts;
When the fonts-by-project method is invoked, the list of font IDs currently associated with the project is retrieved as a List<Long>. Each ID is then used to check the "fontById" cache for a match. If a match is found, the cached Font is simply added to the List<Font> that will be returned to the client.

If a match is not found, the Font is retrieved using the same method that populates that cache and the resulting data is added to the cache — ready for future use. Of course, that newly retrieved Font data is added to the List<Font> that will be returned to the client, as well.
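To make that lookup-or-populate flow concrete outside of Spring, here is a minimal, self-contained sketch of the same idea (Font, the cache map, and the loader below are hypothetical stand-ins for the application's actual types): the composite fonts-by-project result is assembled entirely from the per-font cache, with the backing store consulted only on a miss.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;

class FontCacheSketch {
    record Font(long id, String name) {}

    static final Map<Long, Font> fontByIdCache = new HashMap<>();
    static final AtomicInteger backingStoreCalls = new AtomicInteger();

    // Simulates the expensive per-font lookup that normally populates the cache.
    static Font getFontById(long id) {
        backingStoreCalls.incrementAndGet();
        return new Font(id, "font-" + id);
    }

    // The fonts-by-project call: reuse the per-font cache instead of a second cache.
    static List<Font> getFontsByProject(List<Long> fontIds) {
        List<Font> fonts = new ArrayList<>();
        for (Long id : fontIds) {
            Font font = fontByIdCache.get(id);
            if (font == null) {            // cache miss: load and backfill
                font = getFontById(id);
                fontByIdCache.put(id, font);
            }
            fonts.add(font);
        }
        return fonts;
    }
}
```

Because the composite list is rebuilt from the current IDs on every call, a change to a project's font list simply produces a different set of lookups; no cache ever has to be cleared.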

In taking this approach, the performance was not in the < 20-millisecond range, but in the < 35-millisecond range, which is still a major improvement over the original 1,700 milliseconds (a 98% improvement).

Additionally, the following benefits are realized:

  • The need to create an additional cache is avoided.
  • There is no need to introduce an entity listener and/or custom code to evict (or clear) the cache.
  • The approach is very supportable — avoiding the use of aspect-oriented programming (AOP).

Conclusion

In my "Three Things We Do In IT That Don't Match Reality" article, I talked about how a change in thought pattern can lead to improvements in application response time. The approach noted there was to look ahead and retrieve all of the necessary information before the program logic starts. Then, using some form of collection, the data needed later in the processing cycle is already available, without the need to query the database again.

This approach seems like the exact opposite: asking for data on the fly once a list of IDs is known. In this case, there is also a boost in performance, since the majority of this information will already be in the cache, shrinking the response time from one to two seconds down to ten to twenty milliseconds.

In both cases, two approaches that seem vastly different end up leading to performance gains in the application.

I continue to appreciate my conversations with Russell, as his approach provides positive results for the API while keeping support costs for the solution at a minimum.

Have a great day!


Further Reading

Spring: Serving Multiple Requests With the Singleton Bean

Spring Cache and Integration Testing

Quick Start: How to Use Spring Cache on Redis


Opinions expressed by DZone contributors are their own.
