Introducing Caching for Java Applications (Part 1)
Applied correctly, caching can address the challenges of developing high-performance Java applications.
An increasing volume of critical data presents new challenges when developing high-performance Java applications. Caching can address these challenges when applied correctly. This series of two articles explores ways to improve performance, concurrency and scalability in Java applications through caching.
Notion of Cache
A cache is an area of local memory that holds a copy of frequently accessed data that is otherwise expensive to get or compute. Examples of such data include a result of a query to a database, a disk file or a report. A typical interface for a Java cache provides access to the data using a unique key:
public interface Cache {

    Object get(final Object key);

    Object put(final Object key, final Object value);
}
A cache works as follows: an application requests data from the cache using a key. If the key is not found, the application retrieves the data from a slow data source and puts it into the cache. Subsequent requests for that key are serviced from the cache.
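A minimal sketch of this read path, using the Cache interface above, might look like the following; the ProductService class and its queryDatabase method are hypothetical placeholders for a real application and its slow data source:

public final class ProductService {

    private final Cache cache; // the Cache interface shown above

    public ProductService(final Cache cache) {
        this.cache = cache;
    }

    public Object getProduct(final Object key) {
        Object value = cache.get(key);  // 1. Try the cache first
        if (value == null) {            // 2. Miss: go to the slow data source
            value = queryDatabase(key);
            cache.put(key, value);      // 3. Populate the cache for subsequent requests
        }
        return value;
    }

    private Object queryDatabase(final Object key) {
        // Hypothetical stand-in for an expensive SQL query or remote call
        return "result-for-" + key;
    }
}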
Improving Performance with Caching
Caching may provide a significant performance improvement for a Java application, often by orders of magnitude. The improvement comes from serving hard-to-get data from the local memory of the application.
For example, consider an application that displays a 50-line system status report on a web page after the user logs into the system. For each line it sends a complex SQL query to a database. Executing each query and fetching the results over the network takes 100 milliseconds on average, so collecting all the data for the page takes about 5 seconds. Getting the same results from a cache takes about 5 microseconds on a modern 2 GHz CPU, a performance improvement of roughly 1,000,000 times for this particular scenario.
Requirement for Temporal and Spatial Locality
A cache uses part of the application's memory, so its size has to be limited. A special algorithm, called an eviction policy, removes (evicts) data that has a low chance of being accessed again. To benefit from caching, access to the data should display temporal and spatial locality: the same data should be accessed often (temporal locality), and the probability of accessing nearby elements should be high (spatial locality). For data that satisfies this requirement, increasing the cache size increases the hit/miss ratio.

The data from the example at the beginning of the article satisfies the requirement of temporal and spatial locality: users log into the system around the same time, and the number of report items accessed in rapid succession is small.

Data that does not satisfy this requirement leads to faster eviction of cache elements, which lowers the number of cache hits and increases cache maintenance overhead.
Cache Performance Characteristics
The main performance characteristic of a cache is its hit/miss ratio, calculated as the number of cache hits divided by the number of cache misses, using hit and miss counters accumulated over a period of time. A high hit/miss ratio means that the cache performs well. A low hit/miss ratio means that the cache is applied to data that should not be cached, or that the cache is too small to capture the temporal locality of data access.
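As an illustrative sketch of how such counters might be maintained, the decorator below wraps the Cache interface shown earlier; the CountingCache class is hypothetical, not part of any particular product:

import java.util.concurrent.atomic.AtomicLong;

// Illustrative decorator that counts hits and misses for any Cache implementation
public final class CountingCache implements Cache {

    private final Cache delegate;
    private final AtomicLong hits = new AtomicLong();
    private final AtomicLong misses = new AtomicLong();

    public CountingCache(final Cache delegate) {
        this.delegate = delegate;
    }

    public Object get(final Object key) {
        final Object value = delegate.get(key);
        if (value == null) {
            misses.incrementAndGet();
        } else {
            hits.incrementAndGet();
        }
        return value;
    }

    public Object put(final Object key, final Object value) {
        return delegate.put(key, value);
    }

    public double hitMissRatio() {
        final long missCount = misses.get();
        // Avoid division by zero before the first miss is recorded
        return missCount == 0 ? Double.POSITIVE_INFINITY : (double) hits.get() / missCount;
    }
}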
Cache Eviction Policy
A cache eviction policy is an algorithm that decides which existing element to remove from a cache when a new element is added. The eviction policy ensures that the size of the cache does not exceed a maximum size. Least Recently Used (LRU) is one of the most popular eviction policies, and it earned its popularity by being the best at capturing temporal and spatial locality of data access.

A minor disadvantage of LRU is its sensitivity to a full scan: accessing a large amount of data that does not satisfy the requirement of temporal locality evicts the frequently accessed elements the cache has accumulated. This disadvantage is minor because LRU recovers from full scans quickly.

A typical implementation of a cache with the LRU eviction policy consists of a map and a linked list. The map stores the cached elements; the linked list keeps track of how recently they were used. When a cache element is accessed or updated, it is removed from the list and added to the top; new elements are added to the top of the list as well. If the cache grows beyond its maximum size, an element is removed from the bottom of the list and from the map. This way the least recently used elements are evicted first.
Simple LRU Cache in Ten Lines
Java 1.4 added the class java.util.LinkedHashMap, whose main goal is to provide key, value and entry iteration consistent with the order of insertion. In addition, LinkedHashMap includes a provision for implementing a simple LRU cache: when constructed in access order, its removeEldestEntry method can be overridden to evict the eldest entry. Below is an implementation of a simple LRU cache in about ten lines of code:
import java.util.LinkedHashMap;
import java.util.Map;

public final class SimpleLRUCache<K, V> extends LinkedHashMap<K, V> {

    private final int maxSize;

    public SimpleLRUCache(final int maxSize) {
        // The third argument, true, orders entries by access,
        // so the eldest entry is the least recently used one
        super(1, 0.75f, true);
        this.maxSize = maxSize;
    }

    @Override
    protected boolean removeEldestEntry(final Map.Entry<K, V> eldest) {
        // Evict the eldest entry once the cache outgrows its maximum size
        return size() > maxSize;
    }
}
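As a quick usage sketch (key and value names are illustrative), the following snippet exercises the cache with a maximum size of three and shows that the least recently used key is the one evicted:

final SimpleLRUCache<String, String> cache = new SimpleLRUCache<String, String>(3);
cache.put("a", "1");
cache.put("b", "2");
cache.put("c", "3");
cache.get("a");      // touching "a" makes it the most recently used entry
cache.put("d", "4"); // the cache exceeds its maximum size and evicts "b"
System.out.println(cache.keySet()); // prints [c, a, d]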
While this simple cache does evict the least recently used elements, it is missing important features that a cache needs in order to be usable in a real application. See the section Caching Products for Java for a detailed discussion.
Common Cache Use Scenarios
Common cache use scenarios include an application cache, a second level (L2) cache and a hybrid cache.
Application Cache
An application cache is a cache that the application accesses directly. The application benefits from the cache by keeping its most frequently accessed data in memory.
The following communication diagram illustrates using an application cache:
Level-2 Cache
One of the major use scenarios for a cache is a level-2 (L2) cache. An L2 cache provides caching services to an object-relational mapping (ORM) framework or a data mapping (DM) framework such as Hibernate or iBatis, respectively. An L2 cache hides the complexity of the caching logic from the application.

An L2 cache improves the performance of an ORM or DM framework by reducing unnecessary trips to the database.
The following communication diagram illustrates using an L2 cache:
The application does not access the cache directly in this scenario. Instead, it uses the high-level interface provided by the ORM or DM framework. The framework uses the cache for caching its internal data structures, such as mapped objects and database query results. If the cached data is not available, the framework retrieves it from the database and puts it into the cache.
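As an illustration of how little application code this scenario requires, the sketch below sets the standard Hibernate configuration properties that switch on second-level and query caching. This is a hedged example: configuring a concrete cache provider is provider-specific and omitted here.

import java.util.Properties;

// Sketch: standard Hibernate property names that enable the L2 cache.
// A concrete cache provider must also be configured (omitted here).
public final class SecondLevelCacheSettings {

    public static Properties secondLevelCacheProperties() {
        final Properties properties = new Properties();
        properties.setProperty("hibernate.cache.use_second_level_cache", "true");
        properties.setProperty("hibernate.cache.use_query_cache", "true");
        return properties;
    }
}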
Hybrid Cache
A hybrid cache is a cache that retrieves data from an external data source when the data is not present in the cache. An application using a hybrid cache benefits from simplified programming of cache access.

This scenario differs from the application cache and the second-level cache, where the application or the data access framework is responsible for populating the cache on a cache miss.
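The sketch below illustrates the idea; the HybridCache class and its DataSource callback are hypothetical, not a particular product's API:

import java.util.HashMap;
import java.util.Map;

// Illustrative read-through (hybrid) cache: on a miss, the cache itself
// calls back into a data source, so the application only ever calls get()
public final class HybridCache {

    // Hypothetical callback the cache uses to load missing values
    public interface DataSource {
        Object load(Object key);
    }

    private final Map<Object, Object> store = new HashMap<Object, Object>();
    private final DataSource dataSource;

    public HybridCache(final DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public synchronized Object get(final Object key) {
        Object value = store.get(key);
        if (value == null) {
            value = dataSource.load(key); // the cache populates itself on a miss
            store.put(key, value);
        }
        return value;
    }
}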
The following communication diagram illustrates using a hybrid cache:
Caching Anti-Patterns
Caching provides such a great performance improvement that it is often applied without limit. The Cache Them All anti-pattern is characterized by caching all data, without regard to temporal or spatial locality of access. Cache Them All degrades application performance instead of improving it: the application pays the overhead of maintaining the cache without benefiting from a reduced cost of access to frequently used data.

To avoid this pitfall, cache only data that is expensive to get and that shows temporal and spatial locality of access.
Caching Products for Java
While it takes only ten lines of code to write your own cache for Java, developing a usable cache is a challenging task. The simple ten-line cache is missing many features required for use in a real application, such as concurrent access, configuration, eviction to disk and statistics.
Fortunately, several projects have already developed working caching APIs, so there is no need to reinvent the wheel.
A commercial caching API may be a safe bet if your organization wants to be sure that its requests for help are addressed promptly and that the caching API is developed professionally.

Often an organization cannot afford commercial software and is prepared to take some risk. In such a situation, a free caching API may be the solution.

The following sections outline some of the commercial and free products that support local caching.
Commercial Products
Free Products
About the Author
Slava Imeshev is a president and CEO of Cacheonix Systems. You can reach Slava at simeshev@cacheonix.com.