High Performance JPA with GlassFish and Coherence - Part 3

By Markus Eisele · Mar. 07, 11

In this third part of my four-part series, I'll explain strategy number two for using Coherence with EclipseLink and GlassFish. This is all about using Coherence as a second-level (L2) cache with EclipseLink.

General approach

This approach applies the Coherence data grid to JPA applications that rely on database-hosted data that cannot be entirely pre-loaded into a Coherence cache. Some reasons why pre-loading might not be possible include extremely complex queries that exceed the feature set of Coherence filters, third-party database updates that create stale caches, reliance on native SQL queries, stored procedures or triggers, and so on. This is not only an option for local L2 caches: with additional Coherence instances configured on different nodes, you also get a cluster-wide JPA L2 cache.

Details

As with many caches, this is a read-mostly optimization. Primary key queries attempt to get entities from Coherence first and, if unsuccessful, query the database, updating Coherence with the query results. Non-primary-key queries are executed against the database, and the results are checked against Coherence to avoid object construction costs for already cached entities. Newly queried entities are put into Coherence. Write operations update the database and, if the transaction commits successfully, the updated entities are put into Coherence. This approach is called "Grid Cache" in the Coherence documentation.
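To make the read path more concrete, here is a small conceptual sketch of the lookup order just described for a primary-key read. It is an illustration only, not EclipseLink's actual implementation; the cache name and the queryDatabase helper are assumptions for the example — EclipseLink does all of this for you behind em.find().

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

// Conceptual sketch of the "Grid Cache" read path described above --
// you never write this code yourself, EclipseLink handles it internally.
public class GridCacheReadSketch {

    Employee findById(Object pk) {
        NamedCache cache = CacheFactory.getCache("Employee"); // L2 cache, keyed by PK
        Employee cached = (Employee) cache.get(pk);
        if (cached != null) {
            return cached; // cache hit: no SQL is issued
        }
        Employee fromDb = queryDatabase(pk); // the SELECT EclipseLink would execute
        if (fromDb != null) {
            cache.put(pk, fromDb); // update Coherence with the query result
        }
        return fromDb;
    }

    // hypothetical placeholder for the actual database read
    Employee queryDatabase(Object pk) {
        return null;
    }
}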

Move it into practice

Start with the previous blog post and prepare your environment, if you have not already done so. There is a single thing you need to change: go back to GlassFish 3.0.1 / EclipseLink 2.0.1 for this scenario, as there is a problem with the CacheKey.getKey() method. EclipseLink 2.0.1 returns a Vector, while 2.2.0 simply returns an Object. Seeing that the new Oracle GlassFish Server 3.1 has support for ActiveCache, I expect this to be fixed with the Coherence 3.7 release. But until then, you have to stick with the old GlassFish or EclipseLink.

Anyway, let's create a new web project with your favorite IDE (e.g. GridCacheExample). Add the required libraries (coherence.jar, toplink-grid.jar, and eclipselink.jar). Now let's create our entity class and add the extra @CacheInterceptor annotation to it:

...
import oracle.eclipselink.coherence.integrated.cache.CoherenceInterceptor;
import org.eclipse.persistence.annotations.CacheInterceptor;
...

@Entity
@CacheInterceptor(value = CoherenceInterceptor.class)
public class Employee implements Serializable {

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE)
    private int id;

    private String firstName;
    private String lastName;

    // getters and setters omitted
}

Don't forget the @GeneratedValue(strategy = GenerationType.SEQUENCE) on the id field (shown above), as this is needed in contrast to the last example. After this is done, you have to add the Coherence cache configuration to your WEB-INF/classes folder. You can start from the tutorial (Example 2), but be careful: there is a typo in it, a duplicate </backing-map-scheme> tag.
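If you prefer not to copy the tutorial's file, a minimal cache configuration along the following lines should be enough for this example. The scheme and service names are my own choices for the sketch, not anything mandated by EclipseLink or Coherence:

<?xml version="1.0"?>
<cache-config>
    <caching-scheme-mapping>
        <cache-mapping>
            <!-- one cache per entity (e.g. "Employee"); the wildcard covers them all -->
            <cache-name>*</cache-name>
            <scheme-name>eclipselink-distributed</scheme-name>
        </cache-mapping>
    </caching-scheme-mapping>
    <caching-schemes>
        <distributed-scheme>
            <scheme-name>eclipselink-distributed</scheme-name>
            <service-name>EclipseLinkJPA</service-name>
            <backing-map-scheme>
                <local-scheme/>
            </backing-map-scheme>
            <autostart>true</autostart>
        </distributed-scheme>
    </caching-schemes>
</cache-config>

Configure your persistence.xml as you would for a normal JPA-based application: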

<persistence-unit name="GridCacheExamplePU" transaction-type="JTA">
    <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
    <jta-data-source>jdbc/coherence</jta-data-source>
    <properties>
        <property name="eclipselink.ddl-generation" value="drop-and-create-tables"/>
        <property name="eclipselink.logging.level" value="FINE"/>
    </properties>
</persistence-unit>

That's basically it. Now you can test your new L2 cache. A simple servlet should do the trick:

public class InsertServletPart3 extends HttpServlet {

    @PersistenceUnit(unitName = "GridCacheExamplePU")
    EntityManagerFactory emf;

    @Resource
    UserTransaction tx;

    ...

    // inside the request handling method, using programmatic JTA transaction demarcation:
    EntityManager em = emf.createEntityManager();
    tx.begin();
    // some loop magic
    Employee employee = new Employee();
    employee.setFirstName("Markus");
    employee.setLastName("Eisele");
    em.persist(employee);
    // some loop magic end
    tx.commit();
    em.close();

If you watch the log, you can see something like this:

FEIN: INSERT INTO EMPLOYEE (LASTNAME, FIRSTNAME) VALUES (?, ?)
 bind => [Eisele, Markus]
...
FEIN: Coherence(Employee)::Put: 1 value: net.eisele.coherence.entities.Employee[ id=1 ]
...

This basically tells you that the actual database insert is carried out by EclipseLink, as you are used to. After that, you can see that the Employee object is put into the Coherence cache named Employee, with the primary key as its key.
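If you want to convince yourself of what actually ended up in the grid, you can also peek into the named cache directly. A small sketch, assuming the defaults used above (the cache is named after the entity); since the exact key format depends on the EclipseLink version (see the CacheKey.getKey() remark earlier), the sketch simply iterates over whatever is there:

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

// Verification sketch: inspect the Coherence cache that EclipseLink populated.
// Must run against the same Coherence configuration/cluster as the application.
NamedCache employees = CacheFactory.getCache("Employee"); // cache name = entity name
System.out.println("cached entries: " + employees.size());
for (Object key : employees.keySet()) {
    System.out.println(key + " -> " + employees.get(key));
}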

If you now issue a query against the database

em.createQuery("select e from Employee e where e.lastName = :lastName").setParameter("lastName", "Eisele").getResultList()

you see the following:

FEIN: SELECT ID, LASTNAME, FIRSTNAME FROM EMPLOYEE WHERE (LASTNAME = ?)
 bind => [Eisele]
FEIN: Coherence(Employee)::Get: 1 result: net.eisele.coherence.entities.Employee[ id=1 ]
FEIN: Coherence(Employee)::Put: 1 value: net.eisele.coherence.entities.Employee[ id=1 ]
...

This tells you that the query itself is issued against the database, but the results are checked against Coherence to avoid object construction costs for already cached entities. Newly queried entities are put into Coherence. If you issue a simple PK query:

em.find(Employee.class, 1);

the output changes to:

FEIN: Coherence(Employee)::Get: 1 result: net.eisele.coherence.entities.Employee[ id=1 ]

and you don't see any db query at all. That's it :) Your cache works! Thanks for reading. Stay tuned for the next part!

Further Reading

OTN How-To: Using Coherence as a Shared L2 Cache

Integration Guide for Oracle TopLink with Coherence Grid 11g Release 1 (11.1.1)

From http://blog.eisele.net/2011/03/high-performance-jpa-with-glassfish-and.html
