
Infinispan as Remote Cache Cluster


This guide will walk you through configuring and running Infinispan as a remote distributed cache cluster. 

Infinispan offers four modes of operation, which determine how and where the data is stored:

  • Local, where entries are stored on the local node only, regardless of whether a cluster has formed. In this mode Infinispan is typically operating as a local cache.

  • Invalidation, where all entries are stored only in a cache store (such as a database) and are invalidated from all nodes. When a node needs an entry, it loads it from the cache store. In this mode Infinispan is operating as a distributed cache, backed by a canonical data store such as a database.

  • Replication, where all entries are replicated to all nodes. In this mode Infinispan is typically operating as a data grid or a temporary data store, but doesn't offer an increased heap space.

  • Distribution, where entries are distributed to a subset of the nodes only. In this mode Infinispan is typically operating as a data grid providing an increased heap space.

Invalidation, Replication and Distribution can all use synchronous or asynchronous communication.
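For readers who embed Infinispan rather than run it as a server, the same modes can be selected programmatically through the fluent configuration API available in 5.1. The following is a minimal sketch (the class name ModeExamples is illustrative, and it only builds Configuration objects; attaching them to a clustered cache manager is a separate step):

import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;

public class ModeExamples {

 public static void main(String[] args) {

  // Replication: every node keeps a full copy of every entry, written synchronously
  Configuration replicated = new ConfigurationBuilder()
    .clustering().cacheMode(CacheMode.REPL_SYNC)
    .build();

  // Distribution: each entry is stored on numOwners nodes only, written asynchronously
  Configuration distributed = new ConfigurationBuilder()
    .clustering().cacheMode(CacheMode.DIST_ASYNC)
    .hash().numOwners(2)
    .build();

  System.out.println(replicated);
  System.out.println(distributed);
 }
}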

Infinispan offers two access patterns, both of which are available in any runtime:

  • Embedded into your application code (a short sketch of this pattern follows this list)

  • As a Remote server accessed by a client (REST, memcached or HotRod)
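To illustrate the embedded pattern before moving on to the remote one, a cache can be obtained directly in-process from an XML configuration. A minimal sketch, assuming infinispan-core is on the classpath and reusing the infinispan-distributed.xml file created in step 2 below:

import org.infinispan.Cache;
import org.infinispan.manager.DefaultCacheManager;
import org.infinispan.manager.EmbeddedCacheManager;

public class EmbeddedQuickstart {

 public static void main(String[] args) throws Exception {

  // boots an in-process Infinispan node from the XML configuration
  EmbeddedCacheManager manager = new DefaultCacheManager("infinispan-distributed.xml");
  Cache<String, String> cache = manager.getCache("myCache");

  cache.put("car", "ferrari");
  System.out.println(cache.get("car"));

  manager.stop();
 }
}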

In this article, we will configure an Infinispan server with a HotRod endpoint and access it via a Java HotRod client. One reason to use the HotRod protocol is that it provides automatic load balancing and failover.

1. Download the full distribution of Infinispan version 5.1.5.

2. Configure Infinispan to run in distributed mode. Create infinispan-distributed.xml.

<infinispan xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xmlns="urn:infinispan:config:5.1"
            xsi:schemaLocation="urn:infinispan:config:5.1 http://www.infinispan.org/schemas/infinispan-config-5.1.xsd">

   <global>
      <globalJmxStatistics enabled="true"/>
      <transport>
         <properties>
            <property name="configurationFile" value="jgroups.xml"/>
         </properties>
      </transport>
   </global>

   <default>
      <jmxStatistics enabled="true"/>
      <clustering mode="distribution">
         <async/>
         <hash numOwners="2"/>
      </clustering>
   </default>

   <namedCache name="myCache">
      <clustering mode="distribution">
         <sync/>
         <hash numOwners="2"/>
      </clustering>
   </namedCache>

</infinispan>

We will use JGroups to set up the cluster communication. Copy etc/jgroups-tcp.xml as jgroups.xml.
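The stack in jgroups-tcp.xml is usually sufficient when both instances run on the same machine. If the instances run on different machines and multicast discovery is not available, one common approach is to switch discovery to TCPPING and list the node addresses explicitly. A hedged excerpt of such a protocol entry (host1 and host2 are placeholder host names):

<TCPPING timeout="3000"
         initial_hosts="host1[7800],host2[7800]"
         port_range="1"
         num_initial_members="2"/>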

3. Place infinispan-distributed.xml and jgroups.xml in the bin folder. Start two Infinispan instances on the same or different machines.

Starting an Infinispan server is easy: from the unzipped distribution (step 1), use the startServer script.

bin\startServer.bat --help // Print all available options
bin\startServer.bat -r hotrod -c infinispan-distributed.xml -p 11222
bin\startServer.bat -r hotrod -c infinispan-distributed.xml -p 11223

The two server instances will start talking to each other via JGroups.

4. Create a simple Remote HotRod Java Client.

import java.net.URL;
import java.util.Map;

import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.ServerStatistics;

public class Quickstart {

 public static void main(String[] args) {

  // load hotrod-client.properties from the classpath
  URL resource = Thread.currentThread().getContextClassLoader()
    .getResource("hotrod-client.properties");
  RemoteCacheManager cacheContainer = new RemoteCacheManager(resource, true);

  //obtain a handle to the remote cache
  RemoteCache<String, String> cache = cacheContainer.getCache("myCache");

  //now add something to the cache and make sure it is there
  cache.put("car", "ferrari");
  if (cache.containsKey("car")) {
   System.out.println("Cache Hit!");
  } else {
   System.out.println("Cache Miss!");
  }

  //print the server-side statistics
  ServerStatistics stats = cache.stats();
  for (Map.Entry<String, String> entry : stats.getStatsMap().entrySet()) {
   System.out.println(entry.getKey() + " : " + entry.getValue());
  }

  //remove the data
  cache.remove("car");

  cacheContainer.stop();
 }
}

5. Define hotrod-client.properties.
infinispan.client.hotrod.server_list = localhost:11222;localhost:11223;
infinispan.client.hotrod.socket_timeout = 500
infinispan.client.hotrod.connect_timeout = 10

## below is connection pooling config
maxTotal = -1
maxIdle = -1
whenExhaustedAction = 1
testWhileIdle = true
minIdle = 1

See the RemoteCacheManager Javadoc for all available properties.
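As an alternative to the properties file, the same settings can be passed programmatically. A minimal sketch (the class name ProgrammaticClient and the sample key are illustrative):

import java.util.Properties;

import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;

public class ProgrammaticClient {

 public static void main(String[] args) {

  Properties props = new Properties();
  props.put("infinispan.client.hotrod.server_list", "localhost:11222;localhost:11223");
  props.put("infinispan.client.hotrod.socket_timeout", "500");

  // the second argument starts the manager immediately
  RemoteCacheManager cacheContainer = new RemoteCacheManager(props, true);
  RemoteCache<String, String> cache = cacheContainer.getCache("myCache");

  cache.put("framework", "infinispan");
  System.out.println(cache.get("framework"));

  cacheContainer.stop();
 }
}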

6. Run Quickstart.java. You will see something like this on the console:

Jul 22, 2012 9:40:39 PM org.infinispan.client.hotrod.impl.protocol.Codec10 
INFO: ISPN004006: localhost/ sent new topology view (id=3) 
containing 2 addresses: [/, /]

hits : 3
currentNumberOfEntries : 1
totalBytesRead : 332
timeSinceStart : 1281
totalNumberOfEntries : 8
totalBytesWritten : 926
removeMisses : 0
removeHits : 0
retrievals : 3
stores : 8
misses : 0
{whenExhaustedAction=1, maxIdle=-1, infinispan.client.hotrod.connect_timeout=10, 
maxActive=-1, testWhileIdle=true, minEvictableIdleTimeMillis=1800000, maxTotal=-1, 
minIdle=1, infinispan.client.hotrod.server_list=localhost:11222;localhost:11223;, 
timeBetweenEvictionRunsMillis=120000, infinispan.client.hotrod.socket_timeout=500

As you will notice, the cache server returns the cluster topology when the connection is established. You can start more Infinispan instances, and you should see the cluster topology change quickly.

That's it!



Published at DZone with permission of Nishant Chandra, DZone MVB.

