Getting Started: Infinispan as Remote Cache Cluster
This guide walks you through configuring and running Infinispan as a remote, distributed cache cluster. The documentation for running Infinispan in embedded mode is straightforward, but there is no complete guide for running it in client/server (remote) mode. This article helps bridge that gap.
Infinispan offers four modes of operation, which determine how and where the data is stored:
- Local, where entries are stored on the local node only, regardless of whether a cluster has formed. In this mode Infinispan typically operates as a local cache.
- Invalidation, where all entries are stored only in a cache store (such as a database) and are invalidated from all nodes; when a node needs an entry, it loads it from the cache store. In this mode Infinispan operates as a distributed cache backed by a canonical data store such as a database.
- Replication, where all entries are replicated to all nodes. In this mode Infinispan typically operates as a data grid or a temporary data store, but does not offer increased heap space.
- Distribution, where entries are distributed to a subset of the nodes only. In this mode Infinispan typically operates as a data grid providing increased heap space.
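To make distribution mode concrete, here is a minimal, self-contained sketch of the idea that each entry is owned by only numOwners of the cluster's nodes. The hashing below is purely illustrative; Infinispan uses its own consistent-hash implementation:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Illustrative only: picks numOwners consecutive nodes starting from a
// hash-derived position, so each key lives on a subset of the cluster.
public class DistributionSketch {

    public static List<String> ownersFor(String key, List<String> nodes, int numOwners) {
        int start = Math.floorMod(key.hashCode(), nodes.size());
        List<String> owners = new ArrayList<>();
        for (int i = 0; i < numOwners; i++) {
            owners.add(nodes.get((start + i) % nodes.size()));
        }
        return owners;
    }

    public static void main(String[] args) {
        List<String> nodes = Arrays.asList("node1", "node2", "node3", "node4");
        // With 4 nodes and numOwners=2, "car" is stored on only 2 of them
        System.out.println(ownersFor("car", nodes, 2)); // prints [node1, node2]
    }
}
```

With numOwners copies per entry, adding nodes grows the total heap available to the grid, which is what distinguishes distribution from replication.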
Infinispan offers two access patterns, both of which are available in any runtime:
- Embedded into your application code
- As a remote server accessed by a client (REST, Memcached, or Hot Rod)
1. Download the full Infinispan distribution. I will use version 5.1.5.
2. Configure Infinispan to run in distributed mode. Create infinispan-distributed.xml:
<infinispan>
   <global>
      <globalJmxStatistics enabled="true"/>
      <!-- the cluster name is arbitrary; JGroups config comes from jgroups.xml -->
      <transport clusterName="infinispan-cluster">
         <properties>
            <property name="configurationFile" value="jgroups.xml"/>
         </properties>
      </transport>
   </global>
   <default>
      <jmxStatistics enabled="true"/>
      <!-- distributed mode, asynchronous by default; each entry kept on 2 nodes -->
      <clustering mode="distribution">
         <async/>
         <hash numOwners="2"/>
      </clustering>
   </default>
   <!-- the cache used by the client below, with synchronous replication of writes -->
   <namedCache name="myCache">
      <clustering mode="distribution">
         <sync/>
         <hash numOwners="2"/>
      </clustering>
   </namedCache>
</infinispan>
We will use JGroups to set up cluster communication. Copy etc/jgroups-tcp.xml as jgroups.xml.
3. Place infinispan-distributed.xml and jgroups.xml in the bin folder. Start two Infinispan instances on the same or different machines.
Starting an Infinispan server is pretty easy: just use the startServer script from the distribution you unzipped in step 1.
bin\startServer.bat --help    // Print all available options
bin\startServer.bat -r hotrod -c infinispan-distributed.xml -p 11222
bin\startServer.bat -r hotrod -c infinispan-distributed.xml -p 11223
The two server instances will discover each other and start talking via JGroups.
4. Create a simple remote Hot Rod Java client.
import java.net.URL;
import java.util.Map;
import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.ServerStatistics;

public class Quickstart {

    public static void main(String[] args) {
        // load hotrod-client.properties from the classpath
        URL resource = Thread.currentThread().getContextClassLoader()
                .getResource("hotrod-client.properties");
        RemoteCacheManager cacheContainer = new RemoteCacheManager(resource, true);

        // obtain a handle to the remote cache
        RemoteCache<String, String> cache = cacheContainer.getCache("myCache");

        // now add something to the cache and make sure it is there
        cache.put("car", "ferrari");
        if (cache.get("car").equals("ferrari")) {
            System.out.println("Found");
        } else {
            System.out.println("Not found!");
        }

        // remove the data
        cache.remove("car");

        // print cache statistics
        ServerStatistics stats = cache.stats();
        for (Map.Entry<String, String> stat : stats.getStatsMap().entrySet()) {
            System.out.println(stat.getKey() + " : " + stat.getValue());
        }

        // print cache properties
        System.out.println(cacheContainer.getProperties());
        cacheContainer.stop();
    }
}
5. Define hotrod-client.properties.
infinispan.client.hotrod.server_list = localhost:11222;localhost:11223;
## below is connection pooling config
See the RemoteCacheManager Javadoc for all available properties.
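The server_list value is an ordinary Java properties entry holding semicolon-separated host:port pairs. A self-contained sketch of parsing it (the parsing code here is illustrative, not Infinispan's actual client code):

```java
import java.util.Properties;

// Illustrative: split the Hot Rod server_list property into host:port addresses.
public class ServerListParser {

    public static String[] parse(Properties props) {
        String list = props.getProperty("infinispan.client.hotrod.server_list", "");
        return list.split(";");
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("infinispan.client.hotrod.server_list",
                "localhost:11222;localhost:11223");
        for (String server : parse(props)) {
            System.out.println(server); // localhost:11222 then localhost:11223
        }
    }
}
```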
6. Run Quickstart.java. You will see something like this on the console:
Jul 22, 2012 9:40:39 PM org.infinispan.client.hotrod.impl.protocol.Codec10 readNewTopologyAndHash
INFO: ISPN004006: localhost/127.0.0.1:11223 sent new topology view (id=3) containing 2 addresses: [/127.0.0.1:11223, /127.0.0.1:11222]
Found
hits : 3
currentNumberOfEntries : 1
totalBytesRead : 332
timeSinceStart : 1281
totalNumberOfEntries : 8
totalBytesWritten : 926
removeMisses : 0
removeHits : 0
retrievals : 3
stores : 8
misses : 0
{, , , , , , , , ;localhost:11223;, , }

As you will notice, the cache server returns the cluster topology when the connection is established. You can start more Infinispan instances and watch the topology view change almost immediately.
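The statistics map printed above can also be consumed programmatically. A small self-contained sketch (the hitRatio helper is hypothetical, not part of the Infinispan client API):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative: derive a hit ratio from the "hits" and "retrievals" entries
// of the stats map returned by ServerStatistics.getStatsMap().
public class StatsHelper {

    public static double hitRatio(Map<String, String> stats) {
        double hits = Double.parseDouble(stats.getOrDefault("hits", "0"));
        double retrievals = Double.parseDouble(stats.getOrDefault("retrievals", "0"));
        return retrievals == 0 ? 0.0 : hits / retrievals;
    }

    public static void main(String[] args) {
        // values taken from the console output above
        Map<String, String> stats = new HashMap<>();
        stats.put("hits", "3");
        stats.put("retrievals", "3");
        System.out.println(hitRatio(stats)); // prints 1.0
    }
}
```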
That's it!
Some useful links:
http://docs.jboss.org/infinispan/5.1/configdocs/
https://github.com/infinispan/infinispan-quickstart
https://github.com/infinispan/infinispan/tree/master/client/hotrod-client
https://docs.jboss.org/author/display/ISPN/Using+Hot+Rod+Server
https://docs.jboss.org/author/display/ISPN/Java+Hot+Rod+client
Published at DZone with permission of Nishant Chandra, DZone MVB.