Infinispan for the Power-user: Remoting
As a follow-up to an earlier article introducing Infinispan, published on DZone some months back, I'm starting a series of “Power User” articles – short, focused articles on specific sub-sections of Infinispan, for those who want to dig deeper into Infinispan internals. Each article picks out specific topics of interest to power users and discusses them at some length. The series assumes some prior knowledge of Infinispan: having read the introductory article, having downloaded Infinispan and played with some of the demos included in the distribution, and having worked through some of the interactive tutorials available on Infinispan's wiki.
Read the other parts in this series:
Part 1 - Remoting
Part 2 - Cache Modes
Part 3 - Event Notifications
Familiarity with Infinispan's primary Cache API is also assumed and is not covered here, as is familiarity with creating and starting cache nodes, both programmatically and declaratively.
For this first part, we will focus on remoting: how Infinispan nodes talk to each other.
As a data grid and a distributed cache, Infinispan nodes make use of IP-based networking to communicate with their neighbours. Specifically, Infinispan nodes issue RPCs – remote procedure calls – to neighbouring nodes, which are invoked on a target node, and a response is returned, in many ways similar to a local invocation. Infinispan's RPC framework makes use of instances of the ReplicableCommand interface, which are capable of being executed on any target node – identified by an Address – or broadcast to the entire grid.
This RPC framework – the ReplicableCommand implementations, the Address of a node, and a node's Response – form the heart of Infinispan's network layer, facilitated by the RpcManager, and underneath the RpcManager, the Transport.
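To illustrate the idea, here is a simplified, self-contained sketch of the command pattern that underpins this design. The class names deliberately mirror Infinispan's interfaces (ReplicableCommand, Address), but the code below is purely illustrative and is not Infinispan's actual implementation – in particular, the "remote" invocation here is just a local method call standing in for a network hop.

```java
import java.util.HashMap;
import java.util.Map;

// A command that can be shipped to any node and executed there,
// in the spirit of Infinispan's ReplicableCommand interface.
interface ReplicableCommand {
    Object perform(Map<String, Object> state);
}

// Writes a key/value pair into a node's local state; returns the
// previous value, as a cache put typically does.
class PutCommand implements ReplicableCommand {
    private final String key;
    private final Object value;
    PutCommand(String key, Object value) { this.key = key; this.value = value; }
    public Object perform(Map<String, Object> state) {
        return state.put(key, value);
    }
}

// Reads a value from a node's local state.
class GetCommand implements ReplicableCommand {
    private final String key;
    GetCommand(String key) { this.key = key; }
    public Object perform(Map<String, Object> state) {
        return state.get(key);
    }
}

// Each node executes the commands it receives against its own state
// and returns a response, mimicking a remote invocation.
class Node {
    private final String address; // stands in for Infinispan's Address
    private final Map<String, Object> state = new HashMap<>();
    Node(String address) { this.address = address; }
    String address() { return address; }
    Object invoke(ReplicableCommand cmd) {
        return cmd.perform(state);
    }
}
```

Broadcasting a command to the grid then amounts to invoking the same command instance on every known Node, which is conceptually what happens when Infinispan replicates a write.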
The RpcManager is a helper component that organises, optimises, and issues RPCs, and optionally waits for, collects, and parses Responses. Transports are encapsulations of the interaction with sockets and streams. Infinispan currently ships with a single Transport implementation – a JGroupsTransport, which makes use of JGroups' rich and diverse set of protocols and guarantees.
Certain networking components, such as the RpcManager and the Transport, are globally scoped, i.e., a single instance of each is shared among all caches created by a single CacheManager. This aids resource reuse, multiplexes calls over the same channel, and helps keep Cache instances lightweight, but it means that you cannot use a differently tuned network stack for different Cache instances created by the same CacheManager. Different Cache instances can, however, be configured with different CacheModes, e.g., CacheMode.LOCAL, CacheMode.DIST_SYNC, etc.
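This can be sketched with the programmatic API. The snippet below is written against the Infinispan 4.x API of the time; method names such as defineConfiguration and setCacheMode reflect that version, so verify them against the release you are using.

```java
import org.infinispan.Cache;
import org.infinispan.config.Configuration;
import org.infinispan.config.GlobalConfiguration;
import org.infinispan.manager.CacheManager;
import org.infinispan.manager.DefaultCacheManager;

// One CacheManager: one globally scoped network stack (RpcManager, Transport)
GlobalConfiguration gc = GlobalConfiguration.getClusteredDefault();
gc.setClusterName("myCluster");
CacheManager manager = new DefaultCacheManager(gc);

// A distributed, synchronous cache...
Configuration dist = new Configuration();
dist.setCacheMode(Configuration.CacheMode.DIST_SYNC);
manager.defineConfiguration("distributedCache", dist);

// ...and a purely local cache, sharing the same manager and transport
Configuration local = new Configuration();
local.setCacheMode(Configuration.CacheMode.LOCAL);
manager.defineConfiguration("localCache", local);

Cache<String, String> distributed = manager.getCache("distributedCache");
Cache<String, String> localOnly = manager.getCache("localCache");
```

Both caches here differ in CacheMode, but any RPCs the distributed cache issues travel over the single Transport owned by the shared CacheManager.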
The following diagram represents the relationship between these components.
Your network stack can be tuned by providing a JGroups configuration file and pointing to it in your Infinispan configuration:
<?xml version="1.0" encoding="UTF-8"?>
<infinispan xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="urn:infinispan:config:4.0">
   <global>
      <transport clusterName="myCluster" nodeName="nodeA">
         <property name="configurationFile" value="jgroups.xml" />
      </transport>
   </global>
</infinispan>
Infinispan ships with two default JGroups configuration files, one for TCP and one for UDP. These are good starting points from where you can tune and customise your JGroups configuration.
JGroups can be tuned to use either UDP or TCP as a transport, and supports various forms of discovery, including UDP multicast, or cloud-friendly FILE_PING (or S3_PING) for environments where multicast is not available. Please refer to the JGroups website for details on tuning JGroups further.
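As an illustration, a jgroups.xml for a TCP transport with static discovery might look roughly like the fragment below. This is a sketch only: the exact protocol list and attribute names vary between JGroups versions, and the host names and ports are placeholders, so start from the TCP configuration file shipped with Infinispan rather than from this fragment.

```xml
<config>
   <!-- TCP transport instead of UDP multicast -->
   <TCP bind_port="7800" />
   <!-- Static discovery: list known members explicitly,
        for environments where multicast is unavailable -->
   <TCPPING initial_hosts="hostA[7800],hostB[7800]" port_range="1" />
   <MERGE2 />
   <FD_SOCK />
   <VERIFY_SUSPECT />
   <pbcast.NAKACK use_mcast_xmit="false" />
   <UNICAST />
   <pbcast.STABLE />
   <pbcast.GMS />
   <FRAG2 />
</config>
```

Swapping TCPPING for FILE_PING or S3_PING in the discovery slot is what makes the same stack usable in cloud environments.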
For a quick intro on getting an Infinispan cluster up and running, please refer to this tutorial.