Two-Phase-Commit for Distributed In-Memory Caches
2-Phase-Commit is probably one of the oldest consensus protocols and is known for its deficiencies when it comes to handling failures: it may block servers indefinitely while they wait in the prepared state. To mitigate this, the 3-Phase-Commit protocol was introduced, which adds better fault tolerance at the expense of an extra network round trip and higher latencies.
I would like to extend these traditional consensus concepts to distributed in-memory caching, as the topic is particularly relevant to what we do at GridGain. GridGain also uses 2-phase-commit transactions but, unlike disk-based persistent systems, it does not need to add a 3rd phase to the commit protocol in order to preserve data consistency during failures.
We want to avoid the 3-Phase-Commit protocol because it adds an additional network round-trip and has a negative impact on latencies and performance.
In GridGain, the data is partitioned in memory, which in the purest form means that every key-value pair is assigned to a specific node within the cluster. If, for example, we have 100 keys and 4 nodes, then every node will cache 100 / 4 = 25 keys. This way, the more nodes we have, the more data we can cache. Of course, in real-life scenarios we also have to worry about failures and redundancy, so every key-value pair will have 1 primary copy and 1 or more backup copies.
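The partitioning idea can be sketched in a few lines. This is a deliberately simplistic hash-modulo scheme with a single backup placed on the "next" node; GridGain's actual affinity function is more sophisticated, and all names below are illustrative:

```python
from collections import Counter

def primary_node(key, num_nodes):
    """Map a key to the node holding its primary copy (simplified hash-mod scheme)."""
    return hash(key) % num_nodes

def backup_node(key, num_nodes):
    """Place the single backup copy on the next node, never the primary itself."""
    return (primary_node(key, num_nodes) + 1) % num_nodes

# With 100 keys and 4 nodes, the keys spread across the cluster,
# roughly 25 primaries per node, and each backup lands elsewhere.
keys = [f"key-{i}" for i in range(100)]
primaries = Counter(primary_node(k, 4) for k in keys)
assert sum(primaries.values()) == 100
assert all(backup_node(k, 4) != primary_node(k, 4) for k in keys)
```

Adding a node simply shrinks each node's share of the keys, which is why capacity grows with the cluster.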
Let us now look at the 2-Phase-Commit protocol in more detail to see why we can avoid the 3rd commit phase in GridGain.
Generally, the 2-Phase-Commit protocol is initiated when an application has decided to commit the transaction. The Coordinator node sends a "Prepare" message to all the participating nodes holding primary copies (primary nodes), and the primary nodes, after acquiring the necessary locks, synchronously send the "Prepare" message to the nodes holding backup copies (backup nodes). Once every node votes "Yes", the "Commit" message is sent and the transaction gets committed.
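The happy path of this flow can be sketched as follows. The `Node` and `two_phase_commit` names are made up for illustration and are not GridGain's API; lock acquisition and the primary-to-backup fan-out are collapsed into a single `prepare` call:

```python
class Node:
    """Illustrative transaction participant (primary or backup)."""
    def __init__(self, name):
        self.name = name
        self.prepared = False
        self.committed = False

    def prepare(self):
        # Acquire locks and stage the update; in an in-memory cache
        # the only valid vote here is "Yes" (see below).
        self.prepared = True
        return "Yes"

    def commit(self):
        assert self.prepared, "commit must follow a successful prepare"
        self.committed = True

def two_phase_commit(participants):
    # Phase 1: the coordinator sends "Prepare" to every participant.
    votes = [n.prepare() for n in participants]
    # Phase 2: commit only if every participant voted "Yes".
    if all(v == "Yes" for v in votes):
        for n in participants:
            n.commit()
        return "COMMITTED"
    return "ROLLED_BACK"

nodes = [Node(f"node-{i}") for i in range(3)]
assert two_phase_commit(nodes) == "COMMITTED"
assert all(n.committed for n in nodes)
```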
Advantages of In-Memory
So, why does in-memory-only processing allow us to avoid the 3rd phase, as opposed to persistent disk-based transactions? The main reason is that the in-memory cache is purely volatile and does not need to worry about leaving any state behind in case of a crash. Contrast that with persistent architectures, where after a failure we need to worry about whether the state on disk was committed or not.
Another advantage of an in-memory-only distributed cache is that the only valid vote for the "Prepare" message is "Yes". There is really no valid reason for any cluster member to vote "No". This essentially means that the only reason a rollback should happen is if the Coordinator node crashed before it was able to send the "Prepare" message to all the participating nodes.
Now that we have the ground rules settled, let's analyze what happens in case of failures:
Backup Node Failures
If a backup node fails during either the "Prepare" phase or the "Commit" phase, then no special handling is needed. The data will still be committed on the nodes that are alive. GridGain will then, in the background, designate a new backup node, and the data will be copied there outside of the transaction scope.
Primary Node Failures
If a primary node fails before or during the "Prepare" phase, then the coordinator will designate one of the backup nodes to become primary and retry the "Prepare" phase. If the failure happens before or during the "Commit" phase, then the backup nodes will detect the crash and send a message to the Coordinator node to find out whether to commit or rollback. The transaction still completes and the data within distributed cache remains consistent.
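The primary-failure case before "Prepare" amounts to a simple failover rule, sketched below. The function name and return values are hypothetical and only illustrate the decision the coordinator makes:

```python
def prepare_with_failover(primary_alive, backups):
    """Illustrative sketch: retry Prepare on a promoted backup if the primary died."""
    if primary_alive:
        return "prepared-on-primary"
    if backups:
        new_primary = backups.pop(0)  # promote the first backup to primary
        return f"prepared-on-{new_primary}"
    # No replicas left for this partition: the transaction cannot proceed.
    raise RuntimeError("no replicas available for partition")

assert prepare_with_failover(True, ["b1"]) == "prepared-on-primary"
assert prepare_with_failover(False, ["b1", "b2"]) == "prepared-on-b1"
```

Because backups already hold the same data, promoting one and retrying "Prepare" is transparent to the application.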
Coordinator Node Failures
Failure of the Coordinator node is a little trickier. Without the Coordinator node, neither primary nodes nor backup nodes know whether to commit the transaction or to roll it back. In this case the "Recovery Protocol" is initiated, and all the nodes participating in the transaction send a message to every other participating node asking whether the "Prepare" message was received. If at least one of the nodes replies "No", then the transaction is rolled back; otherwise it is committed. Note that once all the nodes have received the "Prepare" message, we can safely commit, since voting "No" during the "Prepare" phase is impossible, as stated above.
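The recovery decision itself reduces to a unanimity check over the surviving participants. A minimal sketch, with illustrative names:

```python
def recover(prepare_received):
    """Recovery Protocol decision sketch.

    prepare_received maps each participating node's name to whether it
    received the "Prepare" message before the coordinator crashed.
    """
    # Since voting "No" during Prepare is impossible, "everyone was
    # prepared" is sufficient to commit safely.
    if all(prepare_received.values()):
        return "COMMIT"
    # At least one node never got "Prepare": the coordinator died
    # mid-broadcast, so the transaction must be rolled back.
    return "ROLLBACK"

assert recover({"n1": True, "n2": True, "n3": True}) == "COMMIT"
assert recover({"n1": True, "n2": False}) == "ROLLBACK"
```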
It is possible that some nodes will have already committed the transaction by the time they receive the Recovery Protocol message from other nodes. For such cases, every node keeps a backlog of completed transaction IDs for a certain period of time. If no ongoing transaction with the given ID is found, then the backlog is checked. If the transaction is not found in the backlog either, then it was never started, which means that the failure occurred before the "Prepare" phase completed and it is safe to roll back.
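The lookup order described above can be sketched as a small decision function. The data structures and names are illustrative, not GridGain internals:

```python
def recovery_decision(tx_id, active_txs, completed_backlog):
    """Sketch of how a node answers a Recovery Protocol query for tx_id."""
    if tx_id in active_txs:
        return "STILL_RUNNING"      # normal recovery voting applies
    if tx_id in completed_backlog:
        return "ALREADY_COMMITTED"  # this node finished before being asked
    # Unknown everywhere: the transaction never started on this node,
    # so the crash preceded Prepare and rollback is safe.
    return "ROLLBACK"

assert recovery_decision("tx-7", {"tx-7"}, set()) == "STILL_RUNNING"
assert recovery_decision("tx-7", set(), {"tx-7"}) == "ALREADY_COMMITTED"
assert recovery_decision("tx-9", set(), {"tx-7"}) == "ROLLBACK"
```

The backlog is bounded in time, which is why completed IDs only need to be retained long enough for recovery queries from peers to arrive.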
Since the data is volatile, and does not leave any state after the crash, it does not really matter if any of the primary or backup nodes also crashed together with the coordinator node. The Recovery Protocol still works the same.
Note that we are able to fully recover the transaction without introducing the 3rd commit phase mainly because the data in distributed in-memory caches is volatile and node crashes do not leave any state behind.
The important advantage of the 2-Phase-Commit Recovery Protocol in GridGain over the 3-Phase-Commit is that, in the absence of failures, the recovery protocol does not introduce any additional overhead, while 3-Phase-Commit adds an extra synchronous network round trip to every transaction and, therefore, has a negative impact on performance and latencies.
GridGain also supports modes where it works with various persistent stores including transactional databases. In my next blog, I will cover the scenario where a cache sits on top of a transactional persistent store, and how the 2-Phase-Commit protocol works in that case.
Published at DZone with permission of Dmitriy Setrakyan, DZone MVB.