Lose the Lock While Embracing Concurrency
Adopting concurrency? You may want to lose the lock. Here's a look at the basics of message routing, covering concurrent timelines, linearizability, and more.
Providing robust message routing was a priority for us at Workiva when building our distributed messaging infrastructure. This encompassed directed messaging, which allows us to route messages to specific endpoints based on service or client identifiers, as well as topic fan-out with support for wildcards and pattern matching.
Existing message-oriented middleware, such as RabbitMQ, provides varying levels of support for these but doesn’t offer the rich features needed to power Wdesk. This includes transport fallback with graceful degradation, tunable qualities of service, support for client-side messaging, and pluggable authentication middleware. As such, we set out to build a new system, not by reinventing the wheel, but by repurposing it.
Eventually, we settled on Apache Kafka as our wheel, or perhaps more accurately, our log. Kafka demonstrates a telling story of speed, scalability, and fault tolerance—each a requisite component of any reliable messaging system—but it’s only half the story. Pub/sub is a critical messaging pattern for us and underpins a wide range of use cases, but Kafka’s topic model isn’t designed for this purpose. One of the key engineering challenges we faced was building a practical routing mechanism by which messages are matched to interested subscribers. On the surface, this problem appears fairly trivial and is far from novel, but it becomes quite interesting as we dig deeper.
Back to Basics
Topic routing works by matching a published message with interested subscribers. A consumer might subscribe to the topic “foo.bar.baz,” in which case any message published to this topic would be delivered to them. We also must support the * and # wildcards, which match exactly one word and zero or more words, respectively. In this sense, we follow the AMQP spec:
The routing key used for a topic exchange must consist of zero or more words delimited by dots. Each word may contain the letters A–Z and a–z and digits 0–9. The routing pattern follows the same rules as the routing key with the addition that * matches a single word, and # matches zero or more words. Thus the routing pattern *.stock.# matches the routing keys usd.stock and eur.stock.db but not stock.nasdaq.
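These matching rules can be sketched with a short recursive function. This is an illustrative implementation of the AMQP semantics quoted above, not code from our actual system; the function names are my own.

```go
package main

import (
	"fmt"
	"strings"
)

// match reports whether a dot-delimited routing key matches a routing
// pattern, where "*" matches exactly one word and "#" matches zero or
// more words.
func match(pattern, key string) bool {
	return matchWords(strings.Split(pattern, "."), strings.Split(key, "."))
}

func matchWords(pat, key []string) bool {
	switch {
	case len(pat) == 0:
		return len(key) == 0
	case pat[0] == "#":
		// "#" matches zero or more words: first try consuming zero
		// words, then try consuming one word and keeping "#".
		if matchWords(pat[1:], key) {
			return true
		}
		return len(key) > 0 && matchWords(pat, key[1:])
	case len(key) == 0:
		return false
	case pat[0] == "*" || pat[0] == key[0]:
		return matchWords(pat[1:], key[1:])
	default:
		return false
	}
}

func main() {
	fmt.Println(match("*.stock.#", "usd.stock"))    // true
	fmt.Println(match("*.stock.#", "eur.stock.db")) // true
	fmt.Println(match("*.stock.#", "stock.nasdaq")) // false
}
```

The recursion on "#" is exactly the backtracking that makes wildcard lookups in a trie more expensive than exact matches.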
This problem can be modeled using a trie structure. RabbitMQ went with this approach after exploring other options, like caching topics and indexing the patterns or using a deterministic finite automaton. The latter options have greater time and space complexities. The former requires backtracking the tree for wildcard lookups.
The subscription trie looks something like this:
Even with the backtracking required for wildcards, the trie ends up being the more performant solution due to its logarithmic complexity and tendency to fit CPU cache lines. Most tries have hot paths, particularly closer to the root, so caching becomes indispensable. The trie approach is also vastly easier to implement.
In almost all cases, this subscription trie needs to be thread-safe, as clients are concurrently subscribing, unsubscribing, and publishing. We could serialize access to it with a reader-writer lock. For some, this would be the end of the story, but for high-throughput systems, locking is a major bottleneck. We can do better.
Breaking the Lock
We considered lock-free techniques that could be applied. Lock-free concurrency means that while a particular thread of execution may be blocked, all CPUs are able to continue processing other work. For example, imagine a program that protects access to some resource using a mutex. If a thread acquires this mutex and is subsequently preempted, no other thread can proceed until this thread is rescheduled by the OS. If the scheduler is adversarial, it may never resume execution of the thread, and the program would be effectively deadlocked. A key point, however, is that the mere lack of a lock does not guarantee a program is lock-free. In this context, “lock” really refers to deadlock, livelock, or the misdeeds of a malevolent scheduler.
In practice, what lock-free concurrency buys us is increased system throughput at the expense of increased tail latencies. Looking at a transactional system, lock-freedom allows us to process many concurrent transactions, any of which may block, while guaranteeing systemwide progress. Depending on the access patterns, when a transaction does block, there are always other transactions which can be processed—a CPU never idles. For high-throughput databases, this is essential.
Concurrent Timelines and Linearizability
Lock-freedom can be achieved using a number of techniques, but it ultimately reduces to a small handful of fundamental patterns. In order to fully comprehend these patterns, it’s important to grasp the concept of linearizability.
It takes approximately 100 nanoseconds for data to move from the CPU to main memory. This means that the laws of physics govern the unavoidable discrepancy between when you perceive an operation to have occurred and when it actually occurred. There is the time from when an operation is invoked to when some state change physically occurs (call it t_inv), and there is the time from when that state change occurs to when we actually observe the operation as completed (call it t_com). Operations are not instantaneous, which means the wall-clock history of operations is uncertain. t_inv and t_com vary for every operation. This is more easily visualized using a timeline diagram like the one below:
This timeline shows several reads and writes happening concurrently on some state. Physical time moves from left to right. This illustrates that even if a write is invoked before another concurrent write in real time, the later write could be applied first. If there are multiple threads performing operations on shared state, the notion of physical time is meaningless.
We use a linearizable consistency model to restore some semblance of a timeline by providing a total order of all state updates. Linearizability requires that each operation appears to occur atomically at some point between its invocation and completion. This point is called the linearization point. When an operation completes, it’s guaranteed to be observable by all threads because, by definition, the operation occurred before its completion time. After this point, reads will only see this value or a later one—never anything before. This gives us a proper sequencing of operations which can be reasoned about. Linearizability is a correctness condition for concurrent objects.
Of course, linearizability comes at a cost. This is why most memory models aren’t linearizable by default. Going back to our subscription trie, we could make operations on it appear atomic by applying a global lock. This kills throughput, but it ensures linearization.
In reality, the trie operations do not occur at a specific instant in time as the illustration above depicts. However, mutual exclusion gives them that appearance and, as a result, linearizability holds at the expense of systemwide progress. Acquiring and releasing the lock appear instantaneous in the timeline because they are backed by atomic hardware operations like test-and-set. Linearizability is a composable property, meaning if an object is composed of linearizable objects, it is also linearizable. This allows us to build up abstractions from linearizable hardware instructions to data structures, all the way to linearizable distributed systems.
Read-Modify-Write and CAS
Locks are expensive, not just because of contention but because they completely preclude parallelism. As we saw, if a thread which acquires a lock is preempted, any other threads waiting for the lock will continue to block.
Read-modify-write operations like compare-and-swap (CAS) offer a lock-free approach to ensuring linearizable consistency. Such techniques loosen the bottleneck by guaranteeing systemwide throughput even if one or more threads are blocked. The typical pattern is to perform some speculative work and then attempt to publish the changes with a CAS. If the CAS fails, then another thread performed a concurrent operation, and the transaction needs to be retried. If it succeeds, the operation was committed and is now visible, preserving linearizability. The CAS loop is a pattern used in many lock-free data structures and proves to be a useful primitive for our subscription trie.
CAS is susceptible to the ABA problem. These operations work by comparing values at a memory address. If the value is the same, it’s assumed that nothing has changed. However, this can be problematic if another thread modifies the shared memory and changes it back before the first thread resumes execution. The ABA problem is represented by the following sequence of events:
- Thread T1 reads shared-memory value A.
- T1 is preempted, and T2 is scheduled.
- T2 changes A to B and then back to A.
- T2 is preempted, and T1 is scheduled.
- T1 sees that the shared-memory value is A and continues.
In this situation, T1 assumes nothing has changed when, in fact, an invariant may have been violated. We’ll see how this problem is addressed later.
At this point, we’ve explored the subscription-matching problem space, demonstrated why it’s an area of high contention, and examined why locks pose a serious problem to throughput. Linearizability provides an important foundation for understanding lock-freedom, and we’ve looked at the most fundamental pattern for building lock-free data structures: compare-and-swap. Next, we will take a deep dive into applying lock-free techniques in practice by building on this knowledge. We’ll continue our narrative of how we applied these same techniques to our subscription engine and provide some further motivation for them.
Let’s revisit our subscription trie from earlier. Our naive approach to making it linearizable was to protect it with a lock. This proved easy, but as we observed, it severely limited throughput. For a message broker, access to this trie is a critical path, and we usually have multiple threads performing inserts, removals, and lookups on it concurrently. Intuition tells us we can implement these operations without coarse-grained locking by relying on a CAS to perform mutations on the trie.
If we recall, read-modify-write is typically applied by copying a shared variable to a local variable, performing some speculative work on it, and attempting to publish the changes with a CAS. When inserting into the trie, our speculative work is creating an updated copy of a node. We commit the new node by updating the parent’s reference with a CAS. For example, if we want to add a subscriber to a node, we would copy the node, add the new subscriber, and CAS the pointer to it in the parent.
This approach is broken, however. To see why, imagine a thread inserts a subscription on a node while another thread concurrently inserts a subscription as a child of that node. The second insert could be lost due to the sequencing of the reference updates. The diagram below illustrates this problem. Dotted lines represent a reference updated with a CAS.
The orphaned nodes containing “x” and “z” mean the subscription to “foo.bar” was lost. The trie is in an inconsistent state.
We looked to existing research in the field of non-blocking data structures to help illuminate a path. “Concurrent Tries with Efficient Non-Blocking Snapshots” by Prokopec et al. introduces the Ctrie, a non-blocking, concurrent hash trie based on shared-memory, single-word CAS instructions.
A hash array mapped trie (HAMT) is an implementation of an associative array which, unlike a hash map, is dynamically allocated. Memory consumption is always proportional to the number of keys in the trie. A HAMT works by hashing keys and using the resulting bits in the hash code to determine which branches to follow down the trie. Each node contains a table with a fixed number of branch slots, typically 32. On a 64-bit machine, this would mean it takes 256 bytes (32 branches × 8-byte pointers) to store the branch table of a node.
The size of L1–L3 cache lines is 64 bytes on most modern processors. We can’t fit the branch table in a CPU cache line, let alone the entire node. Instead of allocating space for all branches, we use a bitmap to indicate the presence of a branch at a particular slot. This reduces the size of an empty node from roughly 264 bytes to 12 bytes. We can then safely fit a node with up to six branches in a single cache line.
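The standard bitmap trick works like this: to find a branch's index in the compact array, count the bits set below its slot position in the bitmap. A small sketch, with an illustrative function name:

```go
package main

import (
	"fmt"
	"math/bits"
)

// branchIndex maps a 5-bit hash fragment (a slot in 0–31) to an index
// in the compact branch array by counting the bits set below the
// slot's position in the bitmap. Presence is a single bitmap test.
func branchIndex(bitmap uint32, slot uint) (idx int, present bool) {
	flag := uint32(1) << slot
	return bits.OnesCount32(bitmap & (flag - 1)), bitmap&flag != 0
}

func main() {
	// A bitmap with branches at slots 1, 4, and 9: only three
	// pointers are allocated instead of thirty-two.
	bitmap := uint32(1<<1 | 1<<4 | 1<<9)
	fmt.Println(branchIndex(bitmap, 4)) // 1 true  (one branch below slot 4)
	fmt.Println(branchIndex(bitmap, 9)) // 2 true
	fmt.Println(branchIndex(bitmap, 3)) // 1 false (slot empty)
}
```

Popcount is a single instruction on modern CPUs, so this indirection costs almost nothing while keeping small nodes cache-line sized.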
The Ctrie is a concurrent, lock-free version of the HAMT which ensures progress and linearizability. It solves the CAS problem described above by introducing indirection nodes, or I-nodes, which remain present in the trie even as nodes above and below change. This invariant ensures correctness on inserts by applying the CAS operation on the I-node instead of the internal node array.
An I-node may point to a Ctrie node, or C-node, which is an internal node containing a bitmap and an array of references to branches. A branch is either an I-node or a singleton node (S-node) containing a key-value pair. The S-node is a leaf in the Ctrie. A newly initialized Ctrie starts with a root pointer to an I-node which points to an empty C-node. The diagram below illustrates a sequence of inserts on a Ctrie.
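The node kinds described above might be expressed as Go types like these. This is a structural sketch for orientation, not the layout used by any particular implementation:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// iNode is an indirection node; it is the only thing mutated (via CAS
// on its main pointer) and stays in place while nodes around it change.
type iNode struct {
	main atomic.Pointer[cNode]
}

// cNode is an internal node: a bitmap plus a compact branch array.
type cNode struct {
	bitmap   uint32
	branches []branch
}

// branch is either an *iNode or an *sNode.
type branch interface{ isBranch() }

// sNode is a leaf holding a key-value pair.
type sNode struct {
	key   string
	value string
}

func (*iNode) isBranch() {}
func (*sNode) isBranch() {}

// newCtrie builds the initial state: a root I-node pointing to an
// empty C-node.
func newCtrie() *iNode {
	root := &iNode{}
	root.main.Store(&cNode{})
	return root
}

func main() {
	root := newCtrie()
	fmt.Println(len(root.main.Load().branches)) // 0
}
```

Because every mutation is a CAS on an I-node's `main` pointer, a C-node and its branch array are never modified in place, only replaced wholesale.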
An insert starts by atomically reading the I-node’s reference. Next, we copy the C-node and add the new key, recursively insert on an I-node, or extend the Ctrie with a new I-node. The new C-node is then published by performing a CAS on the parent I-node. A failed CAS indicates another thread has mutated the I-node. We re-linearize by atomically reading the I-node’s reference again, which gives us the current state of the Ctrie according to its linearizable history. We then retry the operation until the CAS succeeds. In this case, the linearization point is a successful CAS. The following figure shows why the presence of I-nodes ensures consistency.
In the above diagram, (k4, v4) is inserted into a Ctrie containing (k1, v1), (k2, v2), and (k3, v3). The new key-value pair is added to node C1 by creating a copy, C1′, with the new entry. A CAS is then performed on the pointer at I1, indicated by the dotted line. Since C1′ continues pointing to I2, any concurrent updates which occur below it will remain present in the trie. C1 is then garbage collected once no more threads are accessing it. Because of this, Ctries are much easier to implement in a garbage-collected language. It turns out that this deferred reclamation also solves the ABA problem described earlier by ensuring memory addresses are recycled only when it’s safe to do so.
The I-node invariant is enough to guarantee correctness for inserts and lookups, but removals require some additional invariants in order to avoid update loss. Insertions extend the Ctrie with additional levels, while removals eliminate the need for some of these levels. This is because we want to keep the Ctrie as compact as possible while still remaining correct. For example, a remove operation could result in a C-node with a single S-node below it. This state is valid, but the Ctrie could be made more compact, and lookups on the lone S-node more efficient, if it were moved up into the C-node above. This would allow the I-node and C-node to be removed.
The problem with this approach is that it can cause insertions to be lost. If we move the S-node up and replace the dangling I-node reference with it, another thread could perform a concurrent insert on that I-node just before the compression occurs. The insert would be lost because the pointer to the I-node would be removed.
This issue is solved by introducing a new type of node called the tomb node (T-node) and an associated invariant. The T-node is used to ensure proper ordering during removals. The invariant is as follows: if an I-node points to a T-node at some time t0, then for all times greater than t0, the I-node points to the same T-node. More concisely, a T-node is the last value assigned to an I-node. This ensures that no insertions occur at an I-node that is being compressed. We call such an I-node a tombed I-node.
If a removal results in a non-root-level C-node with a single S-node below it, the C-node is replaced with a T-node wrapping the S-node. This guarantees that every I-node except the root points to a C-node with at least one branch. This diagram depicts the result of removing (k2, v2) from a Ctrie:
Removing (k2, v2) results in a C-node with a single branch, so it’s subsequently replaced with a T-node. The T-node provides a sequencing mechanism by effectively acting as a marker. While it solves the problem of lost updates, it doesn’t give us a compacted trie. If two keys have long matching hash code prefixes, removing one of the keys would result in a long chain of C-nodes followed by a single T-node at the end.
We introduced an invariant which says that once an I-node points to a T-node, it always points to that T-node. This means we can’t change a tombed I-node’s pointer, so instead we replace the I-node with its resurrection. The resurrection of a tombed I-node is the S-node wrapped by its T-node. When a T-node is produced during a removal, we ensure that it’s still reachable, and if it is, we resurrect its tombed I-node in the C-node above. If it’s not reachable, another thread has already performed the compression. To ensure lock-freedom, all operations which read a T-node must help compress it instead of waiting for the removing thread to complete. Compression on the Ctrie from the previous diagram is illustrated below.
The resurrection of the tombed I-node ensures the Ctrie is optimally compressed for arbitrarily long chains while maintaining integrity.
With a 32-bit hash code space, collisions are rare but still nontrivial. To deal with this, we introduce one final node, the list node (L-node). An L-node is essentially a persistent linked list. If there is a collision between the hash codes of two different keys, they are placed in an L-node. This is analogous to a hash table using separate chaining to resolve collisions.
One interesting property of the Ctrie is support for lock-free, linearizable, constant-time snapshots. Most concurrent data structures do not support snapshots, instead opting for locks or requiring a quiescent state. This allows Ctries to have O(1) iterator creation, clear, and size retrieval (amortized).
Constant-time snapshots are implemented by writing the Ctrie as a persistent data structure and assigning a generation count to each I-node. A persistent hash trie is updated by rewriting the path from the root of the trie down to the leaf the key belongs to while leaving the rest of the trie intact. The generation demarcates Ctrie snapshots. To create a new snapshot, we copy the root I-node and assign it a new generation. When an operation detects that an I-node’s generation is older than the root’s generation, it copies the I-node to the new generation and updates the parent. The path from the root to some node is only updated the first time it’s accessed, making the snapshot an O(1) operation.
The final piece needed for snapshots is a special type of CAS operation. There is a race condition between the thread creating a snapshot and the threads which have already read the root I-node with the previous generation. The linearization point for an insert is a successful CAS on an I-node, but we need to ensure both that the I-node has not been modified and that its generation matches that of the root. This could be accomplished with a double compare-and-swap, but most architectures do not support such an operation.
The alternative is to use the RDCSS double-compare-single-swap originally described by Harris et al. We implement an operation with similar semantics to RDCSS called GCAS, or generation compare-and-swap. The GCAS allows us to atomically compare both the I-node pointer and its generation to the expected values before committing an update.
After researching the Ctrie, we wrote a Go implementation in order to gain a deeper understanding of the applied techniques. These same ideas would hopefully be adaptable to our problem domain.
Generalizing the Ctrie
The subscription trie shares some similarities with the hash array mapped trie, but there are some key differences. First, values are not strictly stored at the leaves but can be on internal nodes as well. Second, the decomposed topic is used to determine how the trie is descended, rather than a hash code. Wildcards complicate lookups further by requiring backtracking. Lastly, the number of branches on a node is not a fixed size. Applying the Ctrie techniques to the subscription trie, we end up with something like this:
Much of the same logic applies. The main distinctions are the branch traversal based on topic words and the rules around wildcards. Each branch is associated with a word and a set of subscribers and may or may not point to an I-node. The I-nodes still ensure correctness on inserts. The behavior of T-nodes changes slightly. With the Ctrie, a T-node is created from a C-node with a single branch and then compressed. With the subscription trie, we don’t introduce a T-node until all branches are removed. A branch is pruned if it has no subscribers and points to nowhere, or if it has no subscribers and points to a tombed I-node. The GCAS and snapshotting remain unchanged.
We implemented this Ctrie derivative in order to build our concurrent pattern-matching engine, matchbox. This library provides an exceptionally simple API which allows a client to subscribe to a topic, unsubscribe from a topic, and look up a topic’s subscribers. Snapshotting is also leveraged to retrieve the global subscription tree and the topics to which clients are currently subscribed. These are useful for seeing who currently has subscriptions and for what.
Matchbox has been pretty extensively benchmarked, but to see how it behaves, it’s critical to observe its performance under contention. Many messaging systems opt for a mutex, which tends to result in a lot of lock contention. It’s important to know what the access patterns look like in practice, but for our purposes, the workload is heavily parallel. We don’t want to waste CPU cycles if we can help it.
To see how matchbox compares to lock-based subscription structures, I benchmarked it against gnatsd, a popular high-performance messaging system also written in Go. gnatsd uses a tree-like structure protected by a mutex to manage subscriptions and offers similar wildcard semantics.
The benchmarks consist of one or more insertion goroutines and one or more lookup goroutines. Each insertion goroutine inserts 1,000 subscriptions, and each lookup goroutine looks up 1,000 subscriptions. We scale these goroutines up to see how the systems behave under contention.
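The shape of such a workload harness might look like the sketch below. The `store` interface, topic scheme, and `lockedStore` are illustrative stand-ins, not matchbox's or gnatsd's actual APIs.

```go
package main

import (
	"fmt"
	"sync"
)

// store abstracts the subscription structure under test.
type store interface {
	Subscribe(topic, sub string)
	Subscribers(topic string) []string
}

// runWorkload spawns nInsert goroutines each performing 1,000 inserts
// concurrently with nLookup goroutines each performing 1,000 lookups.
func runWorkload(s store, nInsert, nLookup int) {
	var wg sync.WaitGroup
	for g := 0; g < nInsert; g++ {
		wg.Add(1)
		go func(g int) {
			defer wg.Done()
			for i := 0; i < 1000; i++ {
				s.Subscribe(fmt.Sprintf("foo.%d.%d", g, i), "client")
			}
		}(g)
	}
	for g := 0; g < nLookup; g++ {
		wg.Add(1)
		go func(g int) {
			defer wg.Done()
			for i := 0; i < 1000; i++ {
				s.Subscribers(fmt.Sprintf("foo.%d.%d", g, i))
			}
		}(g)
	}
	wg.Wait()
}

// lockedStore is a trivial mutex-protected store used to exercise the
// harness, standing in for a lock-based subscription structure.
type lockedStore struct {
	mu   sync.RWMutex
	subs map[string][]string
}

func (s *lockedStore) Subscribe(topic, sub string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.subs[topic] = append(s.subs[topic], sub)
}

func (s *lockedStore) Subscribers(topic string) []string {
	s.mu.RLock()
	defer s.mu.RUnlock()
	return s.subs[topic]
}

func main() {
	s := &lockedStore{subs: map[string][]string{}}
	runWorkload(s, 1, 3) // the 1:3 insert-to-lookup mix
	fmt.Println(len(s.subs)) // 1000
}
```

Timing `runWorkload` for increasing goroutine counts against both a locked and a lock-free store is what produces the contention curves discussed below.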
The first benchmark is a 1:1 concurrent insert-to-lookup workload. A lookup corresponds to a message being published and matched to interested subscribers, while an insert occurs when a client subscribes to a topic. In practice, lookups are much more frequent than inserts, so the second benchmark is a 1:3 concurrent insert-to-lookup workload to help simulate this. The timings correspond to the complete insert and lookup workload. GOMAXPROCS, which controls the number of operating system threads that can execute simultaneously, was set to 8. The benchmarks were run on a machine with a 2.6 GHz Intel Core i7 processor.
It’s quite clear that the lock-free approach scales a lot better under contention. This follows our intuition because lock-freedom allows systemwide progress even when a thread is blocked. If one goroutine is blocked on an insert or lookup operation, other operations may proceed. With a mutex, this isn’t possible.
Matchbox performs well, particularly in multithreaded environments, but there are still more optimizations to be made. These include improvements in both memory consumption and runtime performance. Applying the Ctrie techniques to this type of trie results in a fairly non-compact structure. There may be ways to roll up branches—either eagerly or after removals—and expand them lazily as necessary. Other optimizations might include placing a cache or Bloom filter in front of the trie to avoid descending it. The main difficulty with these will be managing support for wildcards.
To summarize, we’ve seen why subscription matching is often a major concern for message-oriented middleware and why it’s frequently a bottleneck. Concurrency is crucial for high-performance systems, and we’ve looked at how we can achieve concurrency without relying on locks while framing it within the context of linearizability. Compare-and-swap is a fundamental tool used to implement lock-free data structures, but it’s important to be conscious of its pitfalls. We introduce invariants to protect data consistency. The Ctrie is a great example of how to do this and was foundational in our lock-free subscription-matching implementation. Finally, we validated our work by showing that lock-free data structures scale dramatically better with multithreaded workloads under contention.
My thanks to Steven Osborne and Dustin Hiatt for reviewing this article.
Published at DZone with permission of Tyler Treat, DZone MVB. See the original article here.