
Lock Striping in Java


Take a look at this implementation of lock striping in Java after examining the effects of a sequential bottleneck in concurrent requests.


There are dozens of decent lock-free hashtable implementations. Instead of using plain locks, those data structures usually rely on CAS-based operations to remain lock-free. Given that opening, it might sound like I’m going to use this post to build an argument for lock-free data structures. Quite surprisingly, that’s not the case. On the contrary, here we’re going to talk about plain old locks.

Now that everybody’s on board, let’s implement a concurrent version of Map with this simple contract:

Java

public interface ConcurrentMap<K, V> {

    boolean put(K key, V value);
    V get(K key);
    boolean remove(K key);
}


Since this is going to be a thread-safe data structure, we should acquire some locks and subsequently release them while putting, getting, or removing elements:

Java

Also, we should initialize the implementation with a pre-defined number of buckets:

Java
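The two snippets above are missing from this page. Here is a hedged sketch of what they plausibly looked like, inferred from how `acquire`, `release`, `table`, `size`, and `INITIAL_CAPACITY` are used in the later snippets (the value 16 and the internals of `Entry` are my own assumptions, not from the article):

```java
import java.util.ArrayList;
import java.util.List;

public abstract class AbstractConcurrentMap<K, V> {

    // Entry presumably hashes and compares by key alone, since put() uses
    // indexOf() to find an existing key and findBucket() uses hashCode()
    protected static final class Entry<K, V> {
        final K key;
        final V value;

        Entry(K key, V value) {
            this.key = key;
            this.value = value;
        }

        @Override
        public boolean equals(Object o) {
            return o instanceof Entry && key.equals(((Entry<?, ?>) o).key);
        }

        @Override
        public int hashCode() {
            return key.hashCode();
        }
    }

    protected static final int INITIAL_CAPACITY = 16; // assumed default

    protected List<Entry<K, V>>[] table;
    protected int size;

    @SuppressWarnings("unchecked")
    protected AbstractConcurrentMap() {
        // start with a fixed number of empty buckets
        table = (List<Entry<K, V>>[]) new List[INITIAL_CAPACITY];
        for (var i = 0; i < INITIAL_CAPACITY; i++) {
            table[i] = new ArrayList<>();
        }
    }

    // locking strategy is left to subclasses: one global lock, or striped locks
    protected abstract void acquire(Entry<K, V> entry);

    protected abstract void release(Entry<K, V> entry);
}
```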

One Size Fits All?

What happens when the number of stored elements is far greater than the number of buckets? Even in the presence of the most versatile hash functions, our hashtable’s performance would degrade significantly. In order to prevent this, we need to add more buckets when needed:

Java

protected abstract boolean shouldResize();
protected abstract void resize();


With the help of these abstract template methods, here’s how we can put a new key-value pair into our map:

Java

@Override
public boolean put(K key, V value) {
    if (key == null) return false;

    var entry = new Entry<>(key, value);
    acquire(entry);
    try {
        var bucketNumber = findBucket(entry);
        var currentPosition = table[bucketNumber].indexOf(entry);
        if (currentPosition == -1) {
            table[bucketNumber].add(entry);
            size++;
        } else {
            table[bucketNumber].set(currentPosition, entry);
        }
    } finally {
        release(entry);
    }

    if (shouldResize()) resize();
    return true;
}

protected int findBucket(Entry<K, V> entry) {
    return abs(entry.hashCode()) % table.length;
}


In order to find a bucket for the new element, we rely on the hash code of the key. Also, we should take that hash code modulo the number of buckets to keep the index within bounds!

The remaining parts are pretty straightforward: we acquire a lock for the new entry and add the entry only if it does not already exist; otherwise, we update the current entry. At the end of the operation, if we conclude that we should add more buckets, we do so to keep the hashtable balanced. The get and remove methods can be implemented just as easily.
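The article leaves get and remove to the reader. Here is a hedged sketch of how they might look, following the same acquire/try/finally pattern as put. It is written as a self-contained single-lock map so it compiles on its own; the class name SketchConcurrentMap and the Entry internals are my own stand-ins, not from the article:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReentrantLock;

import static java.lang.Math.abs;

// single-lock stand-in for the article's map, compact enough to compile alone
final class SketchConcurrentMap<K, V> {

    private static final class Entry<K, V> {
        final K key;
        final V value;

        Entry(K key, V value) {
            this.key = key;
            this.value = value;
        }

        @Override
        public boolean equals(Object o) {
            return o instanceof Entry && key.equals(((Entry<?, ?>) o).key);
        }

        @Override
        public int hashCode() {
            return key.hashCode();
        }
    }

    private final List<Entry<K, V>>[] table;
    private final ReentrantLock lock = new ReentrantLock();
    private int size;

    @SuppressWarnings("unchecked")
    SketchConcurrentMap() {
        table = (List<Entry<K, V>>[]) new List[16];
        for (var i = 0; i < table.length; i++) {
            table[i] = new ArrayList<>();
        }
    }

    public boolean put(K key, V value) {
        if (key == null) return false;
        var entry = new Entry<K, V>(key, value);
        lock.lock();
        try {
            var bucket = table[abs(entry.hashCode()) % table.length];
            var position = bucket.indexOf(entry);
            if (position == -1) {
                bucket.add(entry);
                size++;
            } else {
                bucket.set(position, entry); // update the existing key in place
            }
        } finally {
            lock.unlock();
        }
        return true;
    }

    // get mirrors put: lock, locate the bucket, then search it for the key
    public V get(K key) {
        if (key == null) return null;
        var probe = new Entry<K, V>(key, null);
        lock.lock();
        try {
            var bucket = table[abs(probe.hashCode()) % table.length];
            var position = bucket.indexOf(probe);
            return position == -1 ? null : bucket.get(position).value;
        } finally {
            lock.unlock();
        }
    }

    // remove likewise: locate the bucket and drop the matching entry, if any
    public boolean remove(K key) {
        if (key == null) return false;
        var probe = new Entry<K, V>(key, null);
        lock.lock();
        try {
            var removed = table[abs(probe.hashCode()) % table.length].remove(probe);
            if (removed) size--;
            return removed;
        } finally {
            lock.unlock();
        }
    }
}
```

Note how get and remove build a "probe" entry with a null value: since Entry equality is key-based, the probe matches whatever entry holds that key.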

Let’s fill in the blanks by implementing all those abstract methods!

One Lock to Rule Them All

The simplest way to implement a thread-safe version of this data structure is to use just one lock for all its operations:

Java
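The snippet is missing from this page; here is a plausible reconstruction based on the surrounding description. The class name CoarseGrainedConcurrentMap comes from the benchmark section later on, and the parent-class stub is a minimal stand-in so the sketch compiles on its own:

```java
import java.util.concurrent.locks.ReentrantLock;

// minimal stand-in for the article's abstract parent
abstract class AbstractConcurrentMap<K, V> {
    protected static class Entry<K, V> { }
    protected abstract void acquire(Entry<K, V> entry);
    protected abstract void release(Entry<K, V> entry);
}

final class CoarseGrainedConcurrentMap<K, V> extends AbstractConcurrentMap<K, V> {

    private final ReentrantLock lock = new ReentrantLock();

    @Override
    protected void acquire(Entry<K, V> entry) {
        lock.lock(); // the entry is ignored: one lock guards the whole table
    }

    @Override
    protected void release(Entry<K, V> entry) {
        lock.unlock();
    }
}
```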
 

The current acquire-and-release implementations are completely independent of the map entry. As promised, we’re using a single ReentrantLock for synchronization. This approach is known as Coarse-Grained Synchronization since one lock enforces exclusive access across the whole hashtable.

Also, we should resize the hashtable to maintain its constant-time access and modification. To do that, we can apply different heuristics; for example, we can resize when the average bucket size exceeds a specific number:

Java

@Override
protected boolean shouldResize() {
    return size / table.length >= 5;
}


In order to add more buckets, we create a new table with, say, twice the capacity of the current one. Then we add all entries from the old table to the new one:

Java

@Override
protected void resize() {
    lock.lock();
    try {
        var oldTable = table;
        var doubledCapacity = oldTable.length * 2;

        table = (List<Entry<K, V>>[]) new List[doubledCapacity];
        for (var i = 0; i < doubledCapacity; i++) {
            table[i] = new ArrayList<>();
        }

        for (var entries : oldTable) {
            for (var entry : entries) {
                var newBucketNumber = abs(entry.hashCode()) % doubledCapacity;
                table[newBucketNumber].add(entry);
            }
        }
    } finally {
        lock.unlock();
    }
}


Please note that we should acquire and release the same lock for the resize operation, too.

Sequential Bottleneck

Suppose there are three concurrent requests for putting something into the first bucket, getting something from the third bucket, and removing something from the sixth bucket:

Concurrent Requests

Ideally, we expect a highly scalable concurrent data structure to serve such requests with as little coordination as possible. However, this is what happens in the real world:

Lock acquisition barrier

As we’re using only one lock for synchronization, concurrent requests will be blocked behind the first one that acquired the lock. When the first one releases the lock, the second one acquires it and after some time, releases it.

This phenomenon is known as the Sequential Bottleneck (similar to Head-of-Line Blocking), and we should mitigate its effect in concurrent environments.

The United Stripes of Locks

One way to improve our current coarse-grained implementation is to use a collection of locks instead of just one. This way, each lock is responsible for synchronizing one or more buckets. This is a finer-grained form of synchronization compared to our current approach:

Collection of locks

Here we’re dividing the hashtable into a few parts and synchronizing each part independently. This reduces the contention on each lock and consequently improves throughput. This idea is also called Lock Striping because we synchronize different parts (i.e. stripes) independently:

Java

public final class LockStripedConcurrentMap<K, V> extends AbstractConcurrentMap<K, V> {

    private final ReentrantLock[] locks;

    public LockStripedConcurrentMap() {
        super();
        locks = new ReentrantLock[INITIAL_CAPACITY];
        for (int i = 0; i < INITIAL_CAPACITY; i++) {
            locks[i] = new ReentrantLock();
        }
    }

    // omitted
}


We need a method to find an appropriate lock for each entry (i.e. bucket):

Java

private ReentrantLock lockFor(Entry<K, V> entry) {
    return locks[abs(entry.hashCode()) % locks.length];
}


Then we can acquire and release the lock for an operation on a particular entry as follows:

Java

@Override
protected void acquire(Entry<K, V> entry) {
    lockFor(entry).lock();
}

@Override
protected void release(Entry<K, V> entry) {
    lockFor(entry).unlock();
}


The resize procedure is almost the same as before. The only difference is that we should acquire all locks before the resize and release all of them after the resize.
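That locking discipline can be sketched as follows, under the assumption of a plain ReentrantLock array as in the constructor above (the table-doubling body is elided and passed in as a Runnable just to keep the sketch self-contained):

```java
import java.util.concurrent.locks.ReentrantLock;

final class StripedResizeSketch {

    private final ReentrantLock[] locks;

    StripedResizeSketch(int stripes) {
        locks = new ReentrantLock[stripes];
        for (var i = 0; i < stripes; i++) {
            locks[i] = new ReentrantLock();
        }
    }

    void resize(Runnable doubleTheTable) {
        // take every stripe so no bucket can be mutated mid-resize
        for (var lock : locks) lock.lock();
        try {
            doubleTheTable.run(); // same table-doubling logic as before
        } finally {
            // every stripe must be unlocked again
            for (var lock : locks) lock.unlock();
        }
    }
}
```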

We expect the fine-grained approach to outperform the coarse-grained one; let’s measure it!

Moment of Truth

In order to benchmark these two approaches, we’re going to use the following workload on both implementations:

Java

final int NUMBER_OF_ITERATIONS = 100;
final List<String> KEYS = IntStream.rangeClosed(1, 100)
                                   .mapToObj(i -> "key" + i).collect(toList());

void workload(ConcurrentMap<String, String> map) {
    var requests = new CompletableFuture<?>[NUMBER_OF_ITERATIONS * 3];
    for (var i = 0; i < NUMBER_OF_ITERATIONS; i++) {
        requests[3 * i] = CompletableFuture.supplyAsync(
            () -> map.put(randomKey(), "value")
        );
        requests[3 * i + 1] = CompletableFuture.supplyAsync(
            () -> map.get(randomKey())
        );
        requests[3 * i + 2] = CompletableFuture.supplyAsync(
            () -> map.remove(randomKey())
        );
    }

    CompletableFuture.allOf(requests).join();
}

String randomKey() {
    return KEYS.get(ThreadLocalRandom.current().nextInt(100));
}


Each time, we put a random key, get a random key, and remove a random key. Each workload runs these three operations 100 times. Adding JMH to the mix, each workload will be executed thousands of times:

Java

@Benchmark
@BenchmarkMode(Throughput)
public void forCoarseGrainedLocking() {
    workload(new CoarseGrainedConcurrentMap<>());
}

@Benchmark
@BenchmarkMode(Throughput)
public void forStripedLocking() {
    workload(new LockStripedConcurrentMap<>());
}


The coarse-grained implementation can handle 8,547 operations per second on average:

8547.430 ±(99.9%) 58.803 ops/s [Average]
(min, avg, max) = (8350.369, 8547.430, 8676.476), stdev = 104.523
CI (99.9%): [8488.627, 8606.234] (assumes normal distribution)


On the other hand, the fine-grained implementation can handle about 13,855 operations per second on average:

13855.946 ±(99.9%) 119.109 ops/s [Average]
(min, avg, max) = (13524.367, 13855.946, 14319.174), stdev = 211.716
CI (99.9%): [13736.837, 13975.055] (assumes normal distribution)


Recursive Split-Ordered Lists

In their paper, Split-Ordered Lists: Lock-Free Extensible Hash Tables, Ori Shalev and Nir Shavit proposed a way to implement a lock-free hashtable. They used an ordering of elements in a linked list that allows buckets to be repeatedly “split” with a single compare-and-swap operation. All in all, they took a quite different approach to implementing this concurrent data structure, and the paper is highly recommended reading.

Also, the source code for this article is available on GitHub.


Published at DZone with permission of Ali Dehghani . See the original article here.

