
Java Concurrency in Depth (Part 2)

As we continue our exploration of concurrency in Java, let's see what happens when synchronized isn't enough and you need to dive into locks.

In the first part of this series, Java Concurrency in Depth (Part 1), I discussed the internals of Java's synchronization primitives (synchronized, volatile, and the atomic classes) and the pros and cons of each of them.

In this article, I will discuss other high-level locks that are built on top of volatile, the atomic classes, and Compare-And-Swap (CAS).

The primary reason for writing a multi-threaded application is to improve performance. In doing so, we trade that performance gain for more complexity in code, debugging, and monitoring. When there is a shared object, locking becomes inevitable, and that shared object becomes the bottleneck where performance issues, among others, show up. So, choosing the right locking mechanism can have a profound impact on performance and operations.

But why is synchronized not enough?

  1. Synchronized is an exclusive locking mechanism that cannot be tailored to different use cases.

  2. There is no way to instruct the JVM to use a fair policy.

  3. When deadlock or starvation occurs, the only way to resolve the problem is to kill the process, because threads block indefinitely and there is no way to interrupt a blocked thread in this case.

Lock

Having an interface like java.util.concurrent.locks.Lock provides great flexibility to have different implementations tailored to different use cases, as well as overcoming the drawbacks of synchronized. The interface defines the following methods:

  • lock(): The thread blocks indefinitely until it acquires the lock. No way to interrupt the blocked thread.

  • lockInterruptibly(): The thread blocks indefinitely until it acquires the lock. The blocked thread can be interrupted.

  • tryLock(): Non-blocking call. Returns immediately with true if the lock is acquired and false otherwise.

  • tryLock(long, TimeUnit): Blocking call until the lock is acquired or the specified timeout has elapsed. Returns true if the lock is acquired and false otherwise. A thread blocked in this call can also be interrupted.

  • unlock(): Releases the lock. It is important to ALWAYS call it in a finally block.

  • newCondition(): This is one of the most useful features. In some use cases, the thread needs to wait until a certain condition is satisfied, and during this waiting time, it is useful to release the lock for other threads to continue execution. This is similar to Object.wait and Object.notify/notifyAll, which work with synchronized.
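As a sketch of newCondition() in action, here is a minimal bounded buffer (BoundedBuffer is a hypothetical class for illustration): take() releases the lock while it waits on a condition, just as Object.wait releases a synchronized monitor.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class BoundedBuffer<T> {
    private final Deque<T> items = new ArrayDeque<>();
    private final int capacity;
    private final Lock lock = new ReentrantLock();
    private final Condition notEmpty = lock.newCondition();
    private final Condition notFull = lock.newCondition();

    public BoundedBuffer(int capacity) {
        this.capacity = capacity;
    }

    public void put(T item) throws InterruptedException {
        lock.lock();
        try {
            while (items.size() == capacity) {
                notFull.await(); // releases the lock while waiting
            }
            items.addLast(item);
            notEmpty.signal(); // wake up one waiting consumer
        } finally {
            lock.unlock();
        }
    }

    public T take() throws InterruptedException {
        lock.lock();
        try {
            while (items.isEmpty()) {
                notEmpty.await(); // releases the lock while waiting
            }
            T item = items.removeFirst();
            notFull.signal(); // wake up one waiting producer
            return item;
        } finally {
            lock.unlock();
        }
    }
}
```

Note the condition is always re-checked in a while loop after await() returns, because spurious wakeups are permitted.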

ReentrantLock

ReentrantLock implements the Lock interface. It has two constructors to choose from, depending on whether you need a fair lock or a non-fair one.
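As a quick illustration of the two constructors (the class name here is hypothetical):

```java
import java.util.concurrent.locks.ReentrantLock;

public class LockConstructors {
    public static void main(String[] args) {
        ReentrantLock nonFair = new ReentrantLock();   // no-arg constructor: non-fair (the default)
        ReentrantLock fair = new ReentrantLock(true);  // pass true for a fair lock
        System.out.println(nonFair.isFair()); // false
        System.out.println(fair.isFair());    // true
    }
}
```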

It is important to know that both the fair and non-fair variants of ReentrantLock use a waiting queue to park blocked threads. This means:

  • Thread priority (which can be set using Thread.setPriority()) has no effect, so don't rely on it when using reentrant locks.

  • Fairness here is very simple — it is about getting a chance to execute. It's nothing like complex schedulers, which means it is still possible to cause starvation if a thread doesn't release the lock or holds it for a long time.

Fair and non-fair variants use AbstractQueuedSynchronizer, which implements the waiting queue. It is a variant of the CLH (Craig, Landin, and Hagersten) lock queue.

The logic behind granting locks:

  • Fair lock: grant the lock only if the call is recursive (i.e. the thread is already holding the lock) or there are no other waiting threads, or the thread is the first in the queue.

  • Non-fair lock: tries to acquire the lock and, if it can't, then the thread is queued.

Important Usage Guidelines

  • tryLock(), unlike the rest of the methods, acts in a non-fair way with both fair and non-fair implementations. So, pay attention when using it with a fair lock, as you might be expecting different behavior.

  • When using ReentrantLock, try to avoid using lock(). Use any of the other methods and, depending on the use case, you might add an exponential backoff, a maximum number of retries, or both combined. Even if the use case requires blocking, for instance until data is available, use the lockInterruptibly() or tryLock(long, TimeUnit) methods.

boolean acquired = false;
long wait = 100;
int retries = 0;
int maxRetries = 10;
try {
    while (!acquired && retries < maxRetries) {
        acquired = lock.tryLock(wait, TimeUnit.MILLISECONDS);
        wait *= 2; // exponential backoff
        ++retries;
    }
    if (!acquired) {
        // log error or throw exception
    }
} catch (InterruptedException e) {
    Thread.currentThread().interrupt(); // restore the interrupt status
    // log error or throw exception
} finally {
    if (acquired) { // unlock only if the lock was actually acquired
        lock.unlock();
    }
}


  • One bad practice, in general, is ignoring interrupts. Interrupts should be handled properly to avoid problems with application or thread pool termination. Even when you're sure that you need to ignore one, at least log the exception; don't just swallow it.

  • The class provides many methods that, as per the Javadoc, should be used only for debugging and instrumentation and not for synchronization purposes. My advice is to keep programming to the interface, because you don't need to pollute your code with such granular debug information that you can actually get using a decent profiler.

ReentrantReadWriteLock

ReentrantReadWriteLock actually consists of two reentrant locks: a read lock and a write lock. The read lock is shared (which means multiple threads can acquire it as long as the write lock is not acquired), whereas the exclusive write lock can be acquired by one thread only as long as the read lock is not acquired by any other thread. This is very useful in many use cases and helps increase concurrency. 

This also has both fair and non-fair policies.
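Here is a minimal sketch of a read-mostly cache guarded by a ReentrantReadWriteLock (CachedConfig is a hypothetical example class). Multiple readers can call get() concurrently, while put() takes the exclusive write lock.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class CachedConfig {
    private final Map<String, String> settings = new HashMap<>();
    private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();

    public String get(String key) {
        rwLock.readLock().lock();   // shared: concurrent reads are allowed
        try {
            return settings.get(key);
        } finally {
            rwLock.readLock().unlock();
        }
    }

    public void put(String key, String value) {
        rwLock.writeLock().lock();  // exclusive: blocks readers and other writers
        try {
            settings.put(key, value);
        } finally {
            rwLock.writeLock().unlock();
        }
    }
}
```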

Other Useful Synchronization Tools

  • CountDownLatch: This is useful when one or more threads need to wait until a set of operations completes. A count is passed to the constructor and it can be decremented by calling countDown(). When the count goes to zero, waiting threads are notified.

  • CyclicBarrier: This is useful when a set of threads are required to wait until they reach a common point. The same behavior could be achieved using CountDownLatch, but CyclicBarrier has some additional options:

    • The count in CyclicBarrier can be reset.

    • CyclicBarrier accepts an optional Runnable implementation that will execute when the barrier is tripped.

    • A barrier is considered broken if any of the threads leaves the barrier because of interruption, failure, or timeout. When this happens, all other waiting threads will leave the barrier by throwing BrokenBarrierException. The state of the barrier can be tested using isBroken(). This state is kept until reset() is called.

  • StampedLock: Yet another implementation of a read-write lock. It doesn't implement the Lock interface, and it is more complicated to use than ReentrantReadWriteLock. It has a very interesting feature — Optimistic Reading — but it is quite tricky and fragile to use. I strongly recommend using ReentrantReadWriteLock instead of StampedLock.
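To illustrate the CountDownLatch pattern described above, here is a minimal sketch (LatchDemo and runWorkers are hypothetical names): the main thread blocks in await() until every worker thread has called countDown().

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

public class LatchDemo {
    static int runWorkers(int workers) throws InterruptedException {
        CountDownLatch done = new CountDownLatch(workers);
        AtomicInteger completed = new AtomicInteger();
        for (int i = 0; i < workers; i++) {
            new Thread(() -> {
                completed.incrementAndGet(); // simulate some work
                done.countDown();            // decrement the latch count
            }).start();
        }
        done.await(); // blocks until the count reaches zero
        return completed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runWorkers(3) + " workers finished");
    }
}
```

Because each worker calls countDown() after its work, and await() establishes a happens-before edge with those calls, the main thread is guaranteed to see all of the workers' updates.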
