How CAS (Compare And Swap) in Java works
Before we dig into the CAS (Compare And Swap) strategy and how it is used by atomic constructs like AtomicInteger, first consider this code:
```java
public class MyApp {
    private volatile int count = 0;

    public void updateVisitors() {
        ++count; // increment the visitor count
    }
}
```
This sample code tracks the count of visitors to the application. Is there anything wrong with it? What happens if multiple threads try to update count at the same time? The problem is that marking count as volatile guarantees visibility but does not guarantee atomicity: ++count is a read-modify-write sequence, not a single atomic operation.
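To see why ++count is not atomic, note that it expands into three separate steps: read the value, add one, write it back. The sketch below simulates by hand (with local variables standing in for two threads, since a real race is nondeterministic) the interleaving that loses an update:

```java
public class LostUpdateDemo {
    static volatile int count = 0;

    public static void main(String[] args) {
        // ++count is really three steps: read, add one, write back.
        // Simulate two threads that both read before either writes:
        int seenByA = count;     // "thread A" reads 0
        int seenByB = count;     // "thread B" reads 0
        count = seenByA + 1;     // A writes 1
        count = seenByB + 1;     // B writes 1 -- A's increment is lost
        System.out.println(count); // prints 1, not 2
    }
}
```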
Can we solve this problem by marking the method itself synchronized, as shown below?
```java
public class MyApp {
    private int count = 0;

    public synchronized void updateVisitors() {
        ++count; // increment the visitor count
    }
}
```
Will this work? If so, what have we actually changed?
Does this code guarantee atomicity? Yes.
Does this code guarantee visibility? Yes.
Then what is the problem?
It makes use of locking, and locking introduces significant delay and overhead. This is a very expensive way of making things work.
To overcome these problems, atomic constructs were introduced. If we use an AtomicInteger to track the count, it works:
```java
public class MyApp {
    private final AtomicInteger count = new AtomicInteger(0);

    public void updateVisitors() {
        count.incrementAndGet(); // increment the visitor count atomically
    }
}
```
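A quick way to convince yourself the atomic version holds up under concurrency is to hammer the counter from several threads and check the total. A minimal sketch (class name and thread counts are arbitrary):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCounterDemo {
    public static void main(String[] args) throws InterruptedException {
        AtomicInteger count = new AtomicInteger(0);
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 10_000; j++) {
                    count.incrementAndGet(); // atomic read-modify-write
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join(); // wait for all updates to finish
        }
        System.out.println(count.get()); // always 40000, no lost updates
    }
}
```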
The classes that support atomic operations (AtomicInteger, AtomicLong, etc.) make use of CAS. CAS does not rely on locking; rather, it is optimistic in nature. It follows these steps:
- Compare the current value of the variable to the value we have in hand.
- If the values do not match, some other thread has changed the value in between, so retry. Otherwise, go ahead and swap in the new value.
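The steps above are exactly what a hand-rolled CAS retry loop looks like when built on the public compareAndSet method. The helper name multiplyBy below is hypothetical, chosen just to show a CAS loop for an operation the Atomic classes do not provide out of the box:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasLoopDemo {
    // Hypothetical helper: multiply an atomic value using a CAS retry loop.
    static int multiplyBy(AtomicInteger value, int factor) {
        for (;;) {
            int current = value.get();       // read the current value
            int next = current * factor;     // compute the new value
            if (value.compareAndSet(current, next)) {
                return next;                 // swap succeeded: nobody interfered
            }
            // CAS failed: another thread changed the value in between; retry
        }
    }

    public static void main(String[] args) {
        AtomicInteger n = new AtomicInteger(7);
        System.out.println(multiplyBy(n, 6)); // prints 42
    }
}
```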
Check the following code from the AtomicLong class:
```java
public final long incrementAndGet() {
    for (;;) {
        long current = get();
        long next = current + 1;
        if (compareAndSet(current, next))
            return next;
    }
}
```
In JDK 8 the above code was changed to a single intrinsic:
```java
public final long incrementAndGet() {
    return unsafe.getAndAddLong(this, valueOffset, 1L) + 1L;
}
```
What advantage does this single intrinsic have?
This single line is a JVM intrinsic that the JIT translates into an optimized instruction sequence. On the x86 architecture it becomes a single CPU instruction, LOCK XADD, which may yield better performance than the classic load/CAS loop.
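The get-then-add semantics behind the intrinsic are visible through the public API as well: getAndAdd returns the old value, so adding the delta back gives the same result incrementAndGet would. A quick check:

```java
import java.util.concurrent.atomic.AtomicLong;

public class GetAndAddDemo {
    public static void main(String[] args) {
        AtomicLong a = new AtomicLong(10);
        long old = a.getAndAdd(1L);  // atomically add 1, return the previous value
        System.out.println(old);     // 10
        System.out.println(a.get()); // 11 -- same final state incrementAndGet leaves
    }
}
```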
Now consider the case of high contention, where many threads want to update the same atomic variable. There, locking may actually outperform the atomic variables, but at realistic contention levels atomic variables outperform locks. Java 8 also introduced one more construct, LongAdder. As per the documentation:
This class is usually preferable to AtomicLong when multiple threads update a common sum that is used for purposes such as collecting statistics, not for fine-grained synchronization control. Under low update contention, the two classes have similar characteristics. But under high contention, expected throughput of this class is significantly higher, at the expense of higher space consumption.
So LongAdder is not always a replacement for AtomicLong. We need to consider the following aspects:
- When no contention is present AtomicLong performs better.
- LongAdder allocates Cells (a final class declared in the abstract class Striped64) to avoid contention, which consumes memory. So if we have a tight memory budget, we should prefer AtomicLong.
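For the statistics-gathering case the documentation describes, the usage pattern is simple: call increment() (or add()) from many threads and read sum() once at the end. A minimal sketch:

```java
import java.util.concurrent.LongAdder;

public class LongAdderDemo {
    public static void main(String[] args) throws InterruptedException {
        LongAdder hits = new LongAdder();
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 10_000; j++) {
                    hits.increment(); // updates one of several striped cells
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        System.out.println(hits.sum()); // sums all cells: 40000
    }
}
```

Note that sum() is not a snapshot taken under a lock; it is accurate here only because all writer threads have been joined first.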
That's all folks. Hope you enjoyed it.