Write Combining
Modern CPUs employ lots of techniques to counteract the latency cost of
going to main memory. These days CPUs can process hundreds of
instructions in the time it takes to read or write data to the DRAM
memory banks.
The major tool used to hide this latency is multiple layers of SRAM
cache. In addition, SMP systems employ message passing protocols to
achieve coherence between caches. Unfortunately, CPUs are now so fast
that even these caches cannot always keep up. So, to further hide this
latency, a number of less well-known buffers are used.
This article explores “write combining store buffers” and how we can write code that uses them effectively.
CPU caches are effectively unchained hash maps where each bucket is
typically 64 bytes. This is known as a “cache line”, and the cache line is
the effective unit of memory transfer. For example, an address A in
main memory hashes to map to a given cache line C.
If a CPU needs to work with an address which hashes to a line that is
not already in cache, then the existing line that matches that hash
needs to be evicted so the new line can take its place. For example, if
we have two addresses which both map via the hashing algorithm to the
same cache line, then the old one must make way for the new one.
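As a rough illustration, the sketch below shows how an address selects a cache line in a simple model. The line size, cache size, and associativity here are assumptions for the sake of the example (64-byte lines, a 32 KB, 8-way L1 data cache); real geometries vary by CPU.

public final class CacheLineMapping
{
    private static final int LINE_SIZE = 64;          // assumed cache line size in bytes
    private static final int CACHE_SIZE = 32 * 1024;  // assumed 32 KB L1 data cache
    private static final int WAYS = 8;                 // assumed 8-way set associative
    private static final int SETS = CACHE_SIZE / (LINE_SIZE * WAYS);

    // Two addresses that map to the same set compete for the same small group of
    // lines, so storing to one can force the other to be evicted.
    public static int setIndexFor(final long address)
    {
        return (int)((address / LINE_SIZE) % SETS);
    }

    public static void main(final String[] args)
    {
        final long addressA = 0x12340L;
        final long addressB = addressA + (long)SETS * LINE_SIZE; // deliberately conflicting address
        System.out.println("A -> set " + setIndexFor(addressA));
        System.out.println("B -> set " + setIndexFor(addressB)); // same set as A
    }
}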
When a CPU executes a store operation, it will try to write the data to
the L1 cache nearest to the CPU. If a cache miss occurs at this stage,
the CPU goes out to the next layer of cache. At this point, on Intel and
many other CPUs, a technique known as “write combining” comes into
play.
While the request for ownership of the L2 cache line is outstanding, the
data to be stored is written to one of a number of cache-line-sized
store buffers on the processor itself. These on-chip buffers allow the
CPU to continue processing instructions while the cache sub-system gets
ready to receive and process the data. The biggest advantage comes when
the data is not present in any of the other cache layers.
These buffers become very interesting when subsequent writes happen to
require the same cache line. The subsequent writes can be combined into
the buffer before it is committed to the L2 cache. These 64-byte
buffers maintain a 64-bit field with one bit per byte; the corresponding
bit is set for each byte that is updated, indicating which data is valid
when the buffer is transferred to the outer caches.
Hang on, I hear you say. What happens if the program wants to read some
of the data that has been written to a buffer? Well, our hardware
friends have thought of that, and they will snoop the buffers before they
read the caches.
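As a software analogy only (a sketch of the bookkeeping described above, not the actual hardware), a write combining buffer can be pictured as a cache-line-sized byte array plus a 64-bit valid mask, with reads snooping the buffer before falling back to the caches:

// Toy model of one write combining buffer: 64 bytes of data plus a 64-bit mask
// with one bit per byte that has been written.
public final class WriteCombiningBufferModel
{
    private final byte[] data = new byte[64];
    private long validMask = 0L;

    public void write(final int offsetInLine, final byte value)
    {
        data[offsetInLine] = value;
        validMask |= 1L << offsetInLine;  // mark this byte as valid
    }

    public boolean isFull()
    {
        return validMask == -1L;          // all 64 bits set: the whole line has been written
    }

    // A read must see pending writes, so the buffer is checked before the caches.
    public Byte snoop(final int offsetInLine)
    {
        if (((validMask >>> offsetInLine) & 1L) == 1L)
        {
            return data[offsetInLine];
        }
        return null;
    }
}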
What does all this mean for our programs?
If we can fill these buffers before they are transferred to the outer
caches, then we will greatly improve the effective use of the transfer
bus at every level. How do we do this? Well, programs spend most of
their time in loops doing work.
There are a limited number of these buffers, and the number differs by CPU
model. For example, on an Intel CPU you are only guaranteed to get 4 of
them at one time. What this means is that, within a loop, you should not
write to more than 4 distinct memory locations at one time or you will
not benefit from the write combining effect.
What does this look like in code?
import static java.lang.System.out;

public final class WriteCombining
{
    private static final int ITERATIONS = Integer.MAX_VALUE;
    private static final int ITEMS = 1 << 24;
    private static final int MASK = ITEMS - 1;

    private static final byte[] arrayA = new byte[ITEMS];
    private static final byte[] arrayB = new byte[ITEMS];
    private static final byte[] arrayC = new byte[ITEMS];
    private static final byte[] arrayD = new byte[ITEMS];
    private static final byte[] arrayE = new byte[ITEMS];
    private static final byte[] arrayF = new byte[ITEMS];

    public static void main(final String[] args)
    {
        for (int i = 1; i <= 3; i++)
        {
            out.println(i + " SingleLoop duration (ns) = " + runCaseOne());
            out.println(i + " SplitLoop duration (ns) = " + runCaseTwo());
        }

        int result = arrayA[1] + arrayB[2] + arrayC[3] + arrayD[4] + arrayE[5] + arrayF[6];
        out.println("result = " + result);
    }

    // Writes to 6 distinct arrays per iteration: more write streams than
    // the available write combining buffers.
    public static long runCaseOne()
    {
        long start = System.nanoTime();

        int i = ITERATIONS;
        while (--i != 0)
        {
            int slot = i & MASK;
            byte b = (byte)i;
            arrayA[slot] = b;
            arrayB[slot] = b;
            arrayC[slot] = b;
            arrayD[slot] = b;
            arrayE[slot] = b;
            arrayF[slot] = b;
        }

        return System.nanoTime() - start;
    }

    // Splits the same work into two loops of 3 write streams each,
    // staying within the write combining buffer limit.
    public static long runCaseTwo()
    {
        long start = System.nanoTime();

        int i = ITERATIONS;
        while (--i != 0)
        {
            int slot = i & MASK;
            byte b = (byte)i;
            arrayA[slot] = b;
            arrayB[slot] = b;
            arrayC[slot] = b;
        }

        i = ITERATIONS;
        while (--i != 0)
        {
            int slot = i & MASK;
            byte b = (byte)i;
            arrayD[slot] = b;
            arrayE[slot] = b;
            arrayF[slot] = b;
        }

        return System.nanoTime() - start;
    }
}
This program on my Windows 7 64-bit Intel Core i7 860 @ 2.8 GHz system produces the following output:
1 SingleLoop duration (ns) = 14019753545
1 SplitLoop duration (ns) = 8972368661
2 SingleLoop duration (ns) = 14162455066
2 SplitLoop duration (ns) = 8887610558
3 SingleLoop duration (ns) = 13800914725
3 SplitLoop duration (ns) = 7271752889
To spell it out: if we write to 6 array locations (memory addresses)
inside one loop, the program takes significantly longer than
if we split the work up and write first to 3 array locations, then to
the other 3 locations sequentially.
By splitting the loop we do much more work, yet the program completes in
much less time! Welcome to the magic of “write combining”. By using
our knowledge of the CPU architecture to fill those buffers properly, we
can use the underlying hardware to accelerate our code by nearly a factor
of two.
Don’t forget that with hyper-threading you can have 2 threads in competition for these buffers on the same core.
From http://mechanical-sympathy.blogspot.com/2011/07/write-combining.html