JavaOne: Brian Goetz on concurrency in Java 7
brian's talk focused pretty much exclusively on the new fork-join framework that will be added to java 7 as part of the jsr 166 maintenance update. there are a few other little goodies in the update but this is the big one.
he started with an overview of how times have changed since the initial jsr 166 release in java 5. at that time, the focus was on providing tools and utilities to help write concurrent programs on boxes with a small number of cores, as such machines were becoming prevalent.
brian showed a graph from herb sutter's article "the free lunch is over" of the clock speed and transistor count of intel chips over time.
as you can see, cpu clock speeds stopped increasing around 2003, although transistor counts per chip continue to follow the trend. clearly the future holds an increasing number of cores, not increasing single-core speed. the tools we got in java 5 are good enough to find coarse-grained parallelism (usually at the unit of a user request) and spread it over a small number of cores (2-8). however, these tools do not scale up to many-core boxes, which will become increasingly prevalent. the shared queues and other infrastructure used by executors and thread pools become points of contention and reduce scalability.
the fork-join framework is designed to address exactly the kind of fine-grained parallelism that will be needed to keep all your cores cranking away on cpu-intensive tasks. if you aren't doing that, you're wasting cycles. fork-join is a divide-and-conquer style framework that is easy to use and provides a high degree of fine-grained parallelism.
the forkjoinexecutor allows you to submit a task for processing. each task is broken (recursively) into smaller pieces until some minimum threshold is reached, at which point the work is done directly. each task must know how to break itself up.
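to make that shape concrete, here's a minimal sketch of such a task, written against the ForkJoinPool and RecursiveTask classes as they eventually shipped in java 7 (the class names in the preview jars differ slightly; SumTask and the threshold value here are just illustrative):

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// sums an array by recursively splitting until the chunk is small enough
class SumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 1000; // the "sequential threshold"
    private final long[] data;
    private final int lo, hi;

    SumTask(long[] data, int lo, int hi) {
        this.data = data;
        this.lo = lo;
        this.hi = hi;
    }

    @Override
    protected Long compute() {
        if (hi - lo <= THRESHOLD) {               // small enough: just do it
            long sum = 0;
            for (int i = lo; i < hi; i++) sum += data[i];
            return sum;
        }
        int mid = (lo + hi) >>> 1;                // otherwise split in half
        SumTask left = new SumTask(data, lo, mid);
        SumTask right = new SumTask(data, mid, hi);
        left.fork();                              // hand the left half to the pool
        return right.compute() + left.join();     // do the right half, then join
    }
}

public class ForkJoinDemo {
    public static void main(String[] args) {
        long[] data = new long[10000];
        for (int i = 0; i < data.length; i++) data[i] = i;
        ForkJoinPool pool = new ForkJoinPool();
        System.out.println(pool.invoke(new SumTask(data, 0, data.length)));
    }
}
```

the threshold controls where splitting stops; as brian's numbers later show, the framework is fairly forgiving about its exact value.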
the act of doing the task splitting is actually fairly boilerplate for many common cases, so they created a framework for this that looks like a functional api. you start with some kind of parallel*array object and then apply filters, mappings, aggregations, etc. to it. under the hood, everything is done with fork-join. if we get closures in java 7, that will dramatically simplify the api.
fork-join is actually implemented using an idea called "work-stealing". basically, every thread has its own deque (double-ended queue, pronounced "deck") and only that thread reads from the head of the queue. if a thread runs out of work, it steals work from the tail of someone else's queue. because the initial biggest jobs are placed at the tail of each queue, workers steal the biggest task available, which keeps them busy for longer. this further reduces queue contention and also provides built-in load balancing.
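as a toy illustration of that access pattern (not the real jsr 166 implementation, which uses a highly optimized array-based deque; ConcurrentLinkedDeque is used here purely for illustration), the owner and the thief work opposite ends:

```java
import java.util.Deque;
import java.util.concurrent.ConcurrentLinkedDeque;

public class StealDemo {
    public static void main(String[] args) {
        // the owning worker's deque: it pushes and pops subtasks at the head
        Deque<String> workerDeque = new ConcurrentLinkedDeque<String>();

        // as the worker splits tasks, smaller subtasks pile up at the head,
        // so the biggest remaining task drifts toward the tail
        workerDeque.addFirst("big-task");
        workerDeque.addFirst("medium-task");
        workerDeque.addFirst("small-task");

        // the owner works lifo from the head: small, recently created tasks
        String ownerNext = workerDeque.pollFirst();   // "small-task"

        // an idle thread steals from the tail: the biggest task available,
        // which keeps the thief busy longest and avoids contending with the owner
        String stolen = workerDeque.pollLast();       // "big-task"

        System.out.println(ownerNext + " / " + stolen);
    }
}
```

because the two ends are touched by different threads, the only time the owner and a thief can collide is when the deque is nearly empty, which is why contention stays so low.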
brian also showed some performance calculations on varying numbers of cores and varying sequential thresholds. in particular, he showed very good speedup with a sweet-spot threshold (15x improvement on 32 cores), but also showed that even if you guess the threshold really wrong, you still get ok results.
i used fork-join back when i did my mandelbrot presentation last fall and thought it was pretty cool. i was not aware of the parallel array stuff at the time, as the docs were really hard to understand or missing. i'm looking forward to seeing brian's slides once they're released to see how i could have better written the mandelbrot program.
it's nice to see that this library doesn't depend on java 7 either - you can get it and use it now, so we don't have to wait till java 7, whatever decade that arrives in.