
Be Aware of ForkJoinPool#commonPool()

Learn more about how to deal with thread-pools in Java.


Let's focus today on a truly hidden feature of the JDK. Very often, we use built-in constructs or frameworks that offer some functionality based on parallel processing. In most cases, we are allowed to specify our own thread-pool to be used during the parallel processing, but sometimes we don't want to specify one and just use the default for the given library. Every library has its own approach to defining its default thread-pool. For instance, the Spring Framework in the majority of cases uses an executor that is not really a pool at all: it just creates a new thread per task. This article shows how this is handled in the JDK itself. Stay tuned, it's definitely not boring :)

ForkJoinPool#commonPool Introduction

Let's start with a very brief introduction and then go straight to some examples. ForkJoinPool#commonPool() is a static thread-pool that is lazily initialized when it is actually needed. Two major JDK features use the commonPool: CompletableFuture and Parallel Streams. There is one small difference between them: with CompletableFuture you are able to specify your own thread-pool and avoid using the threads from the commonPool; with Parallel Streams you cannot.
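
That difference is easy to see by printing the executing thread. A minimal sketch (the class and pool names are mine, not from the JDK; thread names are implementation details and may differ):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolChoice {
    public static void main(String[] args) {
        // No executor argument: the task lands on ForkJoinPool.commonPool()
        // (as long as its parallelism is greater than 1).
        String defaultThread = CompletableFuture
                .supplyAsync(() -> Thread.currentThread().getName())
                .join();

        // Explicit executor argument: the task runs on our own pool instead.
        ExecutorService ownPool = Executors.newFixedThreadPool(2);
        String ownThread = CompletableFuture
                .supplyAsync(() -> Thread.currentThread().getName(), ownPool)
                .join();
        ownPool.shutdown();

        System.out.println("default executor thread: " + defaultThread);
        System.out.println("custom executor thread:  " + ownThread);
    }
}
```

Parallel streams offer no such overload; they always draw their workers from the commonPool.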

Why shouldn't we simply use commonPool in all cases? Don't we create overhead when we create an additional thread-pool? Yes, we definitely do; if you want to read more about thread overhead, please see the article How Much Memory Java Thread Takes. The key thing to remember when deciding whether to use commonPool is the purpose of the task passed to the thread-pool. In general, there are two types of tasks: computational and blocking.

In the case of a computational task, we create a task that absolutely avoids any blocking, such as I/O operations (database calls, synchronization, thread sleep, etc.). The trick is that it does not matter which thread your task runs on: you keep your CPU busy and don't wait for any resources. In that case, feel free to use commonPool to execute your work.
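
For illustration, here is a purely computational task that is a good fit for the commonPool (the class name and the numbers are arbitrary choices of mine):

```java
import java.util.stream.LongStream;

public class CpuBoundExample {
    public static void main(String[] args) {
        // Pure computation: no I/O, no sleeping, no locks. The commonPool
        // workers stay busy instead of sitting in a blocked state.
        long sumOfSquares = LongStream.rangeClosed(1, 1_000_000)
                .parallel()               // fans the work out across commonPool workers
                .map(n -> n * n)
                .sum();
        System.out.println("sum of squares: " + sumOfSquares);
    }
}
```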

However, if you intend to use commonPool for blocking tasks, you need to consider the consequences. The commonPool is automatically sized to one thread fewer than the number of available CPUs; with, say, 3 available CPUs, that means just 2 threads, and by keeping them in a blocked state you can very easily stall any other part of your system that uses the commonPool at the same time. As a rule of thumb, create your own thread-pool for blocking tasks and keep the rest of the system separated and predictable.
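
A minimal sketch of that rule of thumb (the pool size of 20 and the class name are my own illustrative choices, not a recommendation from the JDK):

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class DedicatedBlockingPool {
    // Sized for waiting, not for the CPU count: threads blocked here
    // cannot starve parallel streams or other commonPool users.
    private static final ExecutorService IO_POOL = Executors.newFixedThreadPool(20);

    public static void main(String[] args) {
        List<CompletableFuture<Void>> futures = IntStream.range(0, 100)
                .mapToObj(i -> CompletableFuture.runAsync(
                        DedicatedBlockingPool::blockingOperation, IO_POOL))
                .collect(Collectors.toList());
        futures.forEach(CompletableFuture::join);
        IO_POOL.shutdown();
        System.out.println("all blocking tasks finished");
    }

    private static void blockingOperation() {
        try {
            Thread.sleep(100); // stands in for an I/O call
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```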

Go Straight to Examples

Let's move to a more interesting part of this article: hidden pitfalls of commonPool that share the same root cause, namely the calculation of how many threads commonPool is supposed to use. This value is automatically computed by the JVM based on the number of available cores.

import java.time.Duration;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ForkJoinPool;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class CommonPoolTest {

    public static void main(String[] args) {
        System.out.println("CPU Core: " + Runtime.getRuntime().availableProcessors());
        System.out.println("CommonPool Parallelism: " + ForkJoinPool.commonPool().getParallelism());
        System.out.println("CommonPool Common Parallelism: " + ForkJoinPool.getCommonPoolParallelism());

        long start = System.nanoTime();
        // Submit 100 blocking tasks; with no executor argument they all
        // compete for the commonPool (when its parallelism is > 1).
        List<CompletableFuture<Void>> futures = IntStream.range(0, 100)
                .mapToObj(i -> CompletableFuture.runAsync(CommonPoolTest::blockingOperation))
                .collect(Collectors.toList());
        futures.forEach(CompletableFuture::join);
        System.out.println("Processed in " + Duration.ofNanos(System.nanoTime() - start).toSeconds() + " sec");
    }

    private static void blockingOperation() {
        try {
            Thread.sleep(1000); // a 1-second blocking call
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
As you can see, the implementation above is a very simple piece of blocking code: 100 tasks, each executing a 1-second blocking call. Let's look at the results:

docker run -it --cpus 4 -v ${PWD}:/app --workdir /app adoptopenjdk/openjdk11 java CommonPoolTest.java
CPU Core: 4
CommonPool Parallelism: 3
CommonPool Common Parallelism: 3
Processed in 34 sec

We dedicated 4 CPUs to this run, and the program finished in 34 seconds. We can see that the JVM automatically discovered that it was executed in a Docker container, limited the number of CPUs to 4, and dedicated 3 threads to execution: 100 one-second tasks spread over 3 threads take roughly ceil(100/3) = 34 seconds.

docker run -it --cpus 2 -v ${PWD}:/app --workdir /app adoptopenjdk/openjdk11 java CommonPoolTest.java
CPU Core: 2
CommonPool Parallelism: 1
CommonPool Common Parallelism: 1
Processed in 1 sec

In the second example, we used only 2 CPUs, and we can see that the JVM automatically limited the parallelism to 1. But wait: 1 second? What actually happened under the hood?

There are three modes commonPool can end up in:

  • parallelism ≥ 2 — the JDK creates (# of CPUs - 1) threads for the commonPool

  • parallelism = 1 — the JDK creates a new thread for every submitted task

  • parallelism = 0 — a submitted task is executed on the caller thread
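
You can observe which mode you are in by printing the executing thread. A small sketch (thread names are JDK implementation details and may differ between versions):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ForkJoinPool;

public class WhichMode {
    public static void main(String[] args) {
        System.out.println("parallelism: " + ForkJoinPool.getCommonPoolParallelism());
        // With parallelism > 1 this typically prints a name like
        // "ForkJoinPool.commonPool-worker-..."; with parallelism = 1
        // CompletableFuture falls back to a fresh thread per task,
        // so the name looks like "Thread-0" instead.
        CompletableFuture
                .runAsync(() -> System.out.println("ran on: " + Thread.currentThread().getName()))
                .join();
    }
}
```

That fallback is exactly why the 2-CPU run finished in 1 second: every one of the 100 tasks got its own thread, so all the 1-second sleeps ran concurrently.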

If you want to override this ergonomic behavior of the JDK, you can specify three system properties:

  • java.util.concurrent.ForkJoinPool.common.parallelism

  • java.util.concurrent.ForkJoinPool.common.threadFactory

  • java.util.concurrent.ForkJoinPool.common.exceptionHandler 
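
For example, the parallelism override can be checked like this (16 is an arbitrary value I chose; note that the property must be set before anything touches the commonPool for the first time, so in practice you would pass it on the command line as -Djava.util.concurrent.ForkJoinPool.common.parallelism=16):

```java
import java.util.concurrent.ForkJoinPool;

public class OverrideParallelism {
    public static void main(String[] args) {
        // Works only because nothing has initialized the common pool yet;
        // the -D command-line form is the safe way to set this.
        System.setProperty("java.util.concurrent.ForkJoinPool.common.parallelism", "16");
        System.out.println("parallelism: " + ForkJoinPool.getCommonPoolParallelism());
    }
}
```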

Shoot Yourself in the Foot With commonPool

I found two examples of how you can fail badly with commonPool in your application!

Always test your application when you change resources dedicated to Container/JVM 

As you can see above, the behavior is completely inverted: we increased the number of CPUs and got a significantly worse result, because our code is highly blocking. This can surprise you a lot when you have an application that, let's say, downloads tens of files over HTTP, and you add CPUs hoping to speed up a completely different part of the program. The result is the opposite: you make your application slower, because the JDK decided to use a real thread-pool instead of a thread-per-task strategy.

The Magic Called --cpu-shares (a Potential Bug)

docker run -it --cpu-shares 1023 -v ${PWD}:/app --workdir /app adoptopenjdk/openjdk11 java CommonPoolTest.java
CPU Core: 1
CommonPool Parallelism: 1
CommonPool Common Parallelism: 1
Processed in 1 sec

docker run -it --cpu-shares 1024 -v ${PWD}:/app --workdir /app adoptopenjdk/openjdk11 java CommonPoolTest.java
CPU Core: 8
CommonPool Parallelism: 7
CommonPool Common Parallelism: 7
Processed in 15 sec

docker run -it --cpu-shares 1025 -v ${PWD}:/app --workdir /app adoptopenjdk/openjdk11 java CommonPoolTest.java
CPU Core: 2
CommonPool Parallelism: 1
CommonPool Common Parallelism: 1
Processed in 1 sec

The --cpu-shares 1024 option breaks the container-awareness of the JVM, and it reports the number of cores of the host. The likely reason: 1024 is the default value of CPU shares, so the JVM cannot distinguish "exactly 1024 shares configured" from "no limit configured at all" and falls back to the host's CPU count.

That's all. Enjoy using commonPool in your app, and I hope you picked up some hints today that reduce the probability of running into interesting/undesirable results. Thank you for reading my article, and please leave comments below. If you would like to be notified about new posts, then start following me on Twitter.


Opinions expressed by DZone contributors are their own.
