Async Programming in Java: Part II
We will take a deep look at how the executor service works internally and at the various factory methods provided by the Executor framework.
This is part II of my previous article, Async Programming in Java: Part I. There, we saw the various ways to create threads and run them, and how the Java Executor framework helps us create, run, and manage threads.
In this article, we will take a deep look at how the executor service works internally and at the various factory methods provided by the Executor framework.
Executors
One of the most important classes in the Java concurrency package is java.util.concurrent.Executors. It contains factory and utility methods for the ExecutorService. Let's start with how the executor service manages and assigns tasks to threads; later, we will discuss the factory methods of Executors.
ExecutorService
In simple words, the executor service maintains a pool of threads and assigns tasks to them. There are three main components in the ExecutorService:
- ThreadFactory
- Blocking Queue
- ThreadPoolExecutor
ThreadFactory
This class creates and provides threads on demand, removing boilerplate code like new Thread(). Executors provides a default thread factory out of the box.
We also have options like CustomizableThreadFactory in Spring, which adds a prefix to the created threads' names. When we work with multiple threads and multiple use cases, CustomizableThreadFactory is extremely helpful for debugging. (This is not the only way to add a prefix to a thread's name, though.)
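As an illustration (a minimal sketch, not Spring's implementation; the class name and prefix are hypothetical), a custom ThreadFactory that prefixes thread names can be as simple as:

import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal ThreadFactory that gives every created thread a prefixed, numbered name.
public class NamedThreadFactory implements ThreadFactory {

    private final String prefix;
    private final AtomicInteger counter = new AtomicInteger(1);

    public NamedThreadFactory(String prefix) {
        this.prefix = prefix;
    }

    @Override
    public Thread newThread(Runnable task) {
        // Each pool thread gets a readable name like "order-worker-1".
        return new Thread(task, prefix + counter.getAndIncrement());
    }
}

Any factory method that accepts a ThreadFactory can use it, for example Executors.newFixedThreadPool(4, new NamedThreadFactory("order-worker-")).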
Blocking Queue
BlockingQueue is a thread-safe queue in the java.util.concurrent package. It supports all the Queue features and adds operations that wait for the queue to become non-empty when retrieving an element and wait for space to become available in the queue when storing an element.
The blocking queue is designed with the producer-consumer pattern in mind. It also implements the Collection interface.
Here, the program code adds tasks to the queue (producer), and the executor takes and executes the tasks (consumer).
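To make the producer-consumer behavior concrete, here is a minimal sketch (class and task names are illustrative) using a bounded ArrayBlockingQueue:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// put() waits while the queue is full; take() waits while it is empty.
public class ProducerConsumerDemo {

    public static void main(String[] args) {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(2);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 1; i <= 5; i++) {
                    queue.put("task-" + i); // blocks when the queue is full
                    System.out.println("produced task-" + i);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 1; i <= 5; i++) {
                    System.out.println("consumed " + queue.take()); // blocks when empty
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
    }
}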
ThreadPoolExecutor
This is the core part of the ExecutorService. ThreadPoolExecutor is the implementation of ExecutorService. It takes a ThreadFactory (or the default thread factory from Executors), a blocking queue, and a few other parameters.
See the image above. Let me break it into three steps:
Step 0 (Initialization): Depending on the type of ThreadPoolExecutor and its thread pool parameters, it gets threads from the thread factory and keeps a pool of threads.
Step 1 (Submission): When tasks (Runnable or Callable) are submitted to the ExecutorService, they are internally added to the blocking queue.
Step 2 (Execution): The ThreadPoolExecutor watches the BlockingQueue; if a task is available in the queue, it takes the task and assigns it to a thread in the thread pool, and that thread starts the execution. There are a few corner cases:
- Wait for Task: When there is no task available in the queue.
- Wait for Thread: When no thread is currently available to be assigned.
- Wait for Space: This happens with a size-bounded queue when it is full. This is rare; by default, the Executors factory methods supply a LinkedBlockingQueue, since Executors only worry about task insertion and removal. In such a case, a RejectedExecutionHandler will be useful (see the sketch below).
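Here is a minimal sketch of that last case (the pool sizes and tasks are illustrative): a small bounded queue combined with a RejectedExecutionHandler. CallerRunsPolicy simply runs a rejected task on the submitting thread instead of dropping it.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class RejectionDemo {

    public static void main(String[] args) {
        // At most 2 threads plus 2 queued tasks; anything beyond that is rejected
        // and handled by CallerRunsPolicy on the submitting (main) thread.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 2, 30, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(2),
                new ThreadPoolExecutor.CallerRunsPolicy());

        for (int i = 1; i <= 6; i++) {
            int id = i;
            pool.execute(() -> System.out.println(
                    "task " + id + " on " + Thread.currentThread().getName()));
        }
        pool.shutdown();
    }
}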
ThreadPool Parameters
You might have seen me mention a few other thread pool parameters. Let's discuss what they are and how they control the threads in the thread pool:
- corePoolSize — the number of threads to keep in the pool, even if they are idle, unless allowCoreThreadTimeOut is set
- maximumPoolSize — the maximum number of threads to allow in the pool
- keepAliveTime — when the number of threads is greater than the core, this is the maximum time that excess idle threads will wait for new tasks before terminating
- unit — the time unit for the keepAliveTime argument
An IllegalArgumentException will be thrown under the following conditions:
- corePoolSize < 0
- keepAliveTime < 0
- maximumPoolSize <= 0
- maximumPoolSize < corePoolSize
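As a quick sketch of how these parameters map onto the ThreadPoolExecutor constructor (the values below are only illustrative), and of the validation rules above:

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolParameterDemo {

    public static void main(String[] args) {
        // A valid configuration: 2 core threads, up to 4 threads,
        // idle non-core threads terminate after 60 seconds.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                            // corePoolSize
                4,                            // maximumPoolSize
                60, TimeUnit.SECONDS,         // keepAliveTime and its unit
                new LinkedBlockingQueue<>()); // work queue
        pool.shutdown();

        try {
            // maximumPoolSize (1) < corePoolSize (2) violates the rules above.
            new ThreadPoolExecutor(2, 1, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
        } catch (IllegalArgumentException e) {
            System.out.println("Rejected configuration: " + e);
        }
    }
}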
Factory Methods of Executors
We are always free to implement ExecutorService or extend ThreadPoolExecutor ourselves, but Executors provides a few factory methods to create instances of ExecutorService.
Executors.newFixedThreadPool(int nThreads)
Creates a thread pool that reuses a fixed number of threads. At any point, at most nThreads threads will be active processing tasks.
If additional tasks are submitted when all threads are active, they will wait in the queue until a thread is available.
Note: As I explained in the previous article, each Java thread is backed by an underlying OS thread, and how many of them can truly run in parallel depends on the number of CPU cores. As a best practice, it's recommended to use all available CPU cores for better performance. The core count can be found using Runtime.getRuntime().availableProcessors().
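For example, a fixed pool sized to the available cores might look like this (the task body is a placeholder):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class FixedPoolDemo {

    public static void main(String[] args) {
        // Size the pool to the number of available CPU cores.
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);

        for (int i = 1; i <= 10; i++) {
            int id = i;
            pool.submit(() -> System.out.println(
                    "task " + id + " on " + Thread.currentThread().getName()));
        }
        pool.shutdown(); // no new tasks accepted; queued tasks still run
    }
}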
Executors.newCachedThreadPool()
Creates a thread pool that creates new threads as needed but will reuse previously constructed threads when they are available.
These pools will typically improve the performance of programs that execute many short-lived asynchronous tasks.
This thread pool should be used with care. This is not suitable for long-running tasks.
By default, corePoolSize is set to 0 and maximumPoolSize is set to Integer.MAX_VALUE.
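A minimal usage sketch (the tasks are placeholders for short-lived work):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CachedPoolDemo {

    public static void main(String[] args) {
        ExecutorService pool = Executors.newCachedThreadPool();

        // Many short-lived tasks: idle threads are reused,
        // and new threads are created only when none are free.
        for (int i = 1; i <= 20; i++) {
            int id = i;
            pool.execute(() -> System.out.println(
                    "short task " + id + " on " + Thread.currentThread().getName()));
        }
        pool.shutdown();
    }
}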
Executors.newScheduledThreadPool(int corePoolSize)
This is very similar to the cached thread pool but can schedule commands to run after a given delay or to execute periodically. It returns a ScheduledExecutorService, whose implementation (ScheduledThreadPoolExecutor) is an extended version of ThreadPoolExecutor.
The corePoolSize is taken from the method parameter, and the maximumPoolSize is set to Integer.MAX_VALUE.
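A short sketch of both scheduling styles (the delays and periods below are arbitrary):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ScheduledPoolDemo {

    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(2);

        // Run once after a 1-second delay.
        scheduler.schedule(() -> System.out.println("delayed task"), 1, TimeUnit.SECONDS);

        // Run every 5 seconds, starting after an initial 2-second delay.
        scheduler.scheduleAtFixedRate(
                () -> System.out.println("periodic task"), 2, 5, TimeUnit.SECONDS);

        // In a real application, remember to shut the scheduler down when done.
    }
}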
Single Thread Executors
Executors also provides single-thread versions of all of the above, which hold only one thread in the thread pool; this is mostly used for unit testing.
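For example (task bodies are placeholders), a single-thread executor runs tasks one after another in submission order:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SingleThreadDemo {

    public static void main(String[] args) {
        // One worker thread: tasks execute sequentially, in submission order.
        ExecutorService single = Executors.newSingleThreadExecutor();

        single.submit(() -> System.out.println("first"));
        single.submit(() -> System.out.println("second"));
        single.submit(() -> System.out.println("third"));

        single.shutdown();
    }
}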
And one more interesting factory method is:
Executors.newWorkStealingPool()
This was introduced in Java 8 to achieve maximum parallelism by leveraging the work-stealing algorithm. It returns a ForkJoinPool, which works slightly differently from ThreadPoolExecutor; we will look at it in upcoming articles.
To give you some context, each submitted task can be divided into sub-tasks when a condition is met; the sub-tasks are executed and then joined together to produce the final result.
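As a small preview (a minimal sketch with trivial placeholder tasks; the ForkJoinPool details are left for the next articles):

import java.util.Arrays;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class WorkStealingDemo {

    public static void main(String[] args) throws Exception {
        // Parallelism defaults to the number of available processors.
        ExecutorService pool = Executors.newWorkStealingPool();

        List<Callable<Integer>> tasks = Arrays.asList(
                () -> 1 + 1,
                () -> 2 + 2,
                () -> 3 + 3);

        // invokeAll blocks until all tasks have completed.
        for (Future<Integer> result : pool.invokeAll(tasks)) {
            System.out.println(result.get());
        }
    }
}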
I think we had a long walk today; let's pause here. The next article will look at how to submit tasks to the ExecutorService and at their Future<T> results.