JVM C1, C2 Compiler Thread: High CPU Consumption?
In this post, let’s learn a little more about C1 and C2 compiler threads and how to address their high CPU consumption.
C1 and C2 compiler threads are created by the Java virtual machine to optimize your application's performance. Occasionally, these threads can consume a high amount of CPU.
After reading this post, terms like HotSpot JIT, C1 compiler threads, C2 compiler threads, and code cache should no longer intimidate you (as they once intimidated me).
What Is HotSpot JIT Compiler?
Your application may have millions of lines of code. However, only a small subset of that code gets executed again and again, and this small subset (known as the "HotSpot") largely determines your application's performance. At runtime, the JVM uses its JIT (just-in-time) compiler to optimize this HotSpot code. Most of the time, code written by application developers is not optimal, so the JVM's JIT compiler optimizes it for better performance. To do this optimization, the JIT compiler uses C1 and C2 compiler threads.
What Is Code Cache?
The memory area that the JIT compiler uses for this code compilation is called "code cache." This area resides outside of the JVM heap and metaspace. To learn about different JVM memory regions, you may refer to this video clip.
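You can observe the code cache at runtime through the standard MemoryPoolMXBean API. As a minimal sketch (pool names are JVM-specific: JDK 9+ segments the code cache into pools named "CodeHeap '...'", while older JVMs expose a single "Code Cache" pool):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class CodeCachePools {
    public static void main(String[] args) {
        // Print usage of every memory pool that belongs to the code cache.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            String name = pool.getName();
            if (name.contains("CodeHeap") || name.contains("Code Cache")) {
                System.out.printf("%s: used=%d bytes, max=%d bytes%n",
                        name, pool.getUsage().getUsed(), pool.getUsage().getMax());
            }
        }
    }
}
```

Running this inside your application (or a tiny test class) shows how much of the reserved code cache is actually used.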
What Is the Difference Between C1 and C2 Compiler Threads?
During the early days of Java, there were two types of JIT compilers: client and server.
Based on which JIT compiler you wanted to use, the appropriate JDK had to be downloaded and installed. If you were building a desktop application, you downloaded a JDK with the "client" JIT compiler. If you were building a server application, you downloaded a JDK with the "server" JIT compiler.
The client JIT compiler starts compiling the code as soon as the application starts. The server JIT compiler observes the code execution for quite some time and, based on the execution knowledge it gains, then starts doing the JIT compilation. Even though server JIT compilation is slower, the code it produces is far superior to the code produced by the client JIT compiler.
Today, modern JDKs ship with both client and server JIT compilers, and both try to optimize the application code. During application startup, code is compiled using the client JIT compiler. Later, as more knowledge is gained, code is recompiled using the server JIT compiler. This is called tiered compilation in the JVM.
Internally, JDK developers refer to the client and server JIT compilers as the C1 and C2 compilers. Thus, the threads used by the client JIT compiler are called C1 compiler threads, and the threads used by the server JIT compiler are called C2 compiler threads.
C1, C2 Compiler Threads Default Size
The default number of C1 and C2 compiler threads is determined by the number of CPUs available on the container/device in which your application is running. The table below summarizes the defaults:
Default C1, C2 compiler thread count
You can change the compiler thread count by passing the -XX:CICompilerCount=N JVM argument to your application. One-third of the count you specify in -XX:CICompilerCount is allocated to C1 compiler threads; the remaining threads are allocated to C2 compiler threads. Suppose you specify 6 threads (i.e., -XX:CICompilerCount=6): then 2 threads will be allocated to C1 compiler threads and 4 threads will be allocated to C2 compiler threads.
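The one-third/two-thirds split described above can be sketched as simple arithmetic (the exact heuristic is JVM-internal; this is only an illustration of the rule stated here):

```java
public class CompilerThreadSplit {
    // Sketch of the documented split: roughly one-third of the
    // -XX:CICompilerCount threads go to C1, the rest to C2.
    static int[] split(int ciCompilerCount) {
        int c1 = Math.max(1, ciCompilerCount / 3);
        int c2 = ciCompilerCount - c1;
        return new int[] { c1, c2 };
    }

    public static void main(String[] args) {
        int[] s = split(6);
        System.out.println("C1=" + s[0] + ", C2=" + s[1]); // prints C1=2, C2=4
    }
}
```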
C1, C2 Compiler Thread High CPU Consumption: Potential Solutions
Sometimes you might see C1 or C2 compiler threads consuming a high amount of CPU. When this type of problem surfaces, below are the potential solutions to address it:
1. Do Nothing (If Intermittent)
If your C2 compiler threads' CPU consumption is only intermittently high (not continuously high) and it doesn't hurt your application's performance, you can consider ignoring the problem.
2. Turn Off Tiered Compilation
You can pass the -XX:-TieredCompilation JVM argument to your application. This argument disables tiered compilation, which substantially reduces JIT compilation activity. Thus, CPU consumption will go down. However, as a side effect, your application's performance can degrade.
3. Turn Off C2 Compilation Only
If a CPU spike is caused by C2 compiler threads alone, you can turn off C2 compilation alone by passing -XX:TieredStopAtLevel=3. When you pass the -XX:TieredStopAtLevel argument with value 3, only C1 compilation is enabled and C2 compilation is disabled.
There are four tiers of compilation (besides level 0, where code runs purely in the interpreter):
Level 1: simple C1 compiled code
Level 2: limited C1 compiled code
Level 3: full C1 compiled code
Level 4: full C2 compiled code
When you specify -XX:TieredStopAtLevel=3, code is compiled only up to the "full C1 compiled code" level (level 3); C2 compilation is stopped.
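To verify which level your JVM is configured to stop at, you can read the flag at runtime through the HotSpot-specific HotSpotDiagnosticMXBean (in com.sun.management; available on HotSpot-based JDKs). A small sketch:

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

public class TieredLevel {
    public static void main(String[] args) {
        HotSpotDiagnosticMXBean bean =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        // Prints 4 by default; 3 when the JVM was started with -XX:TieredStopAtLevel=3.
        System.out.println("TieredStopAtLevel = "
                + bean.getVMOption("TieredStopAtLevel").getValue());
    }
}
```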
4. Print Compilation Details
You can pass the -XX:+PrintCompilation JVM argument to your application. It prints details about your application's compilation process, which helps you tune the compilation process further.
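For example, a small program with an obviously hot method, run as java -XX:+PrintCompilation HotLoop, will log compilation events for HotLoop.sum as it moves through the tiers (the class and method names here are illustrative, not from the original article):

```java
public class HotLoop {
    // A small method that becomes "hot" and gets JIT-compiled.
    static long sum(int n) {
        long s = 0;
        for (int i = 0; i < n; i++) s += i;
        return s;
    }

    public static void main(String[] args) {
        long total = 0;
        // Call the method many times so it crosses the compilation thresholds.
        for (int i = 0; i < 20_000; i++) total += sum(1_000);
        System.out.println(total);
    }
}
```

With -XX:+PrintCompilation enabled, each logged line shows a timestamp, a compile ID, the tier level, and the method being compiled.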
5. Increase Code Cache Size
The code that the HotSpot JIT compiler compiles/optimizes is stored in the code cache area of JVM memory. The default size of this code cache area is 240 MB. You can increase it by passing -XX:ReservedCodeCacheSize=N to your application. Say you want to make it 512 MB; you would specify -XX:ReservedCodeCacheSize=512m. Increasing the code cache size has the potential to reduce the CPU consumption of the compiler threads.
6. Increase C2 Compiler Threads
You can consider increasing the number of C2 compiler threads using the -XX:CICompilerCount argument. You can capture a thread dump and upload it to a tool like fastThread; there you can see the number of C2 compiler threads. If you see only a few C2 compiler threads and you have more CPU processors/cores available, you can increase the C2 compiler thread count by specifying a higher value in -XX:CICompilerCount.
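Alongside a thread dump, the standard CompilationMXBean can tell you how much total time the JVM has spent in JIT compilation, which is a useful baseline before and after tuning. A minimal sketch:

```java
import java.lang.management.CompilationMXBean;
import java.lang.management.ManagementFactory;

public class JitInfo {
    public static void main(String[] args) {
        CompilationMXBean jit = ManagementFactory.getCompilationMXBean();
        // e.g. "HotSpot 64-Bit Tiered Compilers" on a typical HotSpot JVM
        System.out.println("JIT compiler: " + jit.getName());
        if (jit.isCompilationTimeMonitoringSupported()) {
            System.out.println("Total compilation time (ms): "
                    + jit.getTotalCompilationTime());
        }
    }
}
```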
Published at DZone with permission of Ram Lakshmanan, DZone MVB. See the original article here.