Refcard #200

Java Performance Optimization

Patterns and Anti-Patterns

Getting Java apps to run is one thing. But getting them to run fast is another. Performance is a tricky beast in any object-oriented environment, but the complexity of the JVM adds a whole new level of performance-tweaking trickiness — and opportunity. This Refcard covers garbage collection monitoring and tuning, memory leaks, object caching, concurrency, and more.


Written By

Anant Mishra
Software Engineer, Salesforce
Section 1

Introduction

The purpose of this Refcard is to help developers avoid performance issues when creating new applications and to improve the performance of their existing ones. We will cover anti-patterns in Java application development, along with the corresponding patterns to follow for optimized performance. Lastly, we provide a list of tools for monitoring and troubleshooting application performance so that you can explore them and, when you’re ready, begin applying the concepts covered in this Refcard to your own applications.

Please note that application performance optimization is an advanced-level topic for the given language — in this case, Java — so intermediate knowledge of Java and its internal architecture is a prerequisite for this Refcard.

This is a preview of the Java Performance Optimization Refcard. To read the entire Refcard, please download the PDF from the link above.

Section 2

Benefits of Performance Optimization

Poor performance of applications can cause delayed responses to the end user and unoptimized use of servers and other resources. Ultimately, these issues can impact user experience and the cost of running an application.

Optimizing performance of your Java applications will improve:

  • Latency, enhancing user experience.
  • Application adoption, multiplying overall ROI.
  • Code quality, resulting in reduced downtime.
  • Server efficiency, reducing total infrastructure costs.
  • SEO, increasing search rankings.

The following sections in this Refcard cover the main anti-patterns and coding mistakes in Java applications that can result in the degradation of application performance.

Section 3

Garbage Collection Performance Tuning

The Java garbage collection process is one of the most important factors in achieving optimal application performance. To make garbage collection efficient, the heap is divided into two areas:

  • Young generation (nursery space)
  • Old generation (tenured space)

Choosing the right garbage collector for your application is a determining factor in optimal application performance, scalability, and reliability. The GC algorithms are described below:

Serial Collector
  • Has the smallest footprint of any collector
  • Requires only a small set of data structures
  • Uses a single thread for both minor and major collections
Parallel Collector
  • Stops all application threads and executes garbage collection using multiple threads
  • Best suited for applications that run on multicore systems and need high throughput
Concurrent Mark-Sweep (CMS) Collector
  • Offers lower throughput, but shorter pauses, than the parallel collector
  • Best suited for general Java applications
Garbage-First (G1) Collector
  • Is an improvement on the CMS collector
  • Divides the entire heap into multiple equally sized regions

GC Performance Considerations

When evaluating application performance, most of the time you won’t have to worry about configuring garbage collection. However, if your application’s performance degrades significantly during garbage collection, you have a lot of control: generation sizes, throughput, pause times, footprint, and promptness can all be tuned.

For more information on performance metric types, please refer to Oracle’s Java 8 documentation on GC tuning.
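As an illustrative sketch (the flag names are standard HotSpot options, but the values and the jar name `app.jar` are arbitrary examples, not recommendations), a launch command that selects a collector and sizes the heap might look like:

```shell
# Example HotSpot flags (values are illustrative only):
#   -XX:+UseG1GC             select the G1 collector
#   -XX:MaxGCPauseMillis     soft pause-time goal for G1, in milliseconds
#   -Xms / -Xmx              fix the total heap size so the JVM does not resize it under load
#   -Xlog:gc*                log GC activity (Java 9+; use -verbose:gc on Java 8)
java -XX:+UseG1GC -XX:MaxGCPauseMillis=200 \
     -Xms2g -Xmx2g \
     -Xlog:gc* \
     -jar app.jar
```

Measure GC behavior with the logs before and after changing any of these values; tuning blindly can make pauses worse.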


Section 4

Memory Leak Issues

A memory leak occurs when there are objects present in the heap that are no longer used, but the garbage collector cannot identify that they are unused. Thus, they are unnecessarily maintained. Memory leaks can exhaust memory resources and degrade system performance over time. It is possible for the application to terminate with a fatal java.lang.OutOfMemoryError.

Overuse of Static Fields

The lifetime of a static field is the same as the lifetime of the class in which that field is present. So the garbage collector does not collect static fields unless the classloader of that class itself becomes eligible for garbage collection.

If a collection of objects (e.g., a List) is marked static — and objects are added to that List throughout the execution of the application — the application will retain all of those objects even when they are no longer in use. Such scenarios can lead to a java.lang.OutOfMemoryError.
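A minimal sketch of this anti-pattern and a scoped alternative (the class and method names are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

public class StaticLeakDemo {
    // Anti-pattern: a static List lives as long as the class itself,
    // so every element added here is retained for the life of the app.
    static final List<double[]> LEAKY = new ArrayList<>();

    static void process(double[] batch) {
        LEAKY.add(batch); // never removed -> grows until OutOfMemoryError
    }

    // Pattern: keep the collection scoped to the work that needs it,
    // so it becomes unreachable (and collectible) when the method returns.
    static double sum(double[] batch) {
        List<Double> local = new ArrayList<>();
        for (double d : batch) local.add(d);
        double total = 0;
        for (double d : local) total += d;
        return total; // 'local' is eligible for GC after this return
    }

    public static void main(String[] args) {
        System.out.println(sum(new double[]{1, 2, 3})); // prints 6.0
    }
}
```

If a static collection is genuinely needed, remove entries when they are no longer used, or hold values through weak references so the collector can reclaim them.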


Section 5

Java Concurrency

Java concurrency can be defined as the ability to execute several tasks of a program in parallel. For large Java EE systems, this means the capability to execute multiple user business functions concurrently while achieving optimal throughput and performance. Regardless of your hardware capacity or the health of your JVM, Java concurrency problems can bring any application to its knees and severely affect the overall application performance and availability.

Thread Lock Contention

Thread lock contention is by far the most common Java concurrency problem you will observe when assessing the health of your application’s concurrent threads. It manifests as 1...n BLOCKED threads (a thread waiting chain) waiting to acquire a lock on a particular object monitor. Depending on its severity, lock contention can severely affect application response time and service availability.
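One common mitigation is to replace a single shared monitor with a contention-friendly structure from java.util.concurrent. The sketch below (class and method names are hypothetical) contrasts a coarse-grained lock, on which every thread would block, with java.util.concurrent.atomic.LongAdder, which spreads updates across internal cells so writers rarely block each other:

```java
import java.util.concurrent.atomic.LongAdder;

public class ContentionDemo {
    // Anti-pattern: every thread serializes on one object monitor;
    // under load, most threads sit in the BLOCKED state waiting for LOCK.
    static final Object LOCK = new Object();
    static long coarseCount = 0;

    static void coarseIncrement() {
        synchronized (LOCK) { coarseCount++; }
    }

    // Pattern: LongAdder distributes updates across cells,
    // avoiding a single point of contention for write-heavy counters.
    static final LongAdder adder = new LongAdder();

    static void fineIncrement() { adder.increment(); }

    public static void main(String[] args) throws InterruptedException {
        Thread[] workers = new Thread[4];
        for (int i = 0; i < workers.length; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < 10_000; j++) fineIncrement();
            });
            workers[i].start();
        }
        for (Thread t : workers) t.join();
        System.out.println(adder.sum()); // prints 40000
    }
}
```

When a monitor is unavoidable, keep the synchronized block as small as possible so threads hold the lock only while touching shared state.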


Section 6

Coding Practices

A few basic rules should be followed while writing Java code in order to avoid performance issues.

In-Process Caching

In Java applications, the caching of objects is done to avoid multiple database or network calls. These objects can be cached to the in-process memory (an in-process cache) of the application or to the external memory (a distributed cache). You can learn more about this in the article “In-Process Caching vs. Distributed Caching” on DZone.

An in-process cache uses the JVM’s own memory to store objects, which is why storing large objects, or too many objects, in such a cache can lead to high memory consumption and an OutOfMemoryError.

Below are a few ways to avoid this:

  • Do not store objects whose count can't be controlled at the application’s runtime — for example, the number of active user sessions. In this scenario, an application may crash due to an OutOfMemoryError if the number of active sessions increases.
  • Avoid storing heavy objects, as these objects will consume a lot of JVM memory.
  • Store only those objects that need to be accessed frequently. Storing them in a distributed cache can create significant performance degradation due to multiple network calls.
  • Configure the cache eviction policy carefully. Unnecessary objects should not be in the cache and should be evicted.

