Why My Java Application Is OOMKilled


Is your Java application detecting the wrong amount of available memory inside a container? Read this tutorial to get its memory usage under control and back on the right track.


At Banzai Cloud we run and deploy containerized applications to our PaaS, Pipeline. Like us, those who have already run Java applications inside Docker have probably come across the problem of the JVM incorrectly detecting the available memory when it runs inside a container: the JVM sees the available memory of the host machine instead of the memory available only to the Docker container. This can lead to cases where an application running inside the container is killed when it tries to use more memory than the Docker container’s limit.

The JVM incorrectly detects the available memory because the Linux tools and libraries that return system resource information (e.g. /proc/meminfo, /proc/vmstat) were created before cgroups even existed; they return the resource information of the host (physical or virtual machine).

Let’s demonstrate this with a simple Java application that allocates a certain percentage of the free memory it detects while running inside a Docker container. We’re going to deploy the application as a Kubernetes pod (using Minikube) to show that the same issue is present on Kubernetes as well, which is expected, since Kubernetes uses Docker as its container engine.
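
The application is a small class, com.banzaicloud.MemoryConsumer, roughly along these lines (a sketch: the ~80% reservation ratio and the 1MB block size are assumptions inferred from the output shown later in this post):

MemoryConsumer.java

package com.banzaicloud;

import java.util.ArrayList;
import java.util.List;

public class MemoryConsumer {

    private static final long MB = 1024 * 1024;

    public static void main(String[] args) {
        Runtime runtime = Runtime.getRuntime();

        // What the JVM believes is still available for allocation.
        long freeMemory = runtime.freeMemory() + (runtime.maxMemory() - runtime.totalMemory());
        System.out.println("Initial free memory: " + freeMemory / MB + "MB");
        System.out.println("Max memory: " + runtime.maxMemory() / MB + "MB");

        // Try to reserve ~80% of the detected free memory.
        long reserve = (long) (freeMemory * 0.8);
        System.out.println("Reserve: " + reserve / MB + "MB");

        // Allocate in 1MB blocks and keep references so nothing is garbage collected.
        List<byte[]> blocks = new ArrayList<>();
        for (long allocated = 0; allocated < reserve; allocated += MB) {
            blocks.add(new byte[(int) MB]);
        }

        long remaining = runtime.freeMemory() + (runtime.maxMemory() - runtime.totalMemory());
        System.out.println("Free memory: " + remaining / MB + "MB");
    }
}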

We use a Dockerfile to create a Docker image that contains the JAR built from the above Java code. We need the Docker image to deploy the application as a Kubernetes pod.

Dockerfile

FROM openjdk:8-alpine

ADD memory_consumer.jar /opt/local/jars/memory_consumer.jar

CMD java $JVM_OPTS -cp /opt/local/jars/memory_consumer.jar com.banzaicloud.MemoryConsumer

Build the image:

docker build -t memory_consumer .

Now that we have the Docker image, we need to create a pod definition for the application in order to deploy it to Kubernetes:

memory-consumer.yaml

apiVersion: v1
kind: Pod
metadata:
  name: memory-consumer
spec:
  containers:
  - name: memory-consumer-container
    image: memory_consumer
    imagePullPolicy: Never
    resources:
      requests:
        memory: "64Mi"
      limits:
        memory: "256Mi"
  restartPolicy: Never

This pod definition ensures that the container is scheduled to a node that has at least 64MB of free memory, and that it will not be allowed to use more than 256MB of memory.

$ kubectl create -f memory-consumer.yaml
pod "memory-consumer" created

Output of the pod:

$ kubectl logs memory-consumer
Initial free memory: 877MB
Max memory: 878MB
Reserve: 702MB
Killed

$ kubectl get po --show-all
NAME              READY     STATUS      RESTARTS   AGE
memory-consumer   0/1       OOMKilled   0          1m

The Java application running inside the container detected 877MB of initial free memory and thus tried to reserve 702MB of it. Since we limited the container’s maximum memory usage to 256MB, the container was killed.

To avoid this, we need to tell the JVM the correct amount of memory it can operate with. We can do that via the -Xmx option, so we modify our pod definition to pass an -Xmx setting to the Java application running in the container through the JVM_OPTS environment variable.

memory-consumer.yaml

apiVersion: v1
kind: Pod
metadata:
  name: memory-consumer
spec:
  containers:
  - name: memory-consumer-container
    image: memory_consumer
    imagePullPolicy: Never
    resources:
      requests:
        memory: "64Mi"
      limits:
        memory: "256Mi"
    env:
    - name: JVM_OPTS
      value: "-Xms64M -Xmx256M"
  restartPolicy: Never

$ kubectl delete pod memory-consumer
pod "memory-consumer" deleted

$ kubectl get po --show-all
No resources found.

$ kubectl create -f memory-consumer.yaml
pod "memory-consumer" created

$ kubectl logs memory-consumer
Initial free memory: 227MB
Max memory: 228MB
Reserve: 181MB
Free memory: 50MB

$ kubectl get po --show-all
NAME              READY     STATUS      RESTARTS   AGE
memory-consumer   0/1       Completed   0          1m

This time the application completed successfully; it also detected the correct amount of available memory because we passed in -Xmx256M, so the application did not hit the memory: "256Mi" limit specified in the pod definition.

While this solution works, it requires the memory limit to be specified in two places: once as a limit for the container, memory: "256Mi", and once in the option passed to the JVM, -Xmx256M. It would be nice if the JVM detected the correct maximum amount of memory available to it based on the memory: "256Mi" setting alone, wouldn’t it?

Well, there is a change in Java 9 that makes the JVM Docker-aware, and it has been backported to Java 8 as well.
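
A quick way to verify whether a given JVM picks up the cgroup limit is to print the max heap it detects and compare it with the container’s memory limit. HeapCheck below is a hypothetical helper for that purpose, not part of the memory consumer application:

HeapCheck.java

public class HeapCheck {
    public static void main(String[] args) {
        // Run with -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=1
        // inside a memory-limited container: the reported value should roughly match the container's limit.
        long maxMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        System.out.println("JVM max heap: " + maxMb + "MB");
    }
}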

In order to make use of this feature, our pod definition will look like this:

memory-consumer.yaml

apiVersion: v1
kind: Pod
metadata:
  name: memory-consumer
spec:
  containers:
  - name: memory-consumer-container
    image: memory_consumer
    imagePullPolicy: Never
    resources:
      requests:
        memory: "64Mi"
      limits:
        memory: "256Mi"
    env:
    - name: JVM_OPTS
      value: "-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=1 -Xms64M"
  restartPolicy: Never

$ kubectl delete pod memory-consumer
pod "memory-consumer" deleted

$ kubectl get pod --show-all
No resources found.

$ kubectl create -f memory-consumer.yaml
pod "memory-consumer" created

$ kubectl logs memory-consumer
Initial free memory: 227MB
Max memory: 228MB
Reserve: 181MB
Free memory: 54MB

$ kubectl get po --show-all
NAME              READY     STATUS      RESTARTS   AGE
memory-consumer   0/1       Completed   0          50s

Note the -XX:MaxRAMFraction=1 option, through which we tell the JVM how much of the available memory to use as the max heap size (1/N of the detected memory; a value of 1 means all of it).

Having a max heap size that takes the available memory limit into account, set either explicitly through -Xmx or dynamically with UseCGroupMemoryLimitForHeap, is important because it helps the JVM notice when memory usage is getting close to that limit so it can free up space. If the max heap size is wrong (above the available memory limit), the JVM may blindly run into the limit without trying to free up memory first, and the process will be OOMKilled.

A java.lang.OutOfMemoryError is different: it indicates that the max heap size is not enough to hold all live objects in memory. In that case, the max heap size needs to be increased via -Xmx, or via the memory limit of the container if UseCGroupMemoryLimitForHeap is used.
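
For illustration, a program that keeps roughly 100MB of objects alive while running with -Xmx64M fails inside the JVM with java.lang.OutOfMemoryError: Java heap space rather than being OOMKilled at the container boundary (HeapOverflow is a hypothetical example, not part of the application above):

HeapOverflow.java

import java.util.ArrayList;
import java.util.List;

public class HeapOverflow {
    public static void main(String[] args) {
        // Run with: java -Xmx64M HeapOverflow
        List<byte[]> blocks = new ArrayList<>();
        for (int i = 0; i < 100; i++) {
            // Throws java.lang.OutOfMemoryError: Java heap space once the 64MB
            // heap can no longer hold the live byte arrays.
            blocks.add(new byte[1024 * 1024]);
        }
        System.out.println("Allocated " + blocks.size() + "MB");
    }
}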

Using cgroup limits this way is very useful when running JVM-based workloads on Kubernetes. We will follow up with an Apache Zeppelin notebook post highlighting the benefits of this JVM configuration through an example.



