
Docker and Java: Why My App Is OOMKilled

See how Java 9 has adapted so that you can run JVM-based workloads in Docker containers or on Kubernetes without worrying about hitting your memory limits.

By Sebastian Toader · Jan. 18, 18 · Tutorial


Those who have already run a Java application inside Docker have probably come across the problem of the JVM incorrectly detecting the available memory when running inside a container. The JVM sees the available memory of the machine instead of the memory available only to the Docker container. This can lead to cases where an application running inside a container is killed when it tries to use more memory than the Docker container's limit allows.

The JVM's incorrect detection of available memory stems from the fact that the Linux tools/libraries that return system resource information (e.g. /proc/meminfo, /proc/vmstat) were created before cgroups even existed. They return the resource information of the host (physical or virtual machine), not of the container.
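
To see the mismatch for yourself, you can compare what the JVM reports with the limit recorded in the container's cgroup. The snippet below is an illustrative sketch (not part of the original example) and assumes a cgroup v1 setup, where the memory limit is exposed at /sys/fs/cgroup/memory/memory.limit_in_bytes; the path differs under cgroup v2.

import java.nio.file.Files;
import java.nio.file.Paths;

public class MemoryCheck {

    public static void main(String[] args) throws Exception {
        // What the JVM believes it can use; by default this is derived from the host's memory
        long jvmMaxBytes = Runtime.getRuntime().maxMemory();
        System.out.println("JVM max memory:      " + jvmMaxBytes / (1024 * 1024) + "MB");

        // What the container is actually allowed to use (cgroup v1 path; assumption)
        byte[] raw = Files.readAllBytes(Paths.get("/sys/fs/cgroup/memory/memory.limit_in_bytes"));
        long cgroupLimitBytes = Long.parseLong(new String(raw).trim());
        System.out.println("cgroup memory limit: " + cgroupLimitBytes / (1024 * 1024) + "MB");
    }
}

Run inside a container started with a memory limit (e.g. docker run -m 256m ...), the JVM figure will typically be far larger than the cgroup limit.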

Let's see this through a simple Java application that, running inside a Docker container, allocates a certain percentage of the free memory it detects. We're going to deploy the application as a Kubernetes pod (using Minikube) to show that the same issue is present on Kubernetes as well, which is expected, since Kubernetes uses Docker as its container engine.

package com.banzaicloud;

import java.util.Vector;

public class MemoryConsumer {

    private static final float CAP = 0.8f;  // allocate 80% of the detected free memory
    private static final int ONE_MB = 1024 * 1024;

    // Hold references so the allocated blocks cannot be garbage collected
    private static final Vector<byte[]> cache = new Vector<>();

    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();

        long maxMemBytes = rt.maxMemory();
        long usedMemBytes = rt.totalMemory() - rt.freeMemory();
        long freeMemBytes = rt.maxMemory() - usedMemBytes;

        int allocBytes = Math.round(freeMemBytes * CAP);

        System.out.println("Initial free memory: " + freeMemBytes / ONE_MB + "MB");
        System.out.println("Max memory: " + maxMemBytes / ONE_MB + "MB");

        System.out.println("Reserve: " + allocBytes / ONE_MB + "MB");

        // Allocate 1MB blocks until the reserve target is reached
        for (int i = 0; i < allocBytes / ONE_MB; i++) {
            cache.add(new byte[ONE_MB]);
        }

        usedMemBytes = rt.totalMemory() - rt.freeMemory();
        freeMemBytes = rt.maxMemory() - usedMemBytes;

        System.out.println("Free memory: " + freeMemBytes / ONE_MB + "MB");
    }
}

We use a Dockerfile to create a Docker image that contains the jar built from the above Java code. We need the Docker image to deploy the application as a Kubernetes pod.

Dockerfile

FROM openjdk:8-alpine

ADD memory_consumer.jar /opt/local/jars/memory_consumer.jar

CMD java $JVM_OPTS -cp /opt/local/jars/memory_consumer.jar com.banzaicloud.MemoryConsumer

Build the image from the Dockerfile:

docker build -t memory_consumer .

Now that we have the Docker image, we need to create the pod definition to deploy the application to Kubernetes:

memory-consumer.yaml

apiVersion: v1
kind: Pod
metadata:
  name: memory-consumer
spec:
  containers:
  - name: memory-consumer-container
    image: memory_consumer
    imagePullPolicy: Never
    resources:
      requests:
        memory: "64Mi"
      limits:
        memory: "256Mi"
  restartPolicy: Never


This pod definition ensures that the container is scheduled to a node that has at least 64MB of free memory and will not be allowed to use more than 256MB of memory.

$ kubectl create -f memory-consumer.yaml
pod "memory-consumer" created


Output of the pod:

$ kubectl logs memory-consumer
Initial free memory: 877MB
Max memory: 878MB
Reserve: 702MB
Killed

$ kubectl get po --show-all
NAME              READY     STATUS      RESTARTS   AGE
memory-consumer   0/1       OOMKilled   0          1m


The Java application running inside the container detected 877MB of initial free memory and accordingly tried to reserve 702MB of it. The JVM derived its maximum heap size from the memory of the Minikube VM rather than from the container's limit. Since we limited the container's maximum memory usage to 256MB, the container was killed.

To avoid this, we need to tell the JVM the correct amount of memory it can operate with. We can do that via the -Xmx option. We need to modify our pod definition to pass the -Xmx setting to the Java application running in the container through the JVM_OPTS env variable.

memory-consumer.yaml

apiVersion: v1
kind: Pod
metadata:
  name: memory-consumer
spec:
  containers:
  - name: memory-consumer-container
    image: memory_consumer
    imagePullPolicy: Never
    resources:
      requests:
        memory: "64Mi"
      limits:
        memory: "256Mi"
    env:
    - name: JVM_OPTS
      value: "-Xms64M -Xmx256M"
  restartPolicy: Never

Let's delete the previous pod and redeploy it with the updated definition:

$ kubectl delete pod memory-consumer
pod "memory-consumer" deleted

$ kubectl get po --show-all
No resources found.

$ kubectl create -f memory-consumer.yaml
pod "memory-consumer" created

$ kubectl logs memory-consumer
Initial free memory: 227MB
Max memory: 228MB
Reserve: 181MB
Free memory: 50MB

$ kubectl get po --show-all
NAME              READY     STATUS      RESTARTS   AGE
memory-consumer   0/1       Completed   0          1m


This time, the application completed successfully. It detected the correct available memory because we passed in -Xmx256M, and thus it did not hit the memory limit (memory: "256Mi") specified in the pod definition.

While this solution works, it requires us to specify the memory limit in two places: once as a limit for the container (memory: "256Mi") and once in the option passed to the JVM (-Xmx256M). It would be nice if the JVM detected the correct maximum amount of memory available to it based solely on the memory: "256Mi" setting, wouldn't it?

Well, there was a change in Java 9 to make the JVM Docker-aware, and it has been backported to Java 8 (starting with 8u131) as well.

In order to make use of this feature, our pod definition will look like this:

memory-consumer.yaml

apiVersion: v1
kind: Pod
metadata:
  name: memory-consumer
spec:
  containers:
  - name: memory-consumer-container
    image: memory_consumer
    imagePullPolicy: Never
    resources:
      requests:
        memory: "64Mi"
      limits:
        memory: "256Mi"
    env:
    - name: JVM_OPTS
      value: "-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=1 -Xms64M"
  restartPolicy: Never

Again, we delete the previous pod and redeploy it with the updated definition:

$ kubectl delete pod memory-consumer
pod "memory-consumer" deleted

$ kubectl get pod --show-all
No resources found.

$ kubectl create -f memory-consumer.yaml
pod "memory-consumer" created

$ kubectl logs memory-consumer
Initial free memory: 227MB
Max memory: 228MB
Reserve: 181MB
Free memory: 54MB

$ kubectl get po --show-all
NAME              READY     STATUS      RESTARTS   AGE
memory-consumer   0/1       Completed   0          50s


Note the -XX:MaxRAMFraction=1 option, through which we tell the JVM how much of the available memory to use as the max heap size. With the container's 256MiB limit, a fraction of 1 lets the heap use essentially all of it, whereas the default value of 4 would cap the heap at roughly a quarter of the limit.

Having a correct max heap size, whether set explicitly through -Xmx or derived dynamically with UseCGroupMemoryLimitForHeap from the container's memory limit, lets the JVM notice when memory usage is getting close to that limit and free up space. If the max heap size is incorrect (above the available memory limit), the JVM may blindly run into the limit without trying to free up memory first, and the process will be OOMKilled.

A java.lang.OutOfMemoryError is different. It indicates that the max heap size is not enough to hold all live objects in memory. In that case, the max heap size needs to be increased via -Xmx, or the memory limit of the container needs to be increased if UseCGroupMemoryLimitForHeap is used.
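
To illustrate the difference, here is a small, self-contained sketch (not from the original article) that exhausts the heap on purpose: the failure surfaces inside the JVM as java.lang.OutOfMemoryError and the process keeps running, unlike an OOMKill, where the kernel terminates the process from the outside.

import java.util.ArrayList;
import java.util.List;

public class HeapExhaustion {

    public static void main(String[] args) {
        List<byte[]> blocks = new ArrayList<>();
        try {
            // Keep allocating 1MB blocks until the heap (bounded by -Xmx) runs out
            while (true) {
                blocks.add(new byte[1024 * 1024]);
            }
        } catch (OutOfMemoryError e) {
            int allocatedMb = blocks.size();
            blocks.clear(); // release the blocks so the logging below can allocate safely
            System.out.println("Heap exhausted after ~" + allocatedMb + "MB; the process is still alive");
        }
    }
}

Launched with a small heap (e.g. java -Xmx64M HeapExhaustion), it hits the error after a few dozen megabytes of allocations while the process itself, and the container around it, keep running.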

Cgroup awareness is very useful when running JVM-based workloads on K8s. We will follow up with an Apache Zeppelin notebook post highlighting the benefits of this JVM configuration through an example.


Published at DZone with permission of Sebastian Toader. See the original article here.

Opinions expressed by DZone contributors are their own.
