Docker and Java: Why My App Is OOMKilled
See how the JVM has adapted in Java 9 (with backports to Java 8) so that you can run JVM-based workloads in Docker containers or on Kubernetes without worrying about hitting your memory limits
Those who have already run a Java application inside Docker have probably come across the problem of the JVM incorrectly detecting the available memory when running inside a container. The JVM sees the available memory of the host machine instead of the memory available only to the Docker container. This can lead to cases where an application running inside a container is killed when it tries to use more memory than the Docker container's limit allows.
The JVM incorrectly detecting the available memory has to do with the fact that the Linux tools and libraries for returning system resource information (e.g. /proc/meminfo, /proc/vmstat) were created before cgroups even existed. They return the resource information of the host (physical or virtual machine), not the limits of the container's cgroup.
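You can observe this mismatch directly. On a host using cgroup v1 (the file paths differ under cgroup v2), the following compares what /proc/meminfo reports inside a memory-limited container with the memory cgroup's own limit file:

$ docker run -m 256m --rm alpine sh -c \
    'grep MemTotal /proc/meminfo; cat /sys/fs/cgroup/memory/memory.limit_in_bytes'

The first line shows the host's total memory, while the cgroup file shows the container's real budget: 268435456 bytes, i.e. 256MB.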
Let’s see this through a simple Java application that allocates a certain percentage of the available free memory while running inside a Docker container. We’re going to deploy the application as a Kubernetes pod (using Minikube) to show that the same issue is present on Kubernetes as well, which is expected, since Kubernetes uses Docker as its container engine.
package com.banzaicloud;

import java.util.Vector;

public class MemoryConsumer {

    private static final float CAP = 0.8f; // allocate 80% of the free memory
    private static final int ONE_MB = 1024 * 1024;
    private static final Vector<byte[]> cache = new Vector<>();

    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long maxMemBytes = rt.maxMemory();
        long usedMemBytes = rt.totalMemory() - rt.freeMemory();
        long freeMemBytes = maxMemBytes - usedMemBytes;
        long allocBytes = (long) (freeMemBytes * CAP); // long, to avoid int overflow on large heaps
        System.out.println("Initial free memory: " + freeMemBytes / ONE_MB + "MB");
        System.out.println("Max memory: " + maxMemBytes / ONE_MB + "MB");
        System.out.println("Reserve: " + allocBytes / ONE_MB + "MB");
        // Keep the 1MB blocks reachable so they cannot be garbage collected
        for (int i = 0; i < allocBytes / ONE_MB; i++) {
            cache.add(new byte[ONE_MB]);
        }
        usedMemBytes = rt.totalMemory() - rt.freeMemory();
        freeMemBytes = maxMemBytes - usedMemBytes;
        System.out.println("Free memory: " + freeMemBytes / ONE_MB + "MB");
    }
}
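The article assumes a prebuilt memory_consumer.jar; one minimal way to produce it without a build tool (assuming the source file sits at com/banzaicloud/MemoryConsumer.java) is:

$ javac com/banzaicloud/MemoryConsumer.java
$ jar cf memory_consumer.jar com/banzaicloud/MemoryConsumer.class

No manifest entry point is needed, because the Dockerfile below starts the class explicitly with -cp.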
We use a Dockerfile to build a Docker image that contains the jar built from the Java code above. We need this image to deploy the application as a Kubernetes pod.
Dockerfile
FROM openjdk:8-alpine
# Copy the application jar into the image
ADD memory_consumer.jar /opt/local/jars/memory_consumer.jar
# Shell-form CMD, so that $JVM_OPTS is expanded when the container starts
CMD java $JVM_OPTS -cp /opt/local/jars/memory_consumer.jar com.banzaicloud.MemoryConsumer
$ docker build -t memory_consumer .
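If you are following along on Minikube, note that the pod definition below uses imagePullPolicy: Never, so the image has to exist in the Docker daemon that Minikube uses. Point your shell at that daemon before building:

$ eval $(minikube docker-env)
$ docker build -t memory_consumer .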
Now that we have the Docker image, we need to create a pod definition to deploy the application to Kubernetes:
memory-consumer.yaml
apiVersion: v1
kind: Pod
metadata:
  name: memory-consumer
spec:
  containers:
  - name: memory-consumer-container
    image: memory_consumer
    imagePullPolicy: Never
    resources:
      requests:
        memory: "64Mi"
      limits:
        memory: "256Mi"
  restartPolicy: Never
This pod definition ensures that the container is scheduled to a node that has at least 64MB of free memory and will not be allowed to use more than 256MB of memory.
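As an aside, the same limit can be reproduced in plain Docker, without Kubernetes, via the -m flag; a quick sketch for local experiments:

$ docker run -m 256m --rm memory_consumer

The container would be killed in the same way once the JVM overshoots 256MB.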
$ kubectl create -f memory-consumer.yaml
pod "memory-consumer" created
Output of the pod:
$ kubectl logs memory-consumer
Initial free memory: 877MB
Max memory: 878MB
Reserve: 702MB
Killed
$ kubectl get po --show-all
NAME READY STATUS RESTARTS AGE
memory-consumer 0/1 OOMKilled 0 1m
The Java application running inside the container detected 877MB of initial free memory and thus tried to reserve 702MB of it. Since we limited the container's maximum memory usage to 256MB, the container was killed.
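If you want to confirm that the kernel's OOM killer terminated the container rather than the JVM exiting on its own, the container status carries the reason; the following jsonpath query is one way to read it, assuming the pod object is still around:

$ kubectl get pod memory-consumer -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'

This prints OOMKilled, and the corresponding container exit code is 137 (128 plus SIGKILL).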
To avoid this, we need to tell the JVM the correct amount of memory it can operate with. We can do that via the -Xmx option, modifying our pod definition to pass the -Xmx setting to the Java application in the container through the JVM_OPTS environment variable.
memory-consumer.yaml
apiVersion: v1
kind: Pod
metadata:
  name: memory-consumer
spec:
  containers:
  - name: memory-consumer-container
    image: memory_consumer
    imagePullPolicy: Never
    resources:
      requests:
        memory: "64Mi"
      limits:
        memory: "256Mi"
    env:
    - name: JVM_OPTS
      value: "-Xms64M -Xmx256M"
  restartPolicy: Never
$ kubectl delete pod memory-consumer
pod "memory-consumer" deleted
$ kubectl get po --show-all
No resources found.
$ kubectl create -f memory-consumer.yaml
pod "memory-consumer" created
$ kubectl logs memory-consumer
Initial free memory: 227MB
Max memory: 228MB
Reserve: 181MB
Free memory: 50MB
$ kubectl get po --show-all
NAME READY STATUS RESTARTS AGE
memory-consumer 0/1 Completed 0 1m
This time, the application completed successfully. It detected the correct amount of available memory, since we passed in -Xmx256M, and therefore did not hit the memory: "256Mi" limit specified in the pod definition. (The reported max of 228MB is slightly below 256MB because Runtime.maxMemory() does not count one of the heap's survivor spaces.)
While this solution works, it requires us to specify the memory limit in two places: once as the container limit, memory: "256Mi", and once in the option passed to the JVM, -Xmx256M. It would be nice if the JVM detected the correct maximum amount of available memory based on the memory: "256Mi" setting alone, wouldn’t it?
Well, there was a change in Java 9 to make the JVM Docker-aware, and it has been backported to Java 8 as well (8u131 and later).
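You can verify the effect of these flags locally before touching the pod definition. Assuming the JDK in openjdk:8-alpine is 8u131 or later, printing the final VM flags inside a memory-limited container should show a MaxHeapSize derived from the 256MB cgroup limit rather than from the host's memory:

$ docker run -m 256m --rm openjdk:8-alpine java \
    -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap \
    -XX:MaxRAMFraction=1 -XX:+PrintFlagsFinal -version | grep MaxHeapSize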
To make use of this feature in the cluster, our pod definition looks like this:
memory-consumer.yaml
apiVersion: v1
kind: Pod
metadata:
  name: memory-consumer
spec:
  containers:
  - name: memory-consumer-container
    image: memory_consumer
    imagePullPolicy: Never
    resources:
      requests:
        memory: "64Mi"
      limits:
        memory: "256Mi"
    env:
    - name: JVM_OPTS
      value: "-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=1 -Xms64M"
  restartPolicy: Never
$ kubectl delete pod memory-consumer
pod "memory-consumer" deleted
$ kubectl get pod --show-all
No resources found.
$ kubectl create -f memory-consumer.yaml
pod "memory-consumer" created
$ kubectl logs memory-consumer
Initial free memory: 227MB
Max memory: 228MB
Reserve: 181MB
Free memory: 54MB
$ kubectl get po --show-all
NAME READY STATUS RESTARTS AGE
memory-consumer 0/1 Completed 0 50s
Note the -XX:MaxRAMFraction=1 option, through which we tell the JVM how much of the available memory to use as the max heap size: a value of 1 lets the heap grow to the full detected limit (256MB here), while the JDK 8 default of 4 would allow only a quarter of it. Whether the max heap size is set explicitly through -Xmx or derived dynamically with UseCGroupMemoryLimitForHeap, which takes the cgroup memory limit into account, a correct value helps the JVM notice that memory usage is getting close to that limit so it can free up space. If the max heap size is incorrect (above the available memory limit), the JVM may blindly run into the limit without trying to free up memory first, and the process will be OOMKilled.
A java.lang.OutOfMemoryError is different: it indicates that the max heap size is not enough to hold all the live objects in memory. In that case, the max heap size needs to be increased via -Xmx, or the memory limit of the container needs to be raised if UseCGroupMemoryLimitForHeap is used.
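To make the distinction concrete, here is a minimal sketch (a hypothetical class, not part of the example above) that exhausts the heap. With a max heap size at or below the container limit, it fails with a catchable java.lang.OutOfMemoryError instead of being OOMKilled by the kernel:

package com.banzaicloud;

import java.util.ArrayList;
import java.util.List;

public class HeapExhaustion {
    public static void main(String[] args) {
        List<byte[]> hog = new ArrayList<>();
        try {
            while (true) {
                hog.add(new byte[1024 * 1024]); // keep 1MB blocks reachable
            }
        } catch (OutOfMemoryError e) {
            // With -Xmx below the container limit, the JVM fails here
            // instead of the container being killed with exit code 137.
            System.err.println("Heap exhausted: " + e);
        }
    }
}

Run it with, say, java -Xmx128M com.banzaicloud.HeapExhaustion inside the 256MB container, and the error message appears instead of an abrupt kill.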
Making the JVM aware of cgroup limits is very useful when running JVM-based workloads on Kubernetes. We will follow up with an Apache Zeppelin notebook post highlighting the benefits of this JVM configuration through an example.
Published at DZone with permission of Sebastian Toader.