Ralph

Reputation: 4868

Why is OpenJDK Docker Container ignoring Memory Limits in Kubernetes?

I am running several Java applications with the Docker image jboss/wildfly:20.0.1.Final on Kubernetes 1.19.3. The Wildfly server runs on OpenJDK 11, so the JVM supports container memory limits (cgroups).
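A quick way to confirm that the JVM really picks up the cgroup limit (a sketch, assuming java is on the image's PATH) is to print the resolved JVM flags under a Docker memory limit:

$ docker run --rm -m=300M jboss/wildfly:20.0.1.Final java -XX:+PrintFlagsFinal -version | grep -Ei 'maxheapsize|maxrampercentage'

With a 300M limit the reported MaxHeapSize should scale with the limit (roughly a quarter of it under the default MaxRAMPercentage), which shows the JVM reads the container limit rather than the host's total memory.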

If I set a memory limit, it is completely ignored by the container when running in Kubernetes, but it is respected on the same machine when I run it in plain Docker:

1. Run Wildfly in Docker with a memory limit of 300M:

$ docker run -it --rm --name java-wildfly-test -p 8080:8080 -e JAVA_OPTS='-XX:MaxRAMPercentage=75.0' -m=300M jboss/wildfly:20.0.1.Final

Verify memory usage:

$ docker stats
CONTAINER ID        NAME                 CPU %        MEM USAGE / LIMIT     MEM %       NET I/O       BLOCK I/O     PIDS
515e549bc01f        java-wildfly-test    0.14%        219MiB / 300MiB       73.00%      906B / 0B     0B / 0B       43

As expected, the container does NOT exceed the memory limit of 300M.

2. Run Wildfly in Kubernetes with a memory limit of 300M:

Now I start the same container within Kubernetes.

$ kubectl run java-wildfly-test --image=jboss/wildfly:20.0.1.Final --limits='memory=300M' --env="JAVA_OPTS='-XX:MaxRAMPercentage=75.0'" 
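Note that newer kubectl releases deprecate the --limits flag on kubectl run; the same limit can be expressed in a Pod manifest, sketched here with the names from the command above:

apiVersion: v1
kind: Pod
metadata:
  name: java-wildfly-test
spec:
  containers:
  - name: java-wildfly-test
    image: jboss/wildfly:20.0.1.Final
    ports:
    - containerPort: 8080
    env:
    - name: JAVA_OPTS
      value: "-XX:MaxRAMPercentage=75.0"
    resources:
      limits:
        memory: "300M"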

Verify memory usage:

$ kubectl top pod java-wildfly-test
NAME                CPU(cores)   MEMORY(bytes)   
java-wildfly-test   1089m        441Mi 

The memory limit of 300M is totally ignored and exceeded immediately.

Why does this happen? Both tests can be performed on the same machine.
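One way to cross-check whether the limit is really being ignored (a sketch, assuming the node uses cgroup v1, as Kubernetes 1.19 with Docker typically does) is to compare the kernel's own accounting inside the pod with the kubectl top figure:

$ kubectl exec java-wildfly-test -- cat /sys/fs/cgroup/memory/memory.limit_in_bytes
$ kubectl exec java-wildfly-test -- cat /sys/fs/cgroup/memory/memory.usage_in_bytes

If the limit file reports roughly 300M and the usage stays below it while kubectl top shows more, the discrepancy lies in the metrics pipeline rather than in the kernel enforcing the limit.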

Answer

The reason for the high values was incorrect metric data reported by the kube-prometheus project. After uninstalling kube-prometheus and installing metrics-server instead, all data is displayed correctly by kubectl top and now matches the values shown by docker stats. I do not know why kube-prometheus computed wrong data; in fact, it reported double the actual values for all memory metrics.

Upvotes: 4

Views: 1020

Answers (1)

acid_fuji

Reputation: 6853

I'm placing this answer as community wiki since it might be helpful for the community. kubectl top was displaying incorrect data. The OP solved the problem by uninstalling the kube-prometheus stack and installing metrics-server instead.

The reason for the high values was incorrect metric data reported by the kube-prometheus project. After uninstalling kube-prometheus and installing metrics-server instead, all data is displayed correctly by kubectl top and now matches the values shown by docker stats. I do not know why kube-prometheus computed wrong data; in fact, it reported double the actual values for all memory metrics.
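For completeness, a minimal sketch of the metrics-server installation described above and the re-check (the manifest URL is the project's published release bundle; verify it against your cluster version):

$ kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
$ kubectl top pod java-wildfly-test

After the switch, kubectl top should report values in line with docker stats on the same node.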

Upvotes: 2
