zhiwei peng

Reputation: 81

Why is Internal memory in Java Native Memory Tracking increasing?

My application runs in a Docker container; it uses Scala and "OpenJDK 64-Bit Server VM (build 25.102-b14, mixed mode)". Its Xmx is set to 16GB and the container memory limit is 24GB. After running for some time the container is killed:

Last State:         Terminated
  Reason:           OOMKilled
  Exit Code:        137

However, I can't find any "java.lang.OutOfMemoryError: Java heap space" errors in the logs, not even once in the last 2 weeks across all 48 nodes. So it's not likely a normal heap OOM.

dmesg output:

$ dmesg -l err,crit,alert,emerg
STDIN is not a terminal
[1647254.978515] Memory cgroup out of memory: Kill process 10924 (java) score 1652 or sacrifice child
[1647254.989138] Killed process 10924 (java) total-vm:34187148kB, anon-rss:24853120kB, file-rss:23904kB
[1655749.664871] Memory cgroup out of memory: Kill process 1969 (java) score 1652 or sacrifice child
[1655749.675513] Killed process 1969 (java) total-vm:35201940kB, anon-rss:24856624kB, file-rss:24120kB
[1655749.987605] Memory cgroup out of memory: Kill process 2799 (java) score 1656 or sacrifice child
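For reference, the anon-rss in these kill messages is just under the 24GB cgroup limit; that limit and the current usage can be double-checked from inside the container (the cgroup v1 paths below are an assumption based on typical Docker setups of that era):

$ cat /sys/fs/cgroup/memory/memory.limit_in_bytes
$ cat /sys/fs/cgroup/memory/memory.usage_in_bytes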

I then ran jcmd multiple times before it was killed again, and the data looks like the following:

Native Memory Tracking:

Total: reserved=25505339KB, committed=25140947KB
-                 Java Heap (reserved=16777216KB, committed=16777216KB)
                            (mmap: reserved=16777216KB, committed=16777216KB)

One thing I noticed is this section:

Internal (reserved=6366260KB, committed=6366256KB)

It keeps growing, causing the total memory usage to exceed the 24GB limit.
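The per-category growth can be tracked with NMT's baseline/diff feature (this assumes the JVM was started with -XX:NativeMemoryTracking=summary, which is also needed for the jcmd output above; <pid> is a placeholder):

$ jcmd <pid> VM.native_memory baseline
$ jcmd <pid> VM.native_memory summary.diff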

Has anyone seen a similar issue before? Does anyone know what Internal memory is here, and what could be the reason it keeps growing without releasing memory?

Upvotes: 8

Views: 3353

Answers (4)

刘思凡

Reputation: 433

Recently our application ran into the same problem. In our case we use Netty, and Netty allocates direct memory; when many IO connections exist, the Internal memory in Java Native Memory Tracking keeps increasing.
We eventually used two parameters to limit the native memory:

-Dio.netty.maxDirectMemory=1073741824
-XX:MaxDirectMemorySize=1024m
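For illustration, a minimal sketch of how these could be passed on the JVM command line (the jar name and heap size are placeholders, not taken from the question):

java -Xmx16g -XX:MaxDirectMemorySize=1024m -Dio.netty.maxDirectMemory=1073741824 -jar app.jar

-XX:MaxDirectMemorySize caps java.nio direct buffers, while -Dio.netty.maxDirectMemory, as I understand it, limits the direct memory that Netty accounts for itself.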

Upvotes: 4

Erik Finnman

Reputation: 1677

I think you may have run into the issue that I answered here: Java Heap Dump : How to find the objects/class that is taking memory by 1. io.netty.buffer.ByteBufUtil 2. byte[] array

If you're running on a node with a large number of cores, you may have to set the environment variable MALLOC_ARENA_MAX to control how native memory is allocated.
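A sketch of how that might be set, either in the shell that launches the JVM or in the Dockerfile; the value 2 is just a commonly used starting point for memory-constrained containers, not something taken from this setup:

export MALLOC_ARENA_MAX=2   # shell, before starting the JVM
ENV MALLOC_ARENA_MAX=2      # Dockerfile equivalent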

Upvotes: 0

Cliff Tian

Reputation: 1

Do you have -XX:+DisableExplicitGC configured?

If yes, please remove that.

If "-XX:+DisableExplicitGC" is not configured, what does the situation look like after triggering a full GC via JConsole?
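If JConsole is hard to attach inside the container, a full GC can also be requested with jcmd; as far as I know this behaves like System.gc(), so it has no effect while -XX:+DisableExplicitGC is set:

$ jcmd <pid> GC.run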

Upvotes: 0

Jiří

Reputation: 11

This is not an answer to your question, just a workaround.

I have observed the same problem in Docker containers running JRuby on Java version "1.8.0_45". The solution was to explicitly invoke garbage collection. I have absolutely no idea why this works, but after the GC the Internal Java memory returned to 8MB.
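A crude sketch of automating that workaround from inside the container; the one-hour interval and the pid lookup via pgrep are assumptions, not part of my original setup:

while true; do
    jcmd "$(pgrep -o java)" GC.run   # request a full GC on the oldest java process
    sleep 3600
done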

Upvotes: 1
