Tiina

Reputation: 4797

Java heap memory management: insufficient memory

When a Netty async server-and-client project runs on Linux, it exhausts all available memory, like this: [linux console screenshot]

So I ran it on Windows, and JMC shows the heap like this:

[JMC memory screenshot]

My questions are: why do Windows and Linux behave differently? Is there some way to configure the Linux JVM so that it releases heap memory? Why is there a heap release (GC) on Windows? And how can I find the suspicious piece of code that takes up so much memory?

EDIT: Linux has 4 GB of RAM, Windows has 8 GB, but I don't think the absolute values explain the difference in behaviour. The project does not handle raw ByteBufs directly; it uses HttpServerCodec and HttpObjectAggregator for that. The command to run it on Linux is java -jar xx.jar. I would like to know not only why the two systems differ and why the sawtooth appears, but also how to locate whatever is taking up so much memory. JMC shows another figure, and I don't understand why a thread can have such a high block count; the Netty IO threads show a 99th-percentile (99LINE) latency of 71 ms. [JMC threads screenshot]

UPDATED: Now I would like to locate which part of the code takes up so much memory. The JMC heap view shows that EDEN SPACE usage is very high; I searched and found that Eden space is where new objects are allocated. Originally the project used Spring Boot, with Tomcat (Servlet 3.0) as the container and an Apache HttpClient pool as the client. Only those parts have been changed, to a Netty asynchronous server and a Netty asynchronous client; everything else remains the same (Spring is still used for bean management). The Netty server and client handlers are shared across all requests (the handlers are singleton Spring beans). With such small changes, I don't believe the number of new objects could have grown so much that it ends up at 1.35 GB of memory. [JMC heap screenshot]
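One general way to confirm what is filling Eden (a standard JVM technique, not something from the original post; `xx.jar` is the jar name mentioned in the question) is to enable GC logging at startup and watch the eden/survivor numbers around each collection:

```shell
# JDK 8: log every collection with generation sizes and timestamps
java -XX:+PrintGCDetails -XX:+PrintGCDateStamps -jar xx.jar

# JDK 9+: the unified-logging equivalent
java -Xlog:gc*:stdout:time -jar xx.jar
```

If every young collection frees almost all of Eden, the objects are short-lived garbage; if the old generation grows steadily instead, something is being retained.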

UPDATE: After running the Netty and Spring Boot versions of the project separately, I have more statistics:

  1. OS memory 8 GB, Spring Boot version: PS Old Generation: capacity = 195 MB; used = 47 MB (24% used). It has 692,971 objects with a total size of 41,848,384 bytes.
  2. OS memory 16 GB, Netty version: PS Old Generation: capacity = 488 MB; used = 327 MB (67% used). It has 1,243,432 objects with a total size of 221,427,824 bytes.

Netty version: the heap dump shows 279,255 instances of io.netty.buffer.PoolSubpage, compared with 7,222 instances of the second-most-common class, org.springframework.core.MethodClassKey. In both versions the number of service objects (our own classes) is limited, no more than 3,000.
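PoolSubpage is part of Netty's pooled ByteBuf allocator, so a very high instance count often points at buffers that are never released. A hedged first check (the system property is real Netty configuration; `xx.jar` is the jar name from the question) is to rerun with leak detection raised:

```shell
# Report ByteBufs that get garbage-collected without release() being called;
# "paranoid" samples every buffer and is for diagnosis only, not production
java -Dio.netty.leakDetection.level=paranoid -jar xx.jar
```

Any `LEAK:` lines in the log then include the access points where the unreleased buffer was last touched.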

I have tried running with -Xmx1024m on the 4 GB Linux machine, and it still causes the same out-of-memory problem.
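For the "which code takes up the memory" part of the question, a common JDK workflow (an assumption about tooling, not something the poster ran; `<pid>` is a placeholder for the process id) is a live class histogram followed by a full dump:

```shell
jps -l                                           # find the PID of the running jar
jmap -histo:live <pid> | head -n 20              # top classes by instance count and bytes
jmap -dump:live,format=b,file=heap.hprof <pid>   # full dump for Eclipse MAT or JMC
```

The `:live` option forces a full GC first, so only reachable objects are counted; opening `heap.hprof` in a heap analyzer shows who holds the references.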

Upvotes: 2

Views: 2298

Answers (1)

Stephen C

Reputation: 719229

The behavior you are seeing on Windows is normal GC behavior. The application generates garbage until it hits a threshold that causes the GC to run; the GC frees a lot of heap, and then the cycle starts again. The result is the sawtooth pattern in the heap occupancy.

This is normal. Every JVM behaves more or less like this.
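The sawtooth is easy to reproduce. The sketch below (a minimal illustration, not the poster's code) churns through short-lived allocations; Eden fills, a young-generation GC reclaims it, and the reported heap usage rises and falls in exactly the pattern JMC shows:

```java
// Minimal sketch: short-lived allocations produce a sawtooth heap profile.
public class SawtoothDemo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        for (int i = 0; i < 5; i++) {
            for (int j = 0; j < 100_000; j++) {
                // Immediately unreachable -> pure garbage filling Eden.
                byte[] garbage = new byte[1024];
            }
            long usedMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
            System.out.println("used heap ~" + usedMb + " MB");
        }
    }
}
```

Watching this run in JMC or with GC logging shows the same rise-and-drop cycles; the drops are collections, not the application releasing anything itself.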


The behavior on Linux looks like something is trying to allocate something large (77MB) in native memory, and failing because the OS is refusing to give the JVM that much memory. Typically that happens because the OS has run out of resources; e.g. physical RAM, swap space, etc.

> Windows 8G, Linux 4G.

That probably explains it. Your Linux system has only half the physical memory of the Windows system. If you are running netty with a large Java heap AND your Linux OS has not been configured with any swap space, then it is plausible that the JVM is using all of the available virtual memory. It could even be happening at JVM startup.

(If we assume that the max heap size has been set the same for both Windows and Linux, then on Windows there is at least 4.5GB of virtual address space available for other things. On Linux, only 0.5GB. And that 0.5GB has to hold all of the non-heap JVM utilization ... plus the OS and various other user-space processes. It is easy to see how you could have used all of that ... leading to the allocation failure.)

If my theory is correct, then the solution would be to change the JVM command line options to make -Xmx smaller.
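Concretely, that would look something like this (the 512 MB figure is an illustration to be tuned, not a value derived from measurement; `xx.jar` is the jar name from the question):

```shell
# Cap the heap well below the 4 GB of physical RAM, leaving headroom for
# the JVM's native allocations (metaspace, thread stacks, direct buffers).
java -Xmx512m -Xms256m -jar xx.jar

# Verify what maximum heap size the JVM actually selected
java -Xmx512m -XX:+PrintFlagsFinal -version | grep -i maxheapsize
```

Note that `-Xmx` caps only the Java heap; Netty's pooled buffers may live in direct (native) memory outside it, which is another reason total process memory can exceed the heap limit.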

(Or increase the available physical / virtual memory. But be careful with increasing the virtual memory by adding swap space. If the virtual/physical ratio is too large you can get virtual memory "thrashing" which can lead to terrible performance.)

Upvotes: 3
