Spring Boot JVM: too many parameters or just enough?

I'm trying to tune production JVMs for Spring Boot microservices, and so far I've come up with this list:

-XX:+UnlockExperimentalVMOptions 
-XX:+UseCGroupMemoryLimitForHeap 
-XX:MaxRAMFraction=2
-XX:+UseStringDeduplication 
-XX:+PrintStringDeduplicationStatistics 
-XX:+CrashOnOutOfMemoryError 
-XX:+HeapDumpOnOutOfMemoryError 
-XX:+UseG1GC 
-XX:+PrintGCDetails 
-XX:+PrintGCDateStamps 
-Xloggc:/tmp/gc.log 
-XX:+UseGCLogFileRotation 
-XX:NumberOfGCLogFiles=5 
-XX:GCLogFileSize=2000k 
-XX:HeapDumpPath='/var/log/heap_dump.log' 
-XX:+UseGCOverheadLimit
-XX:NativeMemoryTracking=summary
-XX:+UnlockDiagnosticVMOptions 
-XX:+PrintNMTStatistics

What do you think? As far as I can tell, none of them duplicate each other's functionality, but I'm still not 100% sure whether this is enough, or whether I could add or remove some of them without worrying about losing information.

My aim is to get as much information as I can about what is happening in the JVM, and to tune memory/GC performance to avoid OOM. The app is running on AWS in Docker.

Some details: JDK 1.8u152, Spring Boot 1.5.1.
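
For reference, a minimal sketch of how the effective heap limit could be verified at startup inside the container (the class name is illustrative, not part of the app):

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;

    // Sketch only: print the heap limit the JVM actually settled on, so the
    // -XX:+UseCGroupMemoryLimitForHeap / -XX:MaxRAMFraction=2 combination can be
    // checked against the Docker memory limit rather than the host's RAM.
    public class HeapLimitLogger {
        public static void main(String[] args) {
            long maxHeapMiB = Runtime.getRuntime().maxMemory() / (1024 * 1024);
            MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
            System.out.println("Max heap: " + maxHeapMiB + " MiB");
            System.out.println("Heap usage at startup: " + memory.getHeapMemoryUsage());
        }
    }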

Upvotes: 1

Views: 784

Answers (1)

Peter Lawrey

Reputation: 533442

Currently I'm dealing with situations where the app works for just a few seconds or even minutes and then dies, without so much as a simple log entry or an error in Dropwizard Metrics

A process shouldn't just die; it should leave either an exception or a crash dump. If your machine is overloaded, Linux may kill your process to protect the system. If this happens, it should be logged in /var/log/messages

See https://unix.stackexchange.com/questions/136291/will-linux-start-killing-my-processes-without-asking-me-if-memory-gets-short

If your program is calling System.exit(int) unexpectedly, your SecurityManager should be preventing it, or at least logging it.
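
For example, a minimal sketch of a SecurityManager that logs (or blocks) exit calls, assuming you install it early in your main():

    import java.security.Permission;

    // Sketch: log unexpected System.exit() calls, and optionally block them.
    public class ExitLoggingSecurityManager extends SecurityManager {
        @Override
        public void checkExit(int status) {
            // Record the call site so the source of the exit is visible in the logs.
            new Throwable("System.exit(" + status + ") called from:").printStackTrace();
            // To block the exit entirely, throw instead:
            // throw new SecurityException("System.exit(" + status + ") not allowed");
        }

        @Override
        public void checkPermission(Permission perm) {
            // Allow everything else; only exit calls are of interest here.
        }
    }

    // Install it early, e.g. as the first line of main():
    // System.setSecurityManager(new ExitLoggingSecurityManager());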

So the goal is to have a memory-optimized, fully logged JVM.

Unfortunately, many of the logs you mention are buffered, so if the process is killed you are likely to lose the last few entries, possibly the last few minutes of logging. These logs are useful for diagnosing performance issues, but might not help you determine why the process dies unexpectedly.
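
As a partial mitigation, here is a sketch of a shutdown hook that flushes what it can on an orderly shutdown; note it will not run if the process receives SIGKILL (e.g. from the OOM killer):

    // Sketch: flush buffered output on an orderly JVM shutdown.
    // This does NOT run when the process is killed with SIGKILL.
    Runtime.getRuntime().addShutdownHook(new Thread(() -> {
        System.out.println("JVM shutting down, flushing logs...");
        System.out.flush();
        // also flush/stop your logging framework here if it buffers output
    }));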

tune memory/gc performance to avoid oom

This is a different sort of problem. You need to try:

  • giving the process a lot more memory and seeing at what point it stops dying.
  • if this works, your process simply needs more memory; if it continues to consume more memory over time, you might have a memory leak.
  • if you keep getting OutOfMemoryErrors in the same places in the code, that is most likely where too much memory is being consumed.
  • most likely, the process doesn't have enough memory for the tasks it is performing. In that case, look at the memory profile with a profiler such as Flight Recorder to see if you can reduce how much is used (see the sketch after this list). At some point you either solve the problem by reducing usage, or have to give the process more memory.
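
For example, a crude sketch of a heap-growth logger (a profiler or Flight Recorder gives far more detail; this just shows whether used heap trends upward over time):

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    // Sketch: log used heap every 30 seconds. Steadily growing usage suggests a leak;
    // flat usage with sudden spikes suggests individual tasks that need too much memory.
    public class HeapUsageLogger {
        public static void start() {
            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor(r -> {
                Thread t = new Thread(r, "heap-usage-logger");
                t.setDaemon(true);
                return t;
            });
            scheduler.scheduleAtFixedRate(() -> {
                Runtime rt = Runtime.getRuntime();
                long usedMiB = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
                System.out.println("Used heap: " + usedMiB + " MiB");
            }, 0, 30, TimeUnit.SECONDS);
        }
    }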

Given that memory is cheap and your time is not, it might be simpler to just increase the memory; keep in mind how much memory you can buy for a day of your time.

Upvotes: 1
