Vinnie

Reputation: 302

JVM memory inconsistencies on Heroku

I have an app hosted on Heroku on a single dyno with 1GB of RAM, and I'm observing odd memory behavior. When the app is under load, the total memory consumed on the dyno keeps climbing (which screams memory leak, but see below) and never comes back down when garbage collection runs. My JVM graphs show heap space being reclaimed regularly, but I never see a corresponding drop in total memory usage; it only ever increases.

See the graphs below:

Total memory usage

Heap and non-heap usage

I have profiled a heap dump using Eclipse MAT and did not find anything telling. Also, I have added parameters to the JVM as described here to bound the JVM's target memory consumption to the container rather than the underlying host.
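For context, the flags are along these lines (the exact flags and values here are illustrative, not a quote from that article):

    # Illustrative only: explicitly cap the heap below the 1GB dyno limit
    # so the JVM sizes itself to the container rather than the host.
    heroku config:set JAVA_TOOL_OPTIONS="-Xmx512m -Xss512k"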

If anyone can point me in the right direction as to why there is an inconsistency between the dyno memory reported by Heroku and what I'm seeing on the heap and non-heap space graphs for the JVM, it would be greatly appreciated.

Upvotes: 1

Views: 324

Answers (2)

Stephen C

Reputation: 719189

There is possibly a memory leak in your Java code, but the evidence is not conclusive.

But as noted, the external (dyno-level) reporting of memory usage is bound to differ from the internal (JVM heap) reporting:

  1. The JVM tends not to give memory back to the OS after a GC run. It keeps the freed space so that it can allocate new objects into it.

  2. The JVM uses memory that is not part of the regular heap (see the sketch after this list).

  3. The memory usage of other processes on the dyno will be included in the dyno-level reporting (obviously).
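As a rough way to see points 1 and 2 for yourself, you can log the JVM's own view of committed heap and non-heap memory and compare it with the dyno figure. This is a minimal sketch using the standard MemoryMXBean; nothing in it is Heroku-specific:

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import java.lang.management.MemoryUsage;

    public class MemorySnapshot {
        public static void main(String[] args) {
            MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
            MemoryUsage heap = mem.getHeapMemoryUsage();
            MemoryUsage nonHeap = mem.getNonHeapMemoryUsage();

            // "committed" is what the JVM has actually claimed from the OS;
            // it is usually much closer to the dyno's RSS than "used" is.
            System.out.printf("heap:     used=%dMB committed=%dMB max=%dMB%n",
                    toMb(heap.getUsed()), toMb(heap.getCommitted()), toMb(heap.getMax()));
            System.out.printf("non-heap: used=%dMB committed=%dMB%n",
                    toMb(nonHeap.getUsed()), toMb(nonHeap.getCommitted()));
        }

        private static long toMb(long bytes) {
            return bytes / (1024 * 1024);
        }
    }

The gap between "committed" and "used", plus the non-heap figure, will typically account for a large part of the difference between your JVM graphs and the dyno graph.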

Now if your JVM memory graph showed that the level of the bottom of the troughs was consistently increasing over time, that would be a strong sign of a (probable) memory leak, especially if the peak heap usage was regularly approaching your configured max heap. However:

  • The graphs don't show enough data to draw any conclusions
  • If the heap isn't getting full, the GC won't get around to clearing soft/weak references, so GC-aware caches may just be filling up. (Which is probably good, not bad.)

Finally, if I am reading the graphs correctly, you have ~1GB of RAM free and you are running the JVM with a 1GB max heap. That is asking for trouble.

(If your JVM causes virtual memory thrashing, it is liable to be killed by the operating system's OOM killer. Or worse ... it could conceivably kill some other process that is more important.)

Upvotes: 1

Baran Bursalı

Reputation: 360

Garbage collection happens at the JVM level (it marks memory as available for other objects), but the first graph (memory usage) is at the OS level. The JVM/container doesn't have to release that memory back to the OS after it has been garbage collected. So total usage may not decrease when heap usage drops, but if the heap were never collected, the dyno would eventually break its memory limit.

At 10:30AM, your heap got bigger and your OS-level usage increased accordingly; the JVM doesn't give back what it took.

Also, you can set your Xms and Xmx to the same value, which is usually more efficient. Just don't forget that your limit is not just the heap size; there are other things too, such as thread stacks, Metaspace, and other off-heap areas.
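For example, on a 1GB dyno one might fix the heap and still leave headroom for the non-heap areas along these lines (the numbers are illustrative, not a recommendation, and app.jar is just a placeholder):

    # Illustrative only: fixed heap size, leaving roughly half the 1GB dyno
    # for Metaspace, thread stacks, GC structures, and other off-heap memory.
    java -Xms512m -Xmx512m -XX:MaxMetaspaceSize=128m -jar app.jar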

Resident Memory (memory_rss): The portion of the dyno’s memory (megabytes) held in RAM. https://devcenter.heroku.com/articles/log-runtime-metrics#memory-swap

Upvotes: 0
