Reputation: 361
My code crashes with this error message:
Executing "/usr/bin/java com.utils.BotFilter"
OpenJDK 64-Bit Server VM warning: INFO:
os::commit_memory(0x0000000357c80000, 2712666112, 0) failed;
error='Cannot allocate memory' (errno=12)
There is insufficient memory for the Java Runtime Environment to continue. Native memory allocation (malloc) failed to allocate 2712666112 bytes for committing reserved memory. An error report file with more information is saved as: /tmp/jvm-29955/hs_error.log
Here is the content of the generated hs_error.log file:
This line from the crash log seems interesting to me:
Memory: 4k page, physical 98823196k(691424k free), swap 1048572k(0k free)
Does it mean that the machine has memory but is running out of swap space?
Here is the meminfo section from the crash log, but I don't really know how to interpret it. For example, what is the difference between MemFree and MemAvailable? And how much memory is this process taking?
/proc/meminfo:
MemTotal: 98823196 kB
MemFree: 691424 kB
MemAvailable: 2204348 kB
Buffers: 145568 kB
Cached: 2799624 kB
SwapCached: 304368 kB
Active: 81524540 kB
Inactive: 14120408 kB
Active(anon): 80936988 kB
Inactive(anon): 13139448 kB
Active(file): 587552 kB
Inactive(file): 980960 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 1048572 kB
SwapFree: 0 kB
Dirty: 1332 kB
Writeback: 0 kB
AnonPages: 92395828 kB
Mapped: 120980 kB
Shmem: 1376052 kB
Slab: 594476 kB
SReclaimable: 282296 kB
SUnreclaim: 312180 kB
KernelStack: 317648 kB
PageTables: 238412 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 50460168 kB
Committed_AS: 114163748 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 314408 kB
VmallocChunk: 34308158464 kB
HardwareCorrupted: 0 kB
AnonHugePages: 50071552 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 116924 kB
DirectMap2M: 5115904 kB
DirectMap1G: 95420416 kB
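[Editor's note] On the MemFree vs. MemAvailable question: MemFree counts pages that are completely unused right now, while MemAvailable is the kernel's estimate of how much memory could be given to a new process without swapping (it also counts reclaimable page cache and slab). A minimal sketch using the kB values from the dump above:

```shell
# MemFree vs MemAvailable, with the kB values from the meminfo dump above.
mem_free=691424        # pages that are entirely unused right now
mem_available=2204348  # estimate incl. reclaimable page cache and slab
echo "reclaimable beyond MemFree: $(( mem_available - mem_free )) kB"
```

So roughly 1.5 GB of cache/slab could still be reclaimed, but that is nowhere near the ~2.7 GB the JVM asked for.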
Upvotes: 19
Views: 67860
Reputation: 1580
Another possibility (which I encountered just now) would be bad settings for "overcommit memory" on Linux.
In my situation, /proc/sys/vm/overcommit_memory was set to "2" and /proc/sys/vm/overcommit_ratio to "50", meaning "never overcommit, and only allow committing swap plus 50% of physical RAM".
That's a pretty deceptive problem, since there can be a lot of memory available, but allocations still fail for apparently no reason.
The settings can be changed to the default (overcommit in a sensible way) for now (until a restart):
echo 0 >/proc/sys/vm/overcommit_memory
... or permanently:
echo "vm.overcommit_memory=0" >> /etc/sysctl.conf
sysctl -p /etc/sysctl.conf # apply it immediately
Note: this can also partly be diagnosed by looking at the output of /proc/meminfo:
...
CommitLimit: 45329388 kB
Committed_AS: 44818080 kB
...
In the example in the question, Committed_AS is much higher than CommitLimit, indicating (together with the fact that allocations fail) that overcommit is enabled, while here both values are close together, meaning that the limit is strictly enforced.
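With vm.overcommit_memory=2, the kernel computes CommitLimit = SwapTotal + MemTotal × overcommit_ratio / 100. A quick sanity check with the numbers from the question's meminfo (assuming a ratio of 50, as in my case):

```shell
# CommitLimit under vm.overcommit_memory=2:
#   CommitLimit = SwapTotal + MemTotal * overcommit_ratio / 100
mem_total=98823196   # kB, MemTotal from the question's meminfo
swap_total=1048572   # kB, SwapTotal
ratio=50             # assumed vm.overcommit_ratio
echo "CommitLimit = $(( swap_total + mem_total * ratio / 100 )) kB"
```

This yields 50460170 kB, matching the question's reported CommitLimit of 50460168 kB to within a couple of kB of kernel page rounding.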
An excellent detailed explanation of these settings and their effect (as well as when it makes sense to modify them) can be found in this Pivotal blog entry. (TL;DR: messing with overcommit is useful if you don't want critical processes to use swap.)
Upvotes: 5
Reputation: 51
As Scary Wombat mentions, the JVM is trying to allocate 2712666112 bytes (~2.7 GB) of memory, while you only have 691424 kB (~0.7 GB) of free physical memory and nothing available in swap.
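The shortfall is easiest to see with both numbers in the same unit; a small sketch using the values from the error message and the crash log:

```shell
requested_bytes=2712666112   # what the JVM tried to commit (error message)
free_kb=691424               # MemFree from the crash log, in kB
echo "requested: $(( requested_bytes / 1024 / 1024 )) MiB"
echo "free RAM:  $(( free_kb / 1024 )) MiB"
```

Roughly 2587 MiB requested against 675 MiB free, with zero free swap, so the commit cannot succeed.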
Upvotes: 5
Reputation: 44834
Possible solutions:
Upvotes: 11