We have a 4-node Hadoop cluster: 2 master nodes and 2 data nodes. After some time the data nodes start failing, and when we check the logs they always report that memory cannot be allocated.
ENV
HDP version: 2.3.6
HAWQ version: 2.0.0
OS: CentOS 6.0
The data nodes are crashing with the following error in their logs:
os::commit_memory(0x00007fec816ac000, 12288, 0) failed; error='Cannot allocate memory' (errno=12)
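errno=12 is ENOMEM: the kernel refused the JVM's request to commit more address space, which can happen under a strict overcommit policy even while free RAM remains. As a first check we read the overcommit sysctls; a minimal Python sketch (standard procfs paths, nothing cluster-specific assumed, percent formatting so it also runs on the Python 2.6 that ships with CentOS 6):

# Read the kernel's overcommit policy and ratio (see proc(5)):
#   overcommit_memory: 0 = heuristic, 1 = always allow, 2 = strict commit limit
#   overcommit_ratio:  % of physical RAM counted toward CommitLimit in mode 2
with open("/proc/sys/vm/overcommit_memory") as f:
    policy = int(f.read())
with open("/proc/sys/vm/overcommit_ratio") as f:
    ratio = int(f.read())
print("vm.overcommit_memory=%d, vm.overcommit_ratio=%d%%" % (policy, ratio))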
Memory info (from /proc/meminfo)
vm.overcommit_memory is set to 2 (strict overcommit); see the check after the listing below.
MemTotal: 30946088 kB
MemFree: 11252496 kB
Buffers: 496376 kB
Cached: 11938144 kB
SwapCached: 0 kB
Active: 15023232 kB
Inactive: 3116316 kB
Active(anon): 5709860 kB
Inactive(anon): 394092 kB
Active(file): 9313372 kB
Inactive(file): 2722224 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 15728636 kB
SwapFree: 15728636 kB
Dirty: 280 kB
Writeback: 0 kB
AnonPages: 5705052 kB
Mapped: 461876 kB
Shmem: 398936 kB
Slab: 803936 kB
SReclaimable: 692240 kB
SUnreclaim: 111696 kB
KernelStack: 33520 kB
PageTables: 342840 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 31201680 kB
Committed_AS: 26896520 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 73516 kB
VmallocChunk: 34359538628 kB
HardwareCorrupted: 0 kB
AnonHugePages: 2887680 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 6132 kB
DirectMap2M: 2091008 kB
DirectMap1G: 29360128 kB
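With vm.overcommit_memory=2 the kernel caps committed address space at CommitLimit = SwapTotal + MemTotal * overcommit_ratio / 100, and the numbers above match a ratio of 50% (15728636 + 30946088/2 = 31201680 kB). A small sketch, assuming the standard /proc/meminfo layout, that we use to watch the remaining commit headroom:

def read_meminfo():
    # Parse /proc/meminfo into {field: value}; values are kB for sized rows.
    fields = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, rest = line.split(":", 1)
            fields[key.strip()] = int(rest.split()[0])
    return fields

m = read_meminfo()
limit = m["CommitLimit"]       # kernel commit ceiling in strict mode
committed = m["Committed_AS"]  # address space currently committed
print("CommitLimit:  %d kB" % limit)
print("Committed_AS: %d kB" % committed)
print("Headroom:     %d kB" % (limit - committed))  # 4305160 kB in the snapshot above

In this snapshot Committed_AS (26896520 kB) is still below CommitLimit, but in strict mode any spike past the limit makes further mmap calls fail with errno=12 even though MemFree looks high.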