kww

Reputation: 541

Hadoop container is running beyond physical memory limits

While running a Hadoop task, I got the following error:

Container [pid=12850,containerID=container_1489504424139_0638_01_201123] is running beyond physical memory limits. Current usage: 4.0 GB of 4 GB physical memory used; 8.8 GB of 8.4 GB virtual memory used. Killing container.

I searched Stack Overflow and found several related pages (Link1, Link2), but they did not help; I still get the error. My current mapred-site.xml file is the following:

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx3072m</value>
  </property>
  <property>
    <name>mapreduce.reduce.java.opts</name>
    <value>-Xmx6144m</value>
  </property>
  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>4096</value>
  </property>
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>8192</value>
  </property>
  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx2048m</value>
  </property>
</configuration>
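
If I understand the docs correctly, mapreduce.map.memory.mb is the physical limit YARN enforces on the whole map container, while mapreduce.map.java.opts only caps the JVM heap inside it, so native (C++) allocations count against the container limit but not the heap. A sketch of the kind of change I believe is usually suggested, assuming the cluster's yarn.scheduler.maximum-allocation-mb permits the larger container:

<property>
  <name>mapreduce.map.memory.mb</name>
  <value>6144</value>                <!-- whole-container limit enforced by YARN -->
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx4096m</value>           <!-- JVM heap only; leaves ~2 GB headroom for native memory -->
</property>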

Thanks!

Upvotes: 0

Views: 1614

Answers (2)

kww
kww

Reputation: 541

I tried changing the XML files, but that did not help. Later I found that if I make my Python code (it builds objects through a Java API that in turn depends on a C++ API) more memory-friendly, the problem goes away: once an object is no longer needed, I release it explicitly rather than waiting for it to fall out of scope.
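
Roughly what I ended up doing, as a sketch (NativeBacked and its close() method are hypothetical stand-ins for my actual Java-API objects, not a real API):

import gc

class NativeBacked:
    """Hypothetical stand-in for an object built through the Java API over C++."""
    def __init__(self, payload):
        # Imagine this allocation pins native (off-heap) memory.
        self.payload = payload

    def close(self):
        # Explicitly drop the native handle instead of waiting for GC.
        self.payload = None

def process(records):
    for rec in records:
        obj = NativeBacked(rec)
        try:
            pass  # ... do the real work with obj here ...
        finally:
            obj.close()   # release native memory deterministically
            del obj
    gc.collect()          # nudge the collector to reclaim anything cyclic

process(range(10))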

Upvotes: 0

K S Nidhin
K S Nidhin

Reputation: 2650

Try using these properties (a sketch follows below):

mapreduce.map.output.compress
mapreduce.map.output.compress.codec

Or change the memory allocation properties:

mapreduce.map.memory.mb
mapreduce.reduce.memory.mb
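
For instance, a sketch of the compression option in mapred-site.xml (SnappyCodec is just one common choice; use any codec installed on your cluster):

<property>
  <name>mapreduce.map.output.compress</name>
  <value>true</value>    <!-- compress intermediate map output -->
</property>
<property>
  <name>mapreduce.map.output.compress.codec</name>
  <value>org.apache.hadoop.io.compress.SnappyCodec</value>    <!-- fast, low CPU overhead -->
</property>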

Upvotes: 1
