xiadc

Reputation: 23

In Hadoop 2.6.0, container is killed for running beyond virtual memory limits

I am trying to run jcuda code on Hadoop. It works in local mode, but when I run the job on the Hadoop cluster it fails with an error saying the container was killed. Here is the specific error report:

16/04/29 10:18:07 INFO mapreduce.Job: Task Id : attempt_1461835313661_0014_r_000009_2, Status : FAILED Container [pid=19894,containerID=container_1461835313661_0014_01_000021] is running beyond virtual memory limits. Current usage: 197.5 MB of 1 GB physical memory used; 20.9 GB of 2.1 GB virtual memory used. Killing container.
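(For context: the 2.1 GB virtual limit in the log is presumably the default yarn.nodemanager.vmem-pmem-ratio of 2.1 applied to the container's 1 GB physical allocation, i.e. 1 GB × 2.1 = 2.1 GB of allowed virtual memory.)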

The input data is only 200 MB, but the job asks for 20.9 GB of virtual memory, and I don't know why. I have tried to increase the virtual memory allowance with the following configuration in yarn-site.xml:

<property>
   <name>yarn.nodemanager.vmem-pmem-ratio</name>
   <value>12</value>
</property>

<property>
   <name>yarn.nodemanager.pmem-check-enabled</name>
   <value>false</value>
</property>

<property>
   <name>yarn.nodemanager.vmem-check-enabled</name>
   <value>false</value>
</property>

It is still not working, and I don't know how to solve it. I'm sorry for my poor English.

Upvotes: 0

Views: 300

Answers (1)

Bijoy

Reputation: 113

Please check the following parameters and set them if they are not already set to the values below (a sketch of the corresponding XML follows):

In mapred-site.xml:

mapreduce.map.memory.mb: 4096

mapreduce.reduce.memory.mb: 8192

mapreduce.map.java.opts: -Xmx3072m

mapreduce.reduce.java.opts: -Xmx6144m
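
For reference, a minimal sketch of those settings as mapred-site.xml entries (values as suggested above; tune them to your cluster's actual capacity):

<property>
   <name>mapreduce.map.memory.mb</name>
   <value>4096</value>
</property>

<property>
   <name>mapreduce.reduce.memory.mb</name>
   <value>8192</value>
</property>

<property>
   <name>mapreduce.map.java.opts</name>
   <value>-Xmx3072m</value>
</property>

<property>
   <name>mapreduce.reduce.java.opts</name>
   <value>-Xmx6144m</value>
</property>

Note that each -Xmx heap size is deliberately smaller than the corresponding memory.mb value, so the JVM heap plus off-heap overhead fits inside the container's physical allocation.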

Hope this solves your issue.

Upvotes: 0
