Lovish saini

Reputation: 117

Hadoop job keeps running and no container is allocated

I tried running a MapReduce job on Hadoop 2.8.5, but it keeps running without progressing. The application state is: YarnApplicationState: ACCEPTED: waiting for AM container to be allocated, launched and register with RM.

RM web UI: (screenshot)

The node health report says: 1/1 local-dirs are bad: /home/hduser/hadooptmpdata/nm-local-dir; 1/1 log-dirs are bad: /home/hduser/hadoop-2.8.5/logs/userlogs


core-site.xml

<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/hduser/hadooptmpdata</value>
</property>
</configuration>

hdfs-site.xml

<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.name.dir</name>
<value>file:///home/hduser/hdfs/namenode</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>file:///home/hduser/hdfs/datanode</value>
</property>
</configuration>

yarn-site.xml

<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>

<property>
<name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
<value>100</value>
</property>

<property>
<name>yarn.nodemanager.resource.cpu-vcores</name>
<value>3</value>
</property>     

<property>
<name>yarn.scheduler.minimum-allocation-vcores</name>
<value>1</value>
</property>

<property>
<name>yarn.scheduler.maximum-allocation-vcores</name>
<value>3</value>
</property>

<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>4096</value>
</property>

<property>
<name>yarn.scheduler.minimum-allocation-mb</name>
<value>2048</value>
</property>

<property>
<name>yarn.scheduler.maximum-allocation-mb</name>
<value>4096</value>
</property>

<property>
<name>yarn.nodemanager.local-dirs</name>
<value>/home/hduser/hadooptmpdata/nm-local-dir</value>
</property>
</configuration>

mapred-site.xml

<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.map.memory.mb</name>
<value>2048</value>
</property>
<property>
<name>mapreduce.map.cpu.vcores</name>
<value>2</value>
</property>
<property>
<name>mapreduce.reduce.memory.mb</name>
<value>2048</value>
</property>
<property>
<name>mapreduce.reduce.cpu.vcores</name>
<value>2</value>
</property>
<property>
<name>mapreduce.cluster.local.dir</name>
<value>/home/user/hduser/hadooptmpdata/mapred/local</value>
</property>
</configuration>

I am running Hadoop on Ubuntu; my PC has an Intel i7 processor, 16 GB of RAM, and a 256 GB SSD.

Upvotes: 0

Views: 1643

Answers (1)

tk421

Reputation: 5967

YARN's ResourceManager needs compute resources from the NodeManager(s) in order to run anything. Your NodeManager reports that its local directory is bad, which means no compute resources are available (you can verify this in your cluster metrics: note all the zeros). That is why your application is stuck in the ACCEPTED state.
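You can also confirm the unhealthy state from the command line with the `yarn node` CLI, which lists each NodeManager and its health report (substitute your own node id in the second command):

```shell
# List all NodeManagers, including unhealthy ones, with their state.
yarn node -list -all

# Show the detailed status (including the health report) for one node.
yarn node -status <node-id>
```

An unhealthy node shows up with state UNHEALTHY and the same "local-dirs are bad" message from the RM web UI.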


Fix your yarn.nodemanager.local-dirs directory and make sure the user running YARN has full permissions on it, then it can proceed.
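Assuming the paths from the question's configs and that the daemons run as the hduser user (adjust the user/group if yours differ), a fix might look like this:

```shell
# Recreate the directories the NodeManager health check flagged as bad.
# Paths come from the question's config; hduser:hduser is an assumption.
sudo mkdir -p /home/hduser/hadooptmpdata/nm-local-dir
sudo mkdir -p /home/hduser/hadoop-2.8.5/logs/userlogs

# Give the user that runs the NodeManager ownership of both trees.
sudo chown -R hduser:hduser /home/hduser/hadooptmpdata /home/hduser/hadoop-2.8.5/logs
chmod -R 750 /home/hduser/hadooptmpdata/nm-local-dir
```

After that, restart YARN (stop-yarn.sh, then start-yarn.sh) and check that the node reports as healthy before resubmitting the job.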

Upvotes: 2
