Reputation: 9865
I spun up 30 AWS machines.
When I check the YARN UI at the master node's IP on port 8088 and click on "Nodes", I can see the following:
When I navigate to the Spark master UI at port 18080, PySpark tells me "Alive Workers: 30" at the top of the page.
I restarted all of the services on the master node and the slaves, but the same thing keeps happening.
How do I get YARN to recognize all of the nodes?
Upvotes: 1
Views: 723
Reputation: 71
Check your datanodes with the command below on your namenode:
sudo yarn node -list -all
If you can't find all 30 nodes, run the command below on each missing datanode:
sudo service hadoop-yarn-nodemanager start
then run the command below on your namenode:
sudo service hadoop-yarn-resourcemanager restart
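If several workers are missing their NodeManager, a loop like the one below (a sketch assuming passwordless SSH and hypothetical worker hostnames) can start it on each of them in one go:
# hypothetical hostnames - replace with your missing workers
for host in worker-07 worker-18; do
    ssh "$host" sudo service hadoop-yarn-nodemanager start
done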
Or, check /etc/hadoop/conf/slaves on your namenode. It should list every worker node.
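For example, a correct slaves file for two workers (hypothetical hostnames, matching the /etc/hosts example further down) would look like:
test1
test2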
Also check the setting below in /etc/hadoop/conf/yarn-site.xml on all of your nodes:
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>your namenode name</value>
</property>
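To confirm that every node points at the same ResourceManager, a quick check (a sketch assuming passwordless SSH and the same hypothetical hostnames) could be:
for host in test1 test2; do
    ssh "$host" grep -A 1 yarn.resourcemanager.hostname /etc/hadoop/conf/yarn-site.xml
done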
Or, write all of your nodes' names and IP addresses in every node's /etc/hosts,
for example,
127.0.0.1 localhost.localdomain localhost
192.168.1.10 test1
192.168.1.20 test2
and then run the command:
/etc/rc.d/init.d/network reload
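Once the network settings are reloaded, running sudo yarn node -list -all on the namenode again should show all 30 nodes.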
Upvotes: 1