Reputation: 757
Recently I installed a Hadoop multi-node cluster on Ubuntu. Everything is up: the NameNode and SecondaryNameNode run on the master (named HadoopMaster), and there are two slaves (HadoopDataNode1, HadoopDataNode2).
The problem: when the start-dfs.sh and start-yarn.sh scripts are executed, all slave nodes start their normal daemons (DataNode and NodeManager) and everything appears to work, but when I check the report on HadoopMaster I get only one datanode, the one running on the master itself; the DataNodes from the other machines never show up. All log files look good, with no exceptions.
The result from dfsadmin -report:
Configured Capacity: 7791403008 (7.26 GB)
Present Capacity: 1433530368 (1.34 GB)
DFS Remaining: 1433505792 (1.34 GB)
DFS Used: 24576 (24 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
-------------------------------------------------
Datanodes available: 1 (1 total, 0 dead)
Live datanodes:
Name: 127.0.0.1:50010 (localhost)
Hostname: HadoopMaster
Decommission Status : Normal
Configured Capacity: 7791403008 (7.26 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 6357872640 (5.92 GB)
DFS Remaining: 1433505792 (1.34 GB)
DFS Used%: 0.00%
DFS Remaining%: 18.40%
I found in the datanode logs that all datanodes are trying to connect to HadoopMaster:9000 and cannot connect:
2014-09-16 04:06:32,721 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: HadoopMaster/192.168.16.80:9000. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
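To double-check that connection by hand, one can probe the NameNode RPC port from a datanode (assuming nc is installed there; this is a generic check, not something from my logs):

nc -vz HadoopMaster 9000
getent hosts HadoopMaster

The getent line shows which IP the datanode resolves HadoopMaster to; per the log above, they resolve it to 192.168.16.80.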
But the NameNode itself is working fine. The result of jps on HadoopMaster:
21655 SecondaryNameNode
22467 Jps
21514 DataNode
21376 NameNode
21809 ResourceManager
I also checked that the HDFS port is open on HadoopMaster:
tcp 0 0 HadoopMaster:9000 *:* LISTEN 21376/java
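Note that netstat has resolved the bound address back to a hostname here. Since the dfsadmin report above shows the only live datanode registered as 127.0.0.1, it is worth repeating the check in numeric form (a generic diagnostic, not output from my machine):

netstat -tlpn | grep 9000

If that prints 127.0.0.1:9000 instead of 192.168.16.80:9000, the NameNode is listening only on the loopback interface, and the remote datanodes can never reach it.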
All datanodes can access HadoopMaster via passwordless SSH login.
Any suggestions, please?
Upvotes: 0
Views: 1816
Reputation:
It seems there is some configuration issue: jps on the master should not normally show a DataNode running. If you have specifically added the master node to the slaves file on the master so that it also behaves as a slave, then jps should have shown a NodeManager as well.
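For example, a slaves file on the master that deliberately runs a DataNode on the master itself would look like this (hostnames taken from the question; a sketch, not your actual file):

HadoopMaster
HadoopDataNode1
HadoopDataNode2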
Please cross-check the following files:
/etc/hosts
core-site.xml
hdfs-site.xml
yarn-site.xml
on all nodes, and the slaves file on the master node.
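As a sketch of what to look for: /etc/hosts on every node should map the cluster hostnames to their real LAN addresses, and must not map HadoopMaster to a loopback address (Ubuntu adds a 127.0.1.1 <hostname> line by default, which makes the NameNode bind to localhost and produces exactly the 127.0.0.1 registration seen in the report). The datanode IPs below are placeholders; only 192.168.16.80 appears in the question:

127.0.0.1      localhost
192.168.16.80  HadoopMaster
192.168.16.81  HadoopDataNode1
192.168.16.82  HadoopDataNode2

And core-site.xml on all nodes should point at that hostname rather than localhost:

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://HadoopMaster:9000</value>
</property>

(fs.defaultFS is the Hadoop 2.x property name; older releases use fs.default.name.)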
Upvotes: 2