Reputation: 4276
I set up Hadoop 2.6.0 with 1 master and 2 slaves according to How to install Apache Hadoop 2.6.0 in Ubuntu (Multi node/Cluster setup). Afterwards I checked jps on the master and slaves, and everything looked good: NameNode, SecondaryNameNode, ResourceManager on the master; and DataNode, NodeManager on the slaves. But when I browsed to hadoopmaster:8088, there were 0 active nodes. Also, when I run
hadoop fs -put ~/h-localdata/* /input/
It showed this error:
put: File /input-01/h-localdata/README.txt._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
Please help me solve this!
Upvotes: 1
Views: 5184
Reputation: 11
I solved this issue by disabling the firewall on my master and slave (both on CentOS 7) as follows:
systemctl stop firewalld.service
systemctl disable firewalld.service
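If you prefer not to disable the firewall completely, opening just the Hadoop ports should also work. A rough sketch (port 9000 is the NameNode RPC port from this setup; 50010 is the default datanode transfer port; adjust both to match your configuration):

# open only the required ports instead of stopping firewalld
firewall-cmd --permanent --add-port=9000/tcp
firewall-cmd --permanent --add-port=50010/tcp
firewall-cmd --reload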
Upvotes: 1
Reputation: 31
Make sure that the datanodes point to the correct master. Run hdfs dfsadmin -report to check the status of the cluster.
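As a quick sketch (assuming the configuration lives in the usual $HADOOP_HOME/etc/hadoop directory), you can verify the address on each slave and then pull the cluster report from the master:

# on each slave: fs.defaultFS must point at the master, not localhost
grep -A1 'fs.defaultFS' $HADOOP_HOME/etc/hadoop/core-site.xml

# on the master: the report should list every live datanode
hdfs dfsadmin -report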
Upvotes: 1
Reputation: 1
For me it was an entry in /etc/hosts that forced the Hadoop master node to listen on the loopback adapter only, so the clients could not reach it.
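As a sketch of what to look for (hostnames and addresses here mirror the question's setup), the usual culprit is a 127.0.1.1 line that maps the master's hostname to loopback; it should resolve to the real LAN address instead:

# problematic: NameNode binds to loopback, slaves cannot connect
127.0.1.1       hadoopmaster

# corrected: use the real cluster address
192.168.10.52   hadoopmaster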
Upvotes: 0
Reputation: 4276
I checked the log files on the slaves, and they pointed out:
WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting to server: hadoop.master/192.168.10.52:9000
I have read many suggestions, most of them related to the /etc/hosts file, but that was not my case. I disabled the firewall on the master (CentOS 6.5) as follows:
# service iptables save
# service iptables stop
# chkconfig iptables off
This worked perfectly. Hope it helps someone else.
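If you want to confirm the port is reachable before restarting the datanodes, a quick check from a slave (a sketch, assuming telnet is installed) is enough:

# should connect instead of timing out once the firewall is off
telnet hadoop.master 9000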
Upvotes: 1
Reputation: 82
Try deleting the files in the "temp", "datanode", and "namenode" folders, format the namenode, and then try again.
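Roughly like this; note that it wipes all HDFS data, and the directories below are placeholders only, so use whatever your hdfs-site.xml and core-site.xml actually point to:

stop-dfs.sh
# remove the HDFS storage directories (example paths)
rm -rf /usr/local/hadoop/tmp/*
rm -rf /usr/local/hadoop/hdfs/namenode/*
rm -rf /usr/local/hadoop/hdfs/datanode/*
# reformat the namenode and restart HDFS
hdfs namenode -format
start-dfs.sh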
Upvotes: 0