CoderOnly

Reputation: 1740

CDH 4.3: exceptions in the logs after ./start-dfs.sh; datanode and namenode fail to start

Here are the logs from hadoop-datanode-...log:

FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for block pool Block pool BP-1421227885-192.168.2.14-1371135284949 (storage id DS-30209445-192.168.2.41-50010-1371109358645) service to /192.168.2.8:8020
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException): Datanode denied communication with namenode: DatanodeRegistration(0.0.0.0, storageID=DS-30209445-192.168.2.41-50010-1371109358645, infoPort=50075, ipcPort=50020, storageInfo=lv=-40;cid=CID-f16e4a3e-4776-4893-9f43-b04d8dc651c9;nsid=1710848135;c=0)
    at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.java:648)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:3498)

My mistake: the namenode can start, but the datanode can't.

Upvotes: 2

Views: 3780

Answers (3)

shamouda

Reputation: 81

A reverse DNS lookup is required when a datanode tries to register with a namenode. I got the same exception with Hadoop 2.6.0 because my DNS did not allow reverse lookups.

But you can disable Hadoop's reverse-lookup check by setting the configuration property dfs.namenode.datanode.registration.ip-hostname-check to false in hdfs-site.xml.
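A minimal sketch of that setting in the namenode's hdfs-site.xml (this goes inside your existing <configuration> element):

    <property>
      <!-- Skip the reverse-DNS check the namenode performs on registering datanodes -->
      <name>dfs.namenode.datanode.registration.ip-hostname-check</name>
      <value>false</value>
    </property>

You'll typically need to restart the namenode for the change to take effect.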

I got this solution from here and it solved my problem.

Upvotes: 0

osscalvin

Reputation: 101

I saw this once too. The namenode has to do a reverse DNS lookup when a datanode registers, so nslookup 192.168.2.41 should return a name. In your case it doesn't, which is why 0.0.0.0 is recorded instead.
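You can verify this from the namenode host; both of these query the PTR record for the datanode's address:

    nslookup 192.168.2.41
    # or equivalently:
    dig -x 192.168.2.41 +short

If neither returns a hostname, reverse DNS is not set up for that address.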

You don't need to hardcode addresses into /etc/hosts if you have DNS working correctly (i.e. the in-addr.arpa zone file matches the entries in the forward zone file). But if you don't have DNS, then you need to help Hadoop out; see the sketch below.
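For example, /etc/hosts entries on every node might look like this (the hostnames here are made up; substitute your cluster's real names):

    # Hypothetical hostnames for the two nodes seen in the logs
    192.168.2.8    namenode.cluster.local    namenode
    192.168.2.41   datanode1.cluster.local   datanode1

With the usual nsswitch.conf ordering ("files dns"), the resolver answers reverse lookups from /etc/hosts too, so this gives the namenode a name for 192.168.2.41.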

Upvotes: 9

Tariq

Reputation: 34184

There seems to be a name resolution issue.

Datanode denied communication with namenode: DatanodeRegistration(0.0.0.0, storageID=DS-30209445-192.168.2.41-50010-1371109358645, infoPort=50075, ipcPort=50020,

Here the DataNode is identifying itself as 0.0.0.0.

It also looks like dfs.hosts enforcement. Can you recheck your NameNode's hdfs-site.xml configs to make sure you are not using a dfs.hosts file?

This error may arise if the datanode trying to connect to the namenode is listed in the file defined by dfs.hosts.exclude, or if dfs.hosts is in use and the datanode is not listed in that file. Make sure the datanode is not listed in the excludes file and, if you are using dfs.hosts, add it to the includes file. Restart Hadoop after that and run hadoop dfsadmin -refreshNodes.
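For reference, this is roughly what those properties look like in the NameNode's hdfs-site.xml (the file paths below are assumptions; point them at your actual include/exclude lists):

    <property>
      <!-- Hypothetical path: datanodes allowed to register -->
      <name>dfs.hosts</name>
      <value>/etc/hadoop/conf/dfs.include</value>
    </property>
    <property>
      <!-- Hypothetical path: datanodes barred from registering -->
      <name>dfs.hosts.exclude</name>
      <value>/etc/hadoop/conf/dfs.exclude</value>
    </property>

The include/exclude files are plain lists, one hostname or IP per line. After editing them, tell the namenode to re-read them:

    hadoop dfsadmin -refreshNodes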

HTH

Upvotes: 1
