BraginiNI

Reputation: 586

java.io.IOException: Failed to add a datanode. HDFS (Hadoop)

I am facing an error while appending to a file on HDFS (Cloudera 2.0.0-cdh4.2.0). The append fails with the following exception:

Exception in thread "main" java.io.IOException: Failed to add a datanode.
User may turn off this feature by setting dfs.client.block.write.replace-datanode-on-failure.policy in configuration, where the current policy is DEFAULT. (Nodes: current=[host1:50010, host2:50010], original=[host1:50010, host2:50010])
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:792)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:852)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:958)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:469)
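
The append itself is a plain client-side append; a minimal sketch of what the code does (the file path is a placeholder, not my actual code):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AppendExample {
    public static void main(String[] args) throws Exception {
        // Picks up core-site.xml / hdfs-site.xml from the classpath
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Append to an existing file; the exception above is thrown by the
        // DataStreamer while it rebuilds the write pipeline.
        Path file = new Path("/tmp/append-test.txt"); // placeholder path
        FSDataOutputStream out = fs.append(file);
        try {
            out.writeBytes("one more line\n");
        } finally {
            out.close();
        }
    }
}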

Some related HDFS configs:

dfs.replication set to 2

dfs.client.block.write.replace-datanode-on-failure.enable set to true

dfs.client.block.write.replace-datanode-on-failure.policy set to DEFAULT
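
These are client-side settings, so they can also be overridden on the Configuration used by the writing process. A minimal sketch (setting the policy to NEVER is only the workaround hinted at by the exception message, not necessarily a proper fix):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class ClientConfExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Keep the feature enabled, but never try to replace a failed
        // datanode in the pipeline (NEVER can hide under-replication, use with care).
        conf.setBoolean("dfs.client.block.write.replace-datanode-on-failure.enable", true);
        conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");
        FileSystem fs = FileSystem.get(conf);
        System.out.println("Connected to " + fs.getUri());
    }
}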

Any ideas? Thanks!

Upvotes: 0

Views: 3393

Answers (1)

BraginiNI

Reputation: 586

The problem was solved by running the following command on the file system:

hadoop dfs -setrep -R -w 2 /

Old files on the file system had a replication factor of 3. Setting dfs.replication to 2 in hdfs-site.xml does not solve the problem, because that setting only applies to newly created files and is not applied to files that already exist.

So, if you remove machines from a cluster, you had better check the replication factor of the existing files against the number of remaining datanodes.
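
The same check can also be done programmatically with the FileSystem API; a rough sketch (the root path and target factor are just the values from my case):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocatedFileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;

public class FixReplication {
    public static void main(String[] args) throws Exception {
        short target = 2;  // desired replication factor
        FileSystem fs = FileSystem.get(new Configuration());

        // Walk all files under / and lower the replication factor of the
        // ones that still carry the old value (e.g. 3).
        RemoteIterator<LocatedFileStatus> it = fs.listFiles(new Path("/"), true);
        while (it.hasNext()) {
            LocatedFileStatus status = it.next();
            if (status.getReplication() > target) {
                System.out.println("Fixing " + status.getPath()
                        + " (was " + status.getReplication() + ")");
                fs.setReplication(status.getPath(), target);
            }
        }
    }
}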

Upvotes: 1
