Reputation: 619
I tried to write a file to my local HDFS setup using a Java program. I am using the Hadoop 2.3.0 distribution with the hadoop-client 2.3.0 and hadoop-hdfs 2.3.0 libraries.
In the HDFS log it shows the following error:
2014-04-07 18:40:44,479 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: prabhathp:50010:DataXceiver error processing unknown operation src: /127.0.0.1:38572 dest: /127.0.0.1:50010
java.io.IOException: Version Mismatch (Expected: 28, Received: 26738 )
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:54)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:198)
at java.lang.Thread.run(Thread.java:744)
Can somebody explain this?
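For reference, the write itself is along these lines, a minimal sketch assuming a single-node setup with the hadoop-client 2.3.0 jars on the classpath; the NameNode URI `hdfs://localhost:9000` and the path `/tmp/test.txt` are placeholders, not my actual values:

```java
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsWriteExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Placeholder NameNode URI; use the fs.defaultFS of your own setup.
        FileSystem fs = FileSystem.get(URI.create("hdfs://localhost:9000"), conf);
        // Create (or overwrite) a file and write a few bytes to it.
        try (FSDataOutputStream out = fs.create(new Path("/tmp/test.txt"))) {
            out.writeBytes("hello hdfs\n");
        }
        fs.close();
    }
}
```

The error in the DataNode log appears when this program runs.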
Upvotes: 5
Views: 4617
Reputation: 5782
If the error Version Mismatch (Expected: 28, Received: 26738) is seen intermittently with a very high Received version, the cause can be that an application that does not speak the Hadoop data transfer protocol has connected to the datanode port. We see this error, for instance, when somebody accesses the datanode port with a web browser (while intending to access the web interface). A misconfiguration can have similar effects.
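To illustrate why this produces such odd numbers: the DataNode reads the first two bytes of every incoming connection as a big-endian short and treats that as the protocol version (28 in Hadoop 2.3.0). If an HTTP client connects, those bytes are "GE" (from "GET"), which decodes to a nonsense version. A small sketch of the decoding (the "GE" input is an illustrative assumption; whatever client produced 26738 in the question sent different bytes):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class VersionMismatchDemo {
    public static void main(String[] args) {
        // First two bytes of an HTTP request line: "GE" from "GET / HTTP/1.1".
        byte[] firstTwoBytes = "GE".getBytes(StandardCharsets.US_ASCII);
        // Interpreted as a big-endian short, the way the DataNode reads the
        // protocol version off the wire.
        short bogusVersion = ByteBuffer.wrap(firstTwoBytes).getShort();
        System.out.println("Expected: 28, Received: " + bogusVersion);
    }
}
```

This prints `Expected: 28, Received: 18245`, the same shape of mismatch as in the question.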
Upvotes: 2
Reputation: 22989
The problem (for me) was an incorrect configuration of the properties dfs.namenode.name.dir and dfs.datanode.data.dir in the hdfs-site.xml file; they have to be URIs, not just plain paths.
<property>
  <name>dfs.namenode.name.dir</name>
  <value>/dfs/nn</value> <!-- wrong, change to 'file:///dfs/nn' -->
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/dfs/dn</value> <!-- wrong, change to 'file:///dfs/dn' -->
</property>
Upvotes: 1
Reputation: 492
java.io.IOException: Version Mismatch (Expected: 28, Received: 26738 )
A version mismatch error indicates that you are using the wrong Hadoop jars. Ensure that the data.dir and name.dir directories have the correct VERSION file and that you are using the correct Hadoop version. Run hadoop version to confirm.
Upvotes: 0