I have created a Hadoop cluster using two nodes:
h01: main machine - Ubuntu Desktop 15.04
h02: virtual machine running under VMware on the main machine - Ubuntu Server 14.04
The jps command shows the NameNode and SecondaryNameNode on h01 and the DataNode on h02, and the NameNode web UI shows the DataNode, so they are successfully connected.
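For reference, this is roughly what jps reports on each node (the PIDs and prompts below are illustrative):
hadoop@h01:~$ jps
2791 NameNode
3012 SecondaryNameNode
3145 Jps
hadoop@h02:~$ jps
1853 DataNode
1990 Jps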
The problem occurs when I issue the command:
hdfs dfs -copyFromLocal input /
It gives the following error:
16/03/14 14:29:55 INFO hdfs.DFSClient: Exception in createBlockOutputStream
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
at org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1610)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1408)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1361)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:588)
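For reference, reachability of the DataNode's data transfer port (50010 by default in Hadoop 2.x, set via dfs.datanode.address) can be tested directly from h01; a quick check, assuming netcat is installed:
# From h01: does the DataNode port on h02 accept connections?
nc -zv h02 50010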
I am new to Hadoop, and any help would be appreciated. Below are my configuration files:
File: /etc/hosts (machine: h01)
127.0.0.1 localhost
127.0.1.1 hitesh-SVE15136CNB
192.168.93.128 h02
172.16.87.68 h01
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
File: /etc/hosts (machine: h02)
127.0.0.1 localhost
127.0.1.1 ubuntu
172.16.87.68 h01
192.168.93.128 h02
# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
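For reference, name resolution against these files can be sanity-checked with getent (standard on Ubuntu); note that each machine's own hostname resolves to 127.0.1.1 above, a common source of Hadoop binding and registration problems:
# On h01: should print 192.168.93.128
getent hosts h02
# On h02: should print 172.16.87.68
getent hosts h01
# On either machine: shows what the local hostname resolves to
getent hosts $(hostname)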
File: core-site.xml (same on both machines)
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://h01:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/hadoop</value>
    </property>
</configuration>
File: hdfs-site.xml (same on both machines)
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>
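For reference, the values each daemon actually picks up from these files can be verified with hdfs getconf; a quick sanity check to run on both machines:
hdfs getconf -confKey fs.defaultFS      # should print hdfs://h01:9000
hdfs getconf -confKey dfs.replication   # should print 1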
The masters file contains h01 and the slaves file contains h02. I have set up passwordless SSH between the two machines.
EDIT:
I found the problem. In the Datanodes tab of the NameNode web UI, the correct DataNode is listed but with the wrong IP (it shows the IP of the NameNode rather than that of the DataNode). I tried installing the NameNode in a different virtual machine and it works there, but I still can't understand where the configuration above is wrong. Please help.
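For reference, the address each DataNode registered under is also visible from the command line, so the wrong-IP symptom can be confirmed outside the web UI:
# Run on h01: lists every live DataNode with the IP:port it
# registered with the NameNode
hdfs dfsadmin -report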
Answer:
See the URL below; it may be useful:
https://wiki.apache.org/hadoop/ConnectionRefused
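In particular, that page walks through common causes such as nothing listening on the target port or the service being bound to the wrong address. A quick way to check both, assuming standard tools are available:
# On h02: confirm the DataNode is listening, and on which address
# (127.0.0.1 instead of 0.0.0.0 would explain the refusal)
sudo netstat -tlnp | grep java
# From h01: confirm the DataNode's transfer port is reachable
telnet h02 50010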