Joe King

Reputation: 3011

java.net.ConnectException after changing hostname

I had set up hadoop in standalone mode, with the default hostname "raspberrypi".

Things seemed to be working.

I then changed the hostname to hnode1 by doing:

echo "hnode1" | sudo tee /etc/hostname

and in /etc/hosts I changed

127.0.0.1 raspberrypi

to

127.0.0.1 hnode1
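After editing those files, a quick sanity check (a sketch assuming a typical Linux setup; `hnode1` is the name chosen above) confirms the new name is active and resolves locally:

```shell
# Print the kernel hostname; may require a reboot or
# `sudo hostname hnode1` before it shows the new name
hostname

# Confirm the name resolves via /etc/hosts
getent hosts hnode1
```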

The only other change I made was in core-site.xml:

  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>

was changed to

  <property>
    <name>fs.default.name</name>
    <value>hdfs://hnode1:9000</value>
  </property>

However, after restarting the services, attempting to copy a file from the local file system to HDFS fails with this error:

Call From hnode1/127.0.1.1 to hnode1:9000 failed on connection exception: java.net.ConnectException: Connection refused; 

I have also tried rebooting, and I have verified that I can ssh to hnode1.
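"Connection refused" usually means nothing is listening on hnode1:9000 at all. Before digging into DNS, it may help to check whether the NameNode is running and which address it is bound to (a diagnostic sketch; port 9000 comes from the core-site.xml above):

```shell
# Is the NameNode JVM running at all?
jps | grep -i namenode

# Which address is port 9000 bound to? If it shows 127.0.0.1:9000 or
# 127.0.1.1:9000, the NameNode is only reachable through loopback.
ss -ltn | grep ':9000'
```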

Upvotes: 0

Views: 138

Answers (1)

OneCricketeer

Reputation: 191874

Your hosts file should look like this:

127.0.0.1 localhost

Remove lines with 127.0.1.1 and any hard-coded references to the hostname.

Your DNS server should know how to resolve hnode1; the Pi should not point the name back at itself, because HDFS clients will then be looped back to the Pi's loopback interface when communicating with the Namenode.
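If you have no DNS server on the network, a common workaround for a single-node setup is to map the hostname to the Pi's real LAN address rather than loopback (192.168.1.10 below is a made-up example; substitute the Pi's actual IP):

```
127.0.0.1    localhost
192.168.1.10 hnode1
```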

Your SSH connection proves DNS seems to be working.

Also, rename the deprecated property fs.default.name to its replacement, fs.defaultFS.
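With that rename applied, the core-site.xml entry from the question becomes:

```xml
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hnode1:9000</value>
  </property>
```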

Upvotes: 1
