x Lu

Reputation: 41

hadoop fs -mkdir failed on connection exception

I have been trying to set up and run Hadoop in pseudo-distributed mode, but when I type

bin/hadoop fs -mkdir input

I get

mkdir: Call From h1/192.168.1.13 to h1:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused

Here are the details:

core-site.xml

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/grid/tmp</value>
  </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://h1:9000</value>
    </property>
</configuration>
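For reference, the filesystem URI that client commands such as hadoop fs will use can be printed with hdfs getconf (a quick sanity check, assuming this core-site.xml is the one Hadoop actually loads):

bin/hdfs getconf -confKey fs.defaultFS
# with the core-site.xml above, this should print: hdfs://h1:9000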

mapred-site.xml

<configuration>
    <property>
        <name>mapred.job.tracker</name>
        <value>h1:9001</value>
    </property>

  <property>
    <name>mapred.map.tasks</name>
    <value>20</value>
  </property>
  <property>
    <name>mapred.reduce.tasks</name>
    <value>4</value>
  </property>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobtracker.http.address</name>
    <value>h1:50030</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>h1:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>h1:19888</value>
  </property>

</configuration>

hdfs-site.xml

<configuration>

  <property>
    <name>dfs.http.address</name>
    <value>h1:50070</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address</name>
    <value>h1:9001</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>h1:50090</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/home/grid/data</value>
  </property>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
</configuration>
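The RPC address the NameNode will bind can be cross-checked the same way (a sketch; note that dfs.namenode.rpc-address above uses port 9001, while fs.defaultFS in core-site.xml uses port 9000):

bin/hdfs getconf -nnRpcAddresses
# with the hdfs-site.xml above, this should print h1:9001, not h1:9000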

/etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.13 h1
192.168.1.14 h2
192.168.1.15 h3

After hadoop namenode -format and start-all.sh, jps shows:

1702 ResourceManager
1374 DataNode
1802 NodeManager
2331 Jps
1276 NameNode
1558 SecondaryNameNode

the problem occurs

[grid@h1 hadoop-2.6.0]$ bin/hadoop fs -mkdir input
15/05/13 16:37:57 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
mkdir: Call From h1/192.168.1.13 to h1:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused

Where is the problem?

hadoop-grid-datanode-h1.log

2015-05-12 11:26:20,329 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = h1/192.168.1.13
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 2.6.0

hadoop-grid-namenode-h1.log

2015-05-08 16:06:32,561 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = h1/192.168.1.13
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 2.6.0

Why is port 9000 not listening?

[grid@h1 ~]$ netstat -tnl |grep 9000
[grid@h1 ~]$ netstat -tnl |grep 9001
tcp        0      0 192.168.1.13:9001           0.0.0.0:*                   LISTEN     
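A direct connectivity check against both ports gives the same picture (a sketch, assuming nc is installed on the host):

nc -zv h1 9000   # refused, consistent with the mkdir error above
nc -zv h1 9001   # should succeed, matching the netstat output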

Upvotes: 2

Views: 5962

Answers (4)

Matthew C

Reputation: 716

This command worked for me:

hadoop namenode -format
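For context, a minimal sketch of the usual sequence around reformatting (it wipes all HDFS metadata, so only do this on a disposable cluster):

stop-dfs.sh               # stop the HDFS daemons first
hdfs namenode -format     # non-deprecated form of "hadoop namenode -format"
start-dfs.sh              # start the NameNode and DataNodes again
jps                       # verify NameNode, DataNode and SecondaryNameNode are running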

Upvotes: 0

KayV

Reputation: 13835

The following procedure resolved the issue for me (a consolidated command sketch follows the steps):

  1. Stop all the services.

  2. Delete namenode and datanode directories as specified in hdfs-site.xml.

  3. Create new namenode and datanode directories and modify hdfs-site.xml accordingly.

  4. In core-site.xml, make the following changes or add the following properties:

    <property>
      <name>fs.defaultFS</name>
      <value>hdfs://172.20.12.168/</value>
    </property>
    <property>
      <name>fs.default.name</name>
      <value>hdfs://172.20.12.168:8020</value>
    </property>

  5. Make the following change in the hadoop-2.6.4/etc/hadoop/hadoop-env.sh file:

    export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_91.jdk/Contents/Home

  6. Restart DFS, YARN, and the MapReduce job history server as follows:

    start-dfs.sh
    start-yarn.sh
    mr-jobhistory-daemon.sh start historyserver
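A consolidated sketch of the steps above as shell commands. The directory paths are placeholders for whatever hdfs-site.xml points at, and the format step is not listed above but is usually required after deleting the namenode directory:

    # 1. stop all services
    stop-dfs.sh
    stop-yarn.sh
    # 2-3. delete the old namenode/datanode directories and create fresh ones
    #      (placeholder paths; use the directories named in hdfs-site.xml)
    rm -rf /path/to/namenode /path/to/datanode
    mkdir -p /path/to/namenode /path/to/datanode
    # 4-5. edit core-site.xml and hadoop-env.sh as described above
    # format the NameNode so it can use the empty directory, then restart everything (step 6)
    hdfs namenode -format
    start-dfs.sh
    start-yarn.sh
    mr-jobhistory-daemon.sh start historyserver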

Upvotes: 0

Rahul Vishwakarma

Reputation: 31

Please start DFS and YARN:

[hadoop@hadooplab sbin]$ ./start-dfs.sh

[hadoop@hadooplab sbin]$ ./start-yarn.sh

Now try using "bin/hadoop fs -mkdir input"

The issue usually appears when you install Hadoop in a VM and then shut the VM down. When the VM shuts down, DFS and YARN stop as well, so you need to start them each time you restart the VM.
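To confirm the daemons actually came up after the VM restart, a quick check before retrying the command (a sketch):

jps
# NameNode, DataNode, SecondaryNameNode, ResourceManager and NodeManager should all appear;
# if NameNode is missing, check its log under $HADOOP_HOME/logs before running bin/hadoop fs -mkdir input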

Upvotes: 2

Raghuveer

Reputation: 3057

First, try the command:

bin/hadoop dfs -mkdir input

If you have followed Michael Noll's post properly then you should not have any issue. I suspect that passwordless SSH is not working in your configuration; recheck it.
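A minimal sketch for setting up and verifying passwordless SSH for the hadoop user (key paths assume the defaults):

ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa        # generate a key pair with an empty passphrase
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
ssh localhost                                   # should log in without asking for a password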

Upvotes: 0
