Pooja N Babu

Reputation: 357

Errors while running hadoop

haduser@user-laptop:/usr/local/hadoop$ bin/hadoop dfs -copyFromLocal /tmp/input /user/haduser/input

11/12/14 14:21:00 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 0 time(s).

11/12/14 14:21:01 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 1 time(s).

11/12/14 14:21:02 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 2 time(s).

11/12/14 14:21:03 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 3 time(s).

11/12/14 14:21:04 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 4 time(s).

11/12/14 14:21:05 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 5 time(s).

11/12/14 14:21:06 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 6 time(s).

11/12/14 14:21:07 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 7 time(s).

11/12/14 14:21:08 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 8 time(s).

11/12/14 14:21:09 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 9 time(s).

Bad connection to FS. command aborted. exception: Call to localhost/127.0.0.1:54310 failed on connection exception: java.net.ConnectException: Connection refused

I am getting the above errors when trying to copy files from /tmp/input to /user/haduser/input, even though /etc/hosts contains an entry for localhost. When I run the jps command, the TaskTracker and the NameNode are not listed.

What could be the problem? Can someone please help me with this?

Upvotes: 7

Views: 21896

Answers (4)

Abhimanyu

Reputation: 31

Try to SSH to your local system using the IP address, in this case:

$ ssh 127.0.0.1

Once you can SSH in successfully, run the command below to list the open ports:

~$ lsof -i

Look for a listening entry of the form localhost:<PORT> (LISTEN).

Copy this <PORT> and use it to replace the existing port number in the fs.default.name property in core-site.xml in your Hadoop conf folder.

Save core-site.xml; this should resolve the issue.
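
For illustration, here is a rough sketch of that check; the paths and the 54310 port below are assumptions from a typical Hadoop 1.x single-node setup, not values read from your machine:

$ lsof -i -P | grep LISTEN
# note the port the java (NameNode) process is listening on, e.g. localhost:54310
$ grep -A 1 'fs.default.name' /usr/local/hadoop/conf/core-site.xml
# the port inside <value>hdfs://localhost:PORT</value> must match the LISTEN port above

If the two ports differ, edit the <value> in core-site.xml (or restart the NameNode on the configured port) so they agree.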

Upvotes: 3

user1482338

Reputation: 91

I had similar issues: Hadoop was binding to IPv6. I added export HADOOP_OPTS=-Djava.net.preferIPv4Stack=true to $HADOOP_HOME/conf/hadoop-env.sh.

Hadoop was binding to IPv6 even though I had disabled IPv6 on my system. Once I added that option to the environment, everything started working fine.
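
For reference, a minimal sketch of that change, assuming a Hadoop 1.x layout with the configuration under $HADOOP_HOME/conf and the control scripts under $HADOOP_HOME/bin:

$ echo 'export HADOOP_OPTS=-Djava.net.preferIPv4Stack=true' >> $HADOOP_HOME/conf/hadoop-env.sh
$ $HADOOP_HOME/bin/stop-all.sh && $HADOOP_HOME/bin/start-all.sh   # restart so the daemons pick up the new JVM flag
$ netstat -plten | grep 54310                                     # the NameNode should now be listening on an IPv4 address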

Hope this helps someone.

Upvotes: 9

Mayuresh Gadge

Reputation: 11

All the files in the bin directory are executables. Just copy the command and paste it into the terminal. Make sure the address is right, i.e. the user placeholder must be replaced by your actual username. That should do the trick.

Upvotes: 1

Praveen Sripati

Reputation: 33495

The NameNode (NN) maintains the namespace for HDFS, and it must be running for filesystem operations on HDFS. Check the logs to see why the NN hasn't started. The TaskTracker is not required for operations on HDFS; the NN and DN are sufficient. Check the http://goo.gl/8ogSk and http://goo.gl/NIWoK tutorials on how to set up Hadoop on a single node and on multiple nodes.
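
As a rough sketch of that check (the log file names and commands below assume a Hadoop 1.x default layout, with logs under $HADOOP_HOME/logs):

$ jps                                                    # NameNode should be listed here once it is up
$ tail -n 50 $HADOOP_HOME/logs/hadoop-*-namenode-*.log   # look for the exception that stopped the NN
$ bin/hadoop namenode -format                            # only if the log shows the name directory is not formatted; this erases HDFS data
$ bin/start-dfs.sh                                       # starts the NameNode and DataNode daemons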

Upvotes: 1
