user1684696

Reputation: 21

Problems in setting hadoop on mac os x 10.8

I have set up Hadoop on my Mac, but when I try to run a job I get an error message like this:

13/02/18 04:05:52 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 0 time(s).
13/02/18 04:05:53 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 1 time(s).
13/02/18 04:05:54 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 2 time(s).
13/02/18 04:05:55 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 3 time(s).
13/02/18 04:05:56 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 4 time(s).
13/02/18 04:05:57 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 5 time(s).
13/02/18 04:05:58 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 6 time(s).
13/02/18 04:05:59 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 7 time(s).
13/02/18 04:06:00 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 8 time(s).
13/02/18 04:06:01 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 9 time(s).
java.lang.RuntimeException: java.net.ConnectException: Call to localhost/127.0.0.1:9000 failed on connection exception: java.net.ConnectException: Connection refused
    at org.apache.hadoop.mapred.JobConf.getWorkingDirectory(JobConf.java:546)
    at org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:318)
    at org.apache.hadoop.examples.PiEstimator.estimate(PiEstimator.java:265)
    at org.apache.hadoop.examples.PiEstimator.run(PiEstimator.java:342)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
    at org.apache.hadoop.examples.PiEstimator.main(PiEstimator.java:351)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
    at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
    at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:9000 failed on connection exception: java.net.ConnectException: Connection refused
    at org.apache.hadoop.ipc.Client.wrapException(Client.java:1099)
    at org.apache.hadoop.ipc.Client.call(Client.java:1075)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
    at $Proxy1.getProtocolVersion(Unknown Source)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
    at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:119)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:238)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:203)
    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1386)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1404)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:123)
    at org.apache.hadoop.mapred.JobConf.getWorkingDirectory(JobConf.java:542)
    ... 17 more
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:489)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:434)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:560)
    at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:184)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1206)
    at org.apache.hadoop.ipc.Client.call(Client.java:1050)
    ... 31 more

Then I entered jps to check the services, and the result is:

20635 Jps
20466 TaskTracker
20189 DataNode
20291 SecondaryNameNode

I don't know how to deal with this error. Could someone give me an answer? Thanks a lot!

Upvotes: 1

Views: 2959

Answers (3)

Ralf Oenning

Reputation: 11

I had this same error: ConnectException: Connection refused in all the secondarynamenode log files.

But I also found this in the namenode's log file:

2015-10-25 16:35:15,720 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /private/tmp/hadoop-admin/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.

I therefore did a

hadoop namenode -format

and the problem went away. So the ConnectException was just a symptom of the namenode not having started successfully.
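In case it helps others check for the same symptom, here is a minimal sketch; the log location is an assumption based on a default layout, so adjust the path to your install:

# Look for startup failures in the NameNode log
# (assumes logs live under $HADOOP_HOME/logs; adjust to your install)
grep -A1 "Failed to start namenode" $HADOOP_HOME/logs/hadoop-*-namenode-*.log

# If the storage directory is missing or inconsistent, reformat HDFS.
# WARNING: this wipes any data already stored in HDFS.
hadoop namenode -format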

Upvotes: 1

Mohd Gaazi Sheikh

Reputation: 11

This might help a bit, but first you have to remove the earlier installation using the command:

rm -rf /usr/local/Cellar /usr/local/.git && brew cleanup

Then you can begin with a fresh installation of Hadoop on your Mac.

Step 1: Install Homebrew

$ ruby -e "$(curl -fsSL https://raw.github.com/mxcl/homebrew/go)"

Step 2: Install Hadoop

$ brew install hadoop

Assume that brew installs Hadoop 1.2.1.

Step 3: Configure Hadoop

$ cd /usr/local/Cellar/hadoop/1.2.1/libexec

Add the following line to conf/hadoop-env.sh:

export HADOOP_OPTS="-Djava.security.krb5.realm= -Djava.security.krb5.kdc="

Add the following lines to conf/core-site.xml inside the configuration tags:

<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>

Add the following lines to conf/hdfs-site.xml inside the configuration tags:

<property>
<name>dfs.replication</name>
<value>1</value>
</property>

Add the following lines to conf/mapred-site.xml inside the configuration tags:

 <property>
 <name>mapred.job.tracker</name>
 <value>localhost:9001</value>
 </property>

Step 4: Enable SSH to localhost

Go to System Preferences > Sharing and make sure “Remote Login” is checked. Then generate an SSH key and authorize it for login to localhost:

$ ssh-keygen -t rsa
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

Step 5: Format Hadoop filesystem

$ bin/hadoop namenode -format

Step 6: Start Hadoop

$ bin/start-all.sh

Make sure that all Hadoop processes are running:

$ jps
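The exact PIDs will differ, but in a working pseudo-distributed setup jps should list all five Hadoop 1.x daemons, for example:

12345 NameNode
12346 DataNode
12347 SecondaryNameNode
12348 JobTracker
12349 TaskTracker

(In the output from the question, NameNode and JobTracker are missing, which is consistent with the connection being refused on port 9000.)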

Run a Hadoop example:
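For example, the bundled Pi estimator (the examples jar name below assumes Hadoop 1.2.1; match it to your installed version):

$ bin/hadoop jar hadoop-examples-1.2.1.jar pi 10 100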

One more thing: “brew update” will update the Hadoop binaries to the most recent version (1.2.1 at present).

Upvotes: 1

Freak

Reputation: 6883

Actually, not all the processes that Hadoop should run are running, and the cause is a misconfigured IP. I am not familiar with Mac OS, but on Linux and Windows you need to put Hadoop's connection entries in the hosts file (/etc/hosts), and I am quite sure the same applies to Mac. Now the point is:

You need to put the Hadoop entry in that file against the actual IP of your machine, not the loopback address. An entry like

127.0.0.1 hadoop-machine

is wrong here, because Hadoop will try to connect to that IP. Remove the 127.0.0.1 and put the actual IP of your machine in front of the hostname instead. You can find the IP of your Mac machine easily. Here are some questions that are not directly related to Hadoop, but I guess they will be helpful for you:

Question 1, Question 2, Question 3
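As a sketch, the corrected entry might look like this (192.168.1.10 is a placeholder; substitute your machine's actual address, e.g. from ifconfig, and hadoop-machine is whatever hostname your Hadoop config refers to):

# /etc/hosts
192.168.1.10   hadoop-machine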

Upvotes: 2
