Reputation: 3069
I want to set up a pseudo-distributed Hadoop system on my Ubuntu machine, but I cannot start the namenode (the other daemons, such as the jobtracker, start normally). My start commands are:
./hadoop namenode -format
./start-all.sh
I checked the namenode log, located at logs/hadoop-mongodb-namenode-mongodb.log:
2013-12-25 13:44:39,796 INFO org.apache.hadoop.util.HostsFileReader: Refreshing hosts (include/exclude) list
2013-12-25 13:44:39,796 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue QueueProcessingStatistics: First cycle completed 0 blocks in 0 msec
2013-12-25 13:44:39,797 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue QueueProcessingStatistics: Queue flush completed 0 blocks in 0 msec processing time, 0 msec clock time, 1 cycles
2013-12-25 13:44:39,797 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue QueueProcessingStatistics: First cycle completed 0 blocks in 0 msec
2013-12-25 13:44:39,797 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue QueueProcessingStatistics: Queue flush completed 0 blocks in 0 msec processing time, 0 msec clock time, 1 cycles
2013-12-25 13:44:39,799 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source FSNamesystemMetrics registered.
2013-12-25 13:44:39,809 INFO org.apache.hadoop.ipc.Server: Starting SocketReader
2013-12-25 13:44:39,810 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcDetailedActivityForPort9000 registered.
2013-12-25 13:44:39,810 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcActivityForPort9000 registered.
2013-12-25 13:44:39,812 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at: localhost/127.0.0.1:9000
2013-12-25 13:44:39,847 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2013-12-25 13:44:39,878 INFO org.apache.hadoop.http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
2013-12-25 13:44:39,884 INFO org.apache.hadoop.http.HttpServer: dfs.webhdfs.enabled = false
2013-12-25 13:44:39,888 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50070
2013-12-25 13:44:39,889 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:mongodb cause:java.net.BindException: Address already in use
2013-12-25 13:44:39,889 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicationMonitor thread received InterruptedExceptionjava.lang.InterruptedException: sleep interrupted
2013-12-25 13:44:39,890 INFO org.apache.hadoop.hdfs.server.namenode.DecommissionManager: Interrupted Monitor
java.lang.InterruptedException: sleep interrupted
    at java.lang.Thread.sleep(Native Method)
    at org.apache.hadoop.hdfs.server.namenode.DecommissionManager$Monitor.run(DecommissionManager.java:65)
    at java.lang.Thread.run(Thread.java:701)
2013-12-25 13:44:39,890 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 0 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
2013-12-25 13:44:39,905 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: closing edit log: position=4, editlog=/var/hadoop/hadoop-1.2.1/dfs.name.dir/current/edits
2013-12-25 13:44:39,905 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: close success: truncate to 4, editlog=/var/hadoop/hadoop-1.2.1/dfs.name.dir/current/edits
2013-12-25 13:44:39,909 INFO org.apache.hadoop.ipc.Server: Stopping server on 9000
2013-12-25 13:44:39,909 INFO org.apache.hadoop.ipc.metrics.RpcInstrumentation: shut down
2013-12-25 13:44:39,909 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.net.BindException: Address already in use
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:174)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:139)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:77)
    at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
    at org.apache.hadoop.http.HttpServer.start(HttpServer.java:602)
    at org.apache.hadoop.hdfs.server.namenode.NameNode$1.run(NameNode.java:517)
    at org.apache.hadoop.hdfs.server.namenode.NameNode$1.run(NameNode.java:395)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:416)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:395)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:337)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)

2013-12-25 13:44:39,910 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at mongodb/192.168.10.2
************************************************************/
This is the error message. It seems obvious that the port is the problem! And below are my conf files. core-site.xml:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
hdfs-site.xml:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>

  <property>
    <name>dfs.name.dir</name>
    <value>/var/hadoop/hadoop-1.2.1/dfs.name.dir</value>
  </property>

  <property>
    <name>dfs.data.dir</name>
    <value>/var/hadoop/hadoop-1.2.1/dfs.data.dir</value>
  </property>
</configuration>
No matter which other port I change it to before restarting Hadoop, the error stays exactly the same! Can anyone help me?
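For reference, a minimal diagnostic sketch for seeing which process is already holding the port, assuming the net-tools and lsof packages are installed (50070 is the web UI port from the log above):
sudo netstat -tlnp | grep 50070   # prints the PID/program of any listener on 50070
sudo lsof -i :50070               # equivalent check using lsof
If a leftover daemon from a previous run still owns the port, stopping Hadoop with ./stop-all.sh before starting it again should clear the BindException.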
Upvotes: 4
Views: 5461
Reputation: 139
The datanode on one of the slave machines in my cluster was throwing a similar exception with port binding:
ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.net.BindException: Address already in use
I noticed that the datanode's default web interface port, i.e. 50075, was already bound to another application:
[ap2]-> netstat -an | grep -i 50075
tcp 0 0 10.0.1.1:45674 10.0.1.1:50075 ESTABLISHED
tcp 0 0 10.0.1.1:50075 10.0.1.1:45674 ESTABLISHED
[ap2]->
I changed the datanode web interface address in conf/hdfs-site.xml:
<property>
<name>dfs.datanode.http.address</name>
<value>10.0.1.1:50080</value>
<description>Datanode http port</description>
</property>
This helped resolve the issue. Similarly, you can change the default address and port on which the namenode web interface listens by setting dfs.http.address in conf/hdfs-site.xml, e.g. to localhost:9090, but ensure that port is available.
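A minimal sketch of that override (the localhost:9090 value is only an example; any free address/port will do):
<property>
  <name>dfs.http.address</name>
  <value>localhost:9090</value>
  <description>NameNode web UI address and port (example value)</description>
</property>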
Upvotes: 0
Reputation: 3324
Try removing the HDFS data directory, and instead of formatting the namenode before starting HDFS, start HDFS first and check the jps output. If everything is OK, then try formatting the namenode and check again. If there is still a problem, post the log details.
P.S.: Do not kill the processes. Just use stop-all.sh (or whatever script you normally use) to stop Hadoop.
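A sketch of that sequence, assuming the bin/ working directory from the question and the dfs.data.dir path from its hdfs-site.xml (adjust both for your setup):
./stop-all.sh                                    # stop Hadoop cleanly rather than killing processes
rm -rf /var/hadoop/hadoop-1.2.1/dfs.data.dir/*   # remove the HDFS data directory contents
./start-dfs.sh                                   # start HDFS first, without formatting
jps                                              # ships with the JDK; check which daemons are running
./stop-all.sh                                    # stop again before formatting
./hadoop namenode -format                        # if everything looked OK, format and retry
./start-all.sh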
Upvotes: 2