Reputation: 11
I'm trying to configure Hadoop in fully distributed mode with one master and one slave on different nodes. I have attached a screenshot showing the status of my master and slave nodes.
In Master: ubuntu@hadoop-master:/usr/local/hadoop/etc/hadoop$ $HADOOP_HOME/bin/hdfs dfsadmin -refreshNodes
refreshNodes: Failed on local exception: com.google.protobuf.InvalidProtocolBufferException: Protocol message tag had invalid wire type.; Host Details : local host is: "hadoop-master/127.0.0.1"; destination host is: "hadoop-master":8020;
This is the error I'm getting when I try to run the refresh-nodes command. Can anyone tell me what I'm missing or what mistake I have made?
2016-04-26 03:29:17,090 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services started for standby state
2016-04-26 03:29:17,095 INFO org.mortbay.log: Stopped [email protected]:50070
2016-04-26 03:29:17,095 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2016-04-26 03:29:17,095 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2016-04-26 03:29:17,096 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2016-04-26 03:29:17,097 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
java.net.BindException: Problem binding to [hadoop-master:8020] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721)
    at org.apache.hadoop.ipc.Server.bind(Server.java:425)
    at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:574)
    at org.apache.hadoop.ipc.Server.<init>(Server.java:2215)
    at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:938)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:534)
    at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:509)
    at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:783)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:344)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:673)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:646)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:811)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:795)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1488)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)
Caused by: java.net.BindException: Address already in use
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:463)
    at sun.nio.ch.Net.bind(Net.java:455)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
    at org.apache.hadoop.ipc.Server.bind(Server.java:408)
    ... 13 more
2016-04-26 03:29:17,103 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2016-04-26 03:29:17,109 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop-master/127.0.0.1
************************************************************/
ubuntu@hadoop-master:/usr/local/hadoop$
Upvotes: 1
Views: 2859
Reputation: 20830
Your error log clearly says that the NameNode is not able to bind to the default port.
java.net.BindException: Problem binding to [hadoop-master:8020] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException
You need to change the default port to one that is free.
The list of default ports is given in hdfs-default.xml and here.
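Before switching ports, it can help to check what is already bound to 8020 (often a NameNode that is still running). A minimal check, assuming jps and netstat are available on the master:

# List running Hadoop daemons; a NameNode entry here means 8020 is probably held by it
jps

# Show which process is listening on port 8020
sudo netstat -tlnp | grep 8020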
Upvotes: 0
Reputation: 3374
Check your namenode address in core-site.xml. Change it to 50070 or 9000 and try again.
The default address of the namenode web UI is http://localhost:50070/. You can open this address in your browser and check the namenode information.
The default address of the namenode server is hdfs://localhost:8020/. You can connect to it to access HDFS through the HDFS API. This is the real service address.
http://blog.cloudera.com/blog/2009/08/hadoop-default-ports-quick-reference/
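For reference, a minimal core-site.xml sketch that moves the namenode service address to port 9000 (assuming the hostname hadoop-master from the question; the same file would need to be copied to the slave and the daemons restarted):

<!-- core-site.xml: set the NameNode service address explicitly, on port 9000 instead of the default 8020 -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop-master:9000</value>
  </property>
</configuration>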
Upvotes: 0
Reputation: 202
DFS needs to be formatted. Just issue the following command:
hadoop namenode -format
Or
hdfs namenode -format
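A minimal sequence, assuming $HADOOP_HOME is set as in the question (note that formatting wipes the existing HDFS metadata):

# Stop HDFS first so the NameNode port 8020 is released
$HADOOP_HOME/sbin/stop-dfs.sh

# Format the NameNode (this erases existing HDFS metadata)
$HADOOP_HOME/bin/hdfs namenode -format

# Start HDFS again on master and slave
$HADOOP_HOME/sbin/start-dfs.sh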
Upvotes: 0