Reputation: 533
I have two Linux machines: a master machine (192.168.8.174) and a slave machine (192.168.8.173). I have installed and configured Hadoop 2.6.0 in fully distributed mode successfully, and Hadoop itself is working perfectly. I then installed and configured HBase 1.0. When I start HBase, the running processes are:
master machine    slave machine
HMaster           HQuorumPeer
HQuorumPeer       HRegionServer
HRegionServer
But when I create a table (for example: create 'test','cf'), the error below appears in the HBase log file:
2015-03-19 16:46:04,930 INFO [master/master/192.168.8.174:16020-SendThread(192.168.8.173:2181)] zookeeper.ClientCnxn: Opening socket connection to server 192.168.8.173/192.168.8.173:2181. Will not attempt to authenticate using SASL (unknown error)
2015-03-19 16:46:04,952 INFO [master/master/192.168.8.174:16020-SendThread(192.168.8.173:2181)] zookeeper.ClientCnxn: Socket connection established to 192.168.8.173/192.168.8.173:2181, initiating session
2015-03-19 16:46:04,963 INFO [master/master/192.168.8.174:16020-SendThread(192.168.8.173:2181)] zookeeper.ClientCnxn: Session establishment complete on server 192.168.8.173/192.168.8.173:2181, sessionid = 0x14c3135d05c0001, negotiated timeout = 90000
2015-03-19 16:46:04,964 INFO [master/master/192.168.8.174:16020] client.ZooKeeperRegistry: ClusterId read in ZooKeeper is null
2015-03-19 16:46:04,992 FATAL [master:16020.activeMasterManager] master.HMaster: Failed to become active master
java.net.ConnectException: Call From master/192.168.8.174 to master:54310 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730)
at org.apache.hadoop.ipc.Client.call(Client.java:1415)
at org.apache.hadoop.ipc.Client.call(Client.java:1364)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at com.sun.proxy.$Proxy15.setSafeMode(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy15.setSafeMode(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setSafeMode(ClientNamenodeProtocolTranslatorPB.java:602)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279)
at com.sun.proxy.$Proxy16.setSafeMode(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.setSafeMode(DFSClient.java:2264)
at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:986)
at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:970)
at org.apache.hadoop.hbase.util.FSUtils.isInSafeMode(FSUtils.java:447)
at org.apache.hadoop.hbase.util.FSUtils.waitOnSafeMode(FSUtils.java:894)
at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:416)
at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:145)
at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:125)
at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:591)
at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:165)
at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1425)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:606)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:700)
at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:367)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1463)
at org.apache.hadoop.ipc.Client.call(Client.java:1382)
... 29 more
2015-03-19 16:46:05,002 FATAL [master:16020.activeMasterManager] master.HMaster: Unhandled exception. Starting shutdown.
java.net.ConnectException: Call From master/192.168.8.174 to master:54310 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730)
at org.apache.hadoop.ipc.Client.call(Client.java:1415)
at org.apache.hadoop.ipc.Client.call(Client.java:1364)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at com.sun.proxy.$Proxy15.setSafeMode(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy15.setSafeMode(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setSafeMode(ClientNamenodeProtocolTranslatorPB.java:602)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279)
at com.sun.proxy.$Proxy16.setSafeMode(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.setSafeMode(DFSClient.java:2264)
at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:986)
at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:970)
at org.apache.hadoop.hbase.util.FSUtils.isInSafeMode(FSUtils.java:447)
at org.apache.hadoop.hbase.util.FSUtils.waitOnSafeMode(FSUtils.java:894)
at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:416)
at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:145)
at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:125)
at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:591)
at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:165)
at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1425)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:606)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:700)
at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:367)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1463)
at org.apache.hadoop.ipc.Client.call(Client.java:1382)
... 29 more
2015-03-19 16:46:05,002 INFO [master:16020.activeMasterManager] regionserver.HRegionServer: STOPPED: Unhandled exception. Starting shutdown.
2015-03-19 16:46:08,046 INFO [master/master/192.168.8.174:16020] ipc.RpcServer: Stopping server on 16020
2015-03-19 16:46:08,046 INFO [RpcServer.listener,port=16020] ipc.RpcServer: RpcServer.listener,port=16020: stopping
2015-03-19 16:46:08,047 INFO [RpcServer.responder] ipc.RpcServer: RpcServer.responder: stopped
2015-03-19 16:46:08,047 INFO [RpcServer.responder] ipc.RpcServer: RpcServer.responder: stopping
2015-03-19 16:46:08,049 INFO [master/master/192.168.8.174:16020] regionserver.HRegionServer: Stopping infoServer
2015-03-19 16:46:08,089 INFO [master/master/192.168.8.174:16020] mortbay.log: Stopped SelectChannelConnector@0.0.0.0:16030
2015-03-19 16:46:08,191 INFO [master/master/192.168.8.174:16020] regionserver.HRegionServer: stopping server master,16020,1426754759593
2015-03-19 16:46:08,191 INFO [master/master/192.168.8.174:16020] client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x14c3135d05c0001
2015-03-19 16:46:08,241 INFO [master/master/192.168.8.174:16020-EventThread] zookeeper.ClientCnxn: EventThread shut down
2015-03-19 16:46:08,242 INFO [master/master/192.168.8.174:16020] zookeeper.ZooKeeper: Session: 0x14c3135d05c0001 closed
2015-03-19 16:46:08,244 INFO [master/master/192.168.8.174:16020] regionserver.HRegionServer: stopping server master,16020,1426754759593; all regions closed.
So I can't understand what the problem is. My configuration files are:
master machine - hbase-site.xml
<configuration>
<property>
<name>hbase.rootdir</name>
<value>hdfs://192.168.8.174:54310/hbase</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>hdfs://192.168.8.174:9002/zookeeper</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>192.168.8.174,192.168.8.173</value>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2181</value>
</property>
</configuration>
slave machine - hbase-site.xml
<configuration>
<property>
<name>hbase.rootdir</name>
<value>hdfs://192.168.8.174:54310/hbase</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
</configuration>
and I have set HBASE_MANAGES_ZK to true in hbase-env.sh.
Upvotes: 2
Views: 21002
Reputation: 828
I got ERROR: Can't get master address from ZooKeeper; znode data == null once before. In my case it was the configuration of the zookeeper.znode.parent value. The value on the server was /hbase, but I could only connect when the client had it set to /hbase-unsecure. I had to edit that value in the server's hbase-site.xml for the client to connect to it.
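For reference, a minimal sketch of how that property can be set in hbase-site.xml (the /hbase-unsecure value is the one from my environment; yours may differ):

<property>
  <name>zookeeper.znode.parent</name>
  <value>/hbase-unsecure</value>
</property>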
Upvotes: 4
Reputation: 9770
From the log message, it looks like you may have an issue with name resolution.
I would make sure that your IP addresses properly resolve in both the forward and reverse direction to the same hostname; this is a common problem with HBase. In particular, I would check your /etc/hosts file and make sure that the name master is not associated with the IP address 192.168.8.174. If it is, then you'll need to use the proper name in your configuration instead of IP addresses. Also, make sure the name mappings are the same on all machines in your cluster. There are tools that can do this check for you, for example:
https://github.com/sujee/hadoop-dns-checker
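As a quick manual check (a sketch; the hostname master and the NameNode port 54310 are taken from your logs), run these on both machines and compare the results:

# Forward lookup: which IP does the name resolve to?
getent hosts master

# Reverse lookup: which name does the IP map back to?
getent hosts 192.168.8.174

# The connection-refused error also suggests nothing is listening on the
# NameNode port; confirm on the master machine:
netstat -tlnp | grep 54310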
UPDATE: It looks like you may also have a bad setting for hbase.zookeeper.property.dataDir. You currently have it pointed at an HDFS URL, but I believe this is supposed to be a local directory path, for example:
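A minimal sketch of what that property might look like (the directory below is an assumption; any local path writable by the HBase user should do):

<property>
  <name>hbase.zookeeper.property.dataDir</name>
  <value>/home/hadoop/zookeeper</value>
</property>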
I would also confirm that you can talk to ZooKeeper at all from the command line, using hbase zkcli.
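For example (a sketch; hbase zkcli drops you into the standard ZooKeeper shell), listing the top-level znodes should show the HBase parent znode:

hbase zkcli
ls /

If ZooKeeper is reachable, /hbase (or whatever zookeeper.znode.parent is set to) should appear in the output.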
Upvotes: 0