Reputation: 1
I received the error below from ZooKeeper when I log in to my cluster environment. I am using the default ZooKeeper that comes bundled with HBase.
HBase is able to connect to ZooKeeper but the connection closes immediately.
This could be a sign that the server has too many connections (30 is the default)
Consider inspecting your ZK server logs for that error and then make sure you
are reusing HBase Configuration as often as you can. See HTable's javadoc for
more information.
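For reference, the "reusing HBase Configuration" advice in that message refers to creating one Configuration object and sharing it across tables rather than building a fresh one per HTable, since each separate Configuration can end up with its own ZooKeeper connection. A minimal sketch of that pattern, assuming the older HTable client API that the message's javadoc pointer implies (the table names here are just placeholders):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;

public class SharedConfExample {
    public static void main(String[] args) throws Exception {
        // Create the configuration once and reuse it; HTable instances that
        // share a Configuration also share the underlying cluster/ZooKeeper
        // connection instead of opening a new one each time.
        Configuration conf = HBaseConfiguration.create();

        HTable users  = new HTable(conf, "users");   // hypothetical table name
        HTable events = new HTable(conf, "events");  // hypothetical table name

        // ... do work with the tables ...

        users.close();
        events.close();
    }
}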
Upvotes: 0
Views: 1726
Reputation: 1397
You can also take a look at the hbase.regionserver.handler.count setting:
http://hbase.apache.org/configuration.html#recommended_configurations
Upvotes: 0
Reputation: 34184
This looks like a file handle issue to me. HBase uses a lot of files, all at the same time. The default ulimit -n (the per-user open file limit) of 1024 on most *nix systems is insufficient. Increasing the maximum number of file handles to a higher value, say 10,000 or more, might help. Please note that raising the file handle limit for the user who runs the HBase process is an operating system setting, not an HBase configuration.
If you are on Ubuntu, you will need to make the following changes:
In the file /etc/security/limits.conf, add the following line:
hadoop - nofile 32768
Replace hadoop with whatever user is running Hadoop and HBase. If they run as separate users, you will need two entries, one for each user. In the same file, set the nproc soft and hard limits, for example:
hadoop soft nproc 32000
hadoop hard nproc 32000
In the file /etc/pam.d/common-session, add the following as the last line:
session required pam_limits.so
Otherwise the changes in /etc/security/limits.conf won't be applied.
Don't forget to log out and back in again for the changes to take effect.
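If you want to double-check what limit the running JVM actually sees after these changes, you can query it from Java itself. This is a small sketch, assuming a HotSpot/OpenJDK JVM on a *nix system (the com.sun.management bean is not part of the standard API on every JVM):

import java.lang.management.ManagementFactory;
import com.sun.management.UnixOperatingSystemMXBean;

public class FdLimitCheck {
    public static void main(String[] args) {
        // On HotSpot/OpenJDK on *nix the operating-system MXBean exposes
        // file-descriptor counters for the current process.
        UnixOperatingSystemMXBean os =
                (UnixOperatingSystemMXBean) ManagementFactory.getOperatingSystemMXBean();
        System.out.println("Max file descriptors:  " + os.getMaxFileDescriptorCount());
        System.out.println("Open file descriptors: " + os.getOpenFileDescriptorCount());
    }
}

Run it as the same user that runs HBase; if it still reports 1024, the limits.conf/PAM change has not taken effect for that session.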
Reference: http://hbase.apache.org/book.html#basic.prerequisites
HTH
Upvotes: 1
Reputation: 6169
There can be many reasons.
Upvotes: 0