Reputation: 37526
I haven't had much success figuring out what this error message means. I'm also very new to HDFS and HBase, so that's part of the problem. Aside from the possibility that the HDFS server is running out of space, what could be causing this error:
2014-06-13 12:55:33,164 WARN org.apache.hadoop.hbase.regionserver.wal.HLogSplitter:
Could not open hdfs://<OURSERVER>:8020/hbase/.logs/<HBASE_BOX>,60020,1402678303659-splitting/<HBASE_BOX>m%2C60020%2C1402678303659.1402678319050 for reading. File is empty
java.io.EOFException
at java.io.DataInputStream.readFully(Unknown Source)
at java.io.DataInputStream.readFully(Unknown Source)
at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1800)
at org.apache.hadoop.io.SequenceFile$Reader.initialize(SequenceFile.java:1765)
at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1714)
at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1728)
at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.<init>(SequenceFileLogReader.java:55)
at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.init(SequenceFileLogReader.java:178)
at org.apache.hadoop.hbase.regionserver.wal.HLog.getReader(HLog.java:745)
at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:855)
at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:768)
at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:412)
at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:380)
at org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:115)
at org.apache.hadoop.hbase.regionserver.SplitLogWorker.grabTask(SplitLogWorker.java:283)
at org.apache.hadoop.hbase.regionserver.SplitLogWorker.taskLoop(SplitLogWorker.java:214)
at org.apache.hadoop.hbase.regionserver.SplitLogWorker.run(SplitLogWorker.java:182)
at java.lang.Thread.run(Unknown Source)
Upvotes: 0
Views: 288
Reputation: 37526
The issue was a lack of disk space on that particular HDFS node. With no space left, the WAL file was apparently created but never written, so HLogSplitter hit the EOFException above when it tried to read the empty file's header.
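For anyone hitting the same thing, a quick sketch of how to confirm it (assuming a standard install with the hdfs command on the PATH; the data path below is just an example, yours is whatever dfs.datanode.data.dir points at):

# Per-datanode capacity, DFS Used and DFS Remaining, as the NameNode sees it
hdfs dfsadmin -report

# On the suspect node itself, check the local volume backing the datanode
# (example path; substitute your dfs.datanode.data.dir)
df -h /data/dfs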
Upvotes: 0
Reputation: 25929
You can check the state of HDFS (and fix errors) via fsck;
see http://hadoop.apache.org/docs/r2.4.0/hadoop-project-dist/hadoop-common/CommandsManual.html#fsck
Once that's done, you can check HBase's state with hbck;
see http://hbase.apache.org/book/hbck.in.depth.html
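For example, a minimal run of each (this checks the whole namespace with /; narrow the path if your cluster is large):

# Report missing/corrupt blocks and under-replicated files
hdfs fsck /

# Check table/region consistency; add -details for verbose output
hbase hbck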
Upvotes: 1