nam

Reputation: 3632

FSReadError in Cassandra

I have bulk-inserted a large amount of data into a 2-node Cassandra cluster. After 2 days I found that the server had gone down with the error below, and I can't figure out the cause:

FSReadError in /var/lib/cassandra/data/system/hints/system-hints-jb-1090-Data.db
        at org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBuffer(CompressedRandomAccessReader.java:95)
        at org.apache.cassandra.io.util.RandomAccessReader.seek(RandomAccessReader.java:280)
        at org.apache.cassandra.io.util.PoolingSegmentedFile.getSegment(PoolingSegmentedFile.java:41)
        at org.apache.cassandra.io.sstable.SSTableReader.getFileDataInput(SSTableReader.java:1163)
        at org.apache.cassandra.db.columniterator.IndexedSliceReader$IndexedBlockFetcher.getNextBlock(IndexedSliceReader.java:362)
        at org.apache.cassandra.db.columniterator.IndexedSliceReader$IndexedBlockFetcher.fetchMoreData(IndexedSliceReader.java:332)
        at org.apache.cassandra.db.columniterator.IndexedSliceReader.computeNext(IndexedSliceReader.java:145)
        at org.apache.cassandra.db.columniterator.IndexedSliceReader.computeNext(IndexedSliceReader.java:45)
        at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
        at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
        at org.apache.cassandra.db.columniterator.SSTableSliceIterator.hasNext(SSTableSliceIterator.java:82)
        at org.apache.cassandra.db.filter.QueryFilter$2.getNext(QueryFilter.java:157)
        at org.apache.cassandra.db.filter.QueryFilter$2.hasNext(QueryFilter.java:140)
        at org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:144)
        at org.apache.cassandra.utils.MergeIterator$ManyToOne.<init>(MergeIterator.java:87)
        at org.apache.cassandra.utils.MergeIterator.get(MergeIterator.java:46)
        at org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:120)
        at org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:80)
        at org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:72)
        at org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:294)
        at org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:53)
        at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1468)
        at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1294)
        at org.apache.cassandra.db.HintedHandOffManager.doDeliverHintsToEndpoint(HintedHandOffManager.java:346)
        at org.apache.cassandra.db.HintedHandOffManager.deliverHintsToEndpoint(HintedHandOffManager.java:304)
        at org.apache.cassandra.db.HintedHandOffManager.access$300(HintedHandOffManager.java:92)
        at org.apache.cassandra.db.HintedHandOffManager$4.run(HintedHandOffManager.java:525)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
        at java.lang.Thread.run(Unknown Source)
Caused by: java.nio.channels.ClosedChannelException
        at sun.nio.ch.FileChannelImpl.ensureOpen(Unknown Source)
        at sun.nio.ch.FileChannelImpl.position(Unknown Source)
        at org.apache.cassandra.io.compress.CompressedRandomAccessReader.decompressChunk(CompressedRandomAccessReader.java:101)
        at org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBuffer(CompressedRandomAccessReader.java:87)
        ... 29 more

Thanks for the answer.

Upvotes: 0

Views: 637

Answers (1)

Nikhil

Reputation: 2308

My hunch: you have a bad disk, or you ran out of disk space. You could confirm this by running some disk-check tools on your nodes.
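
As a starting point, here is a minimal sketch (assuming the default /var/lib/cassandra layout; adjust the paths to the data_file_directories and commitlog_directory set in your cassandra.yaml) that reports free space on each node's Cassandra directories, which is the first thing to rule out before suspecting a failing disk:

    import shutil

    # Assumed default directories -- change these to match your cassandra.yaml.
    paths = ["/var/lib/cassandra/data", "/var/lib/cassandra/commitlog"]

    for path in paths:
        usage = shutil.disk_usage(path)
        free_pct = usage.free / usage.total * 100
        print(f"{path}: {usage.free // 2**30} GiB free ({free_pct:.1f}%)")

If free space looks fine, checking the disk itself (SMART status, kernel I/O errors in the system logs) on each node would be the next step.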

Upvotes: 1
