Dimaf

Reputation: 683

The error of sstableloader

I am trying to load data into a Cassandra cluster using sstableloader, but it fails with the following error:

Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
        at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
        at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
        at org.apache.cassandra.io.compress.BufferType$1.allocate(BufferType.java:28)
        at org.apache.cassandra.io.util.RandomAccessReader.allocateBuffer(RandomAccessReader.java:69)
        at org.apache.cassandra.io.util.RandomAccessReader.<init>(RandomAccessReader.java:62)
        at org.apache.cassandra.io.util.RandomAccessReader.open(RandomAccessReader.java:103)
        at org.apache.cassandra.io.util.RandomAccessReader.open(RandomAccessReader.java:92)
        at org.apache.cassandra.io.util.RandomAccessReader.open(RandomAccessReader.java:87)
        at org.apache.cassandra.io.util.BufferedSegmentedFile.getSegment(BufferedSegmentedFile.java:60)
        at org.apache.cassandra.io.util.SegmentedFile$SegmentIterator.next(SegmentedFile.java:271)
        at org.apache.cassandra.io.util.SegmentedFile$SegmentIterator.next(SegmentedFile.java:252)
        at org.apache.cassandra.io.sstable.format.big.BigTableReader.getPosition(BigTableReader.java:184)
        at org.apache.cassandra.io.sstable.format.SSTableReader.getPosition(SSTableReader.java:1558)
        at org.apache.cassandra.io.sstable.format.SSTableReader.getPositionsForRanges(SSTableReader.java:1489)
        at org.apache.cassandra.io.sstable.SSTableLoader$1.accept(SSTableLoader.java:128)
        at java.io.File.list(File.java:1161)
        at org.apache.cassandra.io.sstable.SSTableLoader.openSSTables(SSTableLoader.java:79)
        at org.apache.cassandra.io.sstable.SSTableLoader.stream(SSTableLoader.java:161)
        at org.apache.cassandra.tools.BulkLoader.main(BulkLoader.java:97)

If I understand the situation correctly, I need to increase MAX_HEAP_SIZE. How can I do that for sstableloader?

It does not sound very promising: "/usr/bin/sstableloader still has a hard coded -Mx256M that can't be easily overridden". https://issues.apache.org/jira/browse/CASSANDRA-7385

Thanks.

I edited bin/sstableloader to set MAX_HEAP_SIZE="16G", and after that sstableloader works.
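
For reference, the change amounts to raising the heap value assigned in bin/sstableloader (a sketch; the exact line and its placement vary between Cassandra versions):

# bin/sstableloader: raise the heap given to the bulk-loader JVM
MAX_HEAP_SIZE="16G"    # replaces the hard-coded 256M default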

Upvotes: 0

Views: 1376

Answers (2)

Liang Zhao

Reputation: 65

The sstableloader script contains the following check:

if [ "x$MAX_HEAP_SIZE" = "x" ];
   ...
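
Further down, the script hands that value to the JVM as its maximum heap size (a simplified sketch; the real invocation carries more flags and varies by version, but the main class matches the stack trace above):

"$JAVA" -ea -cp "$CLASSPATH" -Xmx$MAX_HEAP_SIZE org.apache.cassandra.tools.BulkLoader "$@"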

Thus, putting MAX_HEAP_SIZE in front of your sstableloader command sets the environment variable temporarily, and it will be picked up by sstableloader:

MAX_HEAP_SIZE="16G" sstableloader -d node /file/path

Upvotes: 2

Dimaf

Reputation: 683

For Cassandra v2.2.4 you can edit the sstableloader file with vi, nano, etc. to set MAX_HEAP_SIZE="16G", or run:

sed -i -e 's/MAX_HEAP_SIZE="256M"/MAX_HEAP_SIZE="16G"/g' sstableloader
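
To confirm the substitution took effect (run from Cassandra's bin directory, where the script lives):

grep MAX_HEAP_SIZE sstableloader

The fallback inside the guard shown in the other answer should now read MAX_HEAP_SIZE="16G".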

Upvotes: 0
