Laxmi Lal Menaria

Reputation: 1445

Best way to use RandomAccessFile in Java

I am creating a utility that writes MSSQL table BLOBs to a data file on disk using RandomAccessFile. It is too slow because we always need to seek to the last position and then write the stream contents. Please let me know of any alternative that would speed up the RandomAccessFile writing.

I have more than 50M records; with the current logic it takes approximately 10 hours.

My code block is something like this:

RandomAccessFile randomAccessFile = new RandomAccessFile(file, "rw");
InputStream inputStream = null;

while (rows.hasNext()) {
    Row row = rows.next();
    inputStream = (InputStream) row.getValues()[0];   // blob column as a stream
    offset = randomAccessFile.length();
    byte[] buffer = new byte[8196];
    int count;
    randomAccessFile.seek(offset);                    // jump to the end of the file for every row
    randomAccessFile.setLength(offset);
    while ((count = inputStream.read(buffer)) != -1) {
        randomAccessFile.write(buffer, 0, count);     // append the blob contents
    }
}
randomAccessFile.close();

Upvotes: 0

Views: 2198

Answers (2)

mtj

Reputation: 3554

According to the code you posted, you only need to append to an existing file. This is done more easily and more efficiently with a buffered writer in append mode.

Thus, use

BufferedWriter writer = Files.newBufferedWriter(file.toPath(), StandardOpenOption.CREATE, StandardOpenOption.APPEND);

instead.

Update after Peter's comment: for an output stream, the whole thing is basically the same, except that Files does not have a nice convenience method for the "buffered" part. Therefore:

OutputStream outputStream = new BufferedOutputStream(Files.newOutputStream(file.toPath(), StandardOpenOption.CREATE, StandardOpenOption.APPEND));
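For completeness, a minimal sketch of how the question's copy loop could look on top of this appending stream. It assumes Java 9+ for InputStream.transferTo, needs the java.io and java.nio.file imports, and reuses the rows, Row and file variables from the question's code:

// Sketch only: rows, Row and file come from the question's code.
try (OutputStream outputStream = new BufferedOutputStream(
        Files.newOutputStream(file.toPath(), StandardOpenOption.CREATE, StandardOpenOption.APPEND))) {
    while (rows.hasNext()) {
        Row row = rows.next();
        try (InputStream inputStream = (InputStream) row.getValues()[0]) {
            inputStream.transferTo(outputStream);   // append the blob; no explicit seek needed
        }
    }
}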

Upvotes: 2

shazin

Reputation: 21883

Currently you are writing roughly 8 KB (8196 bytes) of data in each iteration, and each iteration performs a blocking I/O operation that takes time. Try increasing this to something much larger, for example around 10 MB (10,000,000 bytes).

byte[] buffer = new byte[10000000];
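A minimal sketch of how that larger buffer could slot into the question's loop. As a further tweak beyond the buffer size itself, it allocates the buffer once and seeks to the end of the file once, instead of doing both on every row (rows, Row and randomAccessFile are the variables from the question's code):

// Sketch only: reuses the question's variables.
byte[] buffer = new byte[10000000];                 // one large buffer, allocated once
randomAccessFile.seek(randomAccessFile.length());   // position at the end once; writes keep advancing it
while (rows.hasNext()) {
    Row row = rows.next();
    InputStream inputStream = (InputStream) row.getValues()[0];
    int count;
    while ((count = inputStream.read(buffer)) != -1) {
        randomAccessFile.write(buffer, 0, count);
    }
}
randomAccessFile.close();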

Upvotes: 0
