Reputation: 2011
RandomAccessFile is quite slow for random access to a file. You often read about implementing a buffered layer over it, but code that actually does this is hard to find online.
So my question is: would those of you who know of an open-source implementation of such a class share a pointer, or share your own implementation?
It would be nice if this question turned into a collection of useful links and code about this problem, which I'm sure is shared by many and was never addressed properly by Sun.
Please, no references to memory mapping, as files can be way bigger than Integer.MAX_VALUE.
Upvotes: 26
Views: 29896
Reputation: 3461
The Apache PDFBox project has a nice, tested BufferedRandomAccessFile class.
Licensed under the Apache License, Version 2.0.
It is an optimized version of the java.io.RandomAccessFile class, as described by Nick Zhang on JavaWorld.com. It is based on the jmzreader implementation and was augmented to handle unsigned bytes.
The source code is here:
UPDATE 2024.01.24:
In May 2022, in commit a1ea618, BufferedRandomAccessFile was replaced by RandomAccessReadBufferedFile in the PDFBox project (PDFBOX-5434).
Same idea, with a somewhat different implementation. See the source code here:
Upvotes: 4
Reputation: 1005
import java.io.File;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.RandomAccessFile;

/**
 * Adds caching to a random access file.
 *
 * Rather than writing straight through to disk, which appears to be what
 * RandomAccessFile/FileChannel do, keep a small buffer and read/write through
 * it when possible. A single buffer is used, so reads and writes near each
 * other are sped up, while reads and writes outside the cached block are not.
 */
public class BufferedRandomAccessFile implements AutoCloseable {

    private static final int DEFAULT_BUFSIZE = 4096;

    /**
     * The wrapped random access file; we hold a cache in front of it.
     */
    private final RandomAccessFile raf;

    /**
     * The size of the buffer.
     */
    private final int bufsize;

    /**
     * The buffer.
     */
    private final byte[] buf;

    /**
     * Current position in the file.
     */
    private long pos = 0;

    /**
     * When the buffer has been read, this tells us where in the file the
     * buffer starts.
     */
    private long bufBlockStart = Long.MAX_VALUE;

    // Must be updated on write to the file.
    private long actualFileLength = -1;

    private boolean changeMadeToBuffer = false;

    // Must be updated as we write to the buffer.
    private long virtualFileLength = -1;

    public BufferedRandomAccessFile(File file, String mode) throws FileNotFoundException {
        this(file, mode, DEFAULT_BUFSIZE);
    }

    /**
     * @param file the file to open.
     * @param mode how to open the random access file.
     * @param b size of the buffer.
     * @throws FileNotFoundException if the file cannot be opened.
     */
    public BufferedRandomAccessFile(File file, String mode, int b) throws FileNotFoundException {
        this(new RandomAccessFile(file, mode), b);
    }

    public BufferedRandomAccessFile(RandomAccessFile raf) throws FileNotFoundException {
        this(raf, DEFAULT_BUFSIZE);
    }

    public BufferedRandomAccessFile(RandomAccessFile raf, int b) {
        this.raf = raf;
        try {
            this.actualFileLength = raf.length();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
        this.virtualFileLength = actualFileLength;
        this.bufsize = b;
        this.buf = new byte[bufsize];
    }

    /**
     * Sets the position of the byte at which the next read/write should occur.
     *
     * @param pos the new position in the file.
     */
    public void seek(long pos) throws IOException {
        this.pos = pos;
    }

    /**
     * Sets the length of the file.
     */
    public void setLength(long fileLength) throws IOException {
        this.raf.setLength(fileLength);
        // The underlying file now has exactly this length, so both the
        // on-disk length and the logical (buffered) length must track it.
        this.actualFileLength = fileLength;
        this.virtualFileLength = fileLength;
    }

    /**
     * Writes the entire buffer to disk, if needed.
     */
    private void writeBufferToDisk() throws IOException {
        if (!changeMadeToBuffer) return;
        int amountOfBufferToWrite = (int) Math.min((long) bufsize, virtualFileLength - bufBlockStart);
        if (amountOfBufferToWrite > 0) {
            raf.seek(bufBlockStart);
            raf.write(buf, 0, amountOfBufferToWrite);
            this.actualFileLength = virtualFileLength;
        }
        changeMadeToBuffer = false;
    }

    /**
     * Flushes the buffer to disk and forces a sync.
     */
    public void flush() throws IOException {
        writeBufferToDisk();
        this.raf.getChannel().force(false);
    }

    /**
     * Ensures that the buffer is the one that contains pos.
     *
     * After this call it is safe to write to the buffer to update the byte at
     * pos. Reading the byte at pos is only valid if this returns true, i.e. a
     * previous write or setLength has made the file large enough to have a
     * byte at pos.
     *
     * @return true if the buffer contains data that may be read at the current
     *         position, i.e. the file length is greater than the current position.
     */
    private boolean readyBuffer() throws IOException {
        boolean isPosOutsideOfBuffer = pos < bufBlockStart || bufBlockStart + bufsize <= pos;
        if (isPosOutsideOfBuffer) {
            writeBufferToDisk();
            // The buffer always starts at a multiple of bufsize, e.g. for a
            // bufsize of 4 a buffer can start at offset 0, 4, 8, 12, ...
            // Work out where the buffer block should start for the given position.
            long bufferBlockStart = (pos / bufsize) * bufsize;
            assert bufferBlockStart >= 0;
            // If the file is large enough, read it into the buffer. If it is
            // not, there is nothing to read; either way the buffer is then
            // ready to have writes made to it.
            if (bufferBlockStart < actualFileLength) {
                raf.seek(bufferBlockStart);
                // read() may return fewer bytes than requested, so loop until
                // the buffer is full or the end of the file is reached.
                int bytesRead = 0;
                while (bytesRead < bufsize) {
                    int r = raf.read(buf, bytesRead, bufsize - bytesRead);
                    if (r < 0) break;
                    bytesRead += r;
                }
            }
            bufBlockStart = bufferBlockStart;
        }
        return pos < virtualFileLength;
    }

    /**
     * Reads a byte from the file.
     *
     * @return an integer in the range 0-255, or -1 if the end of the file has
     *         been reached.
     */
    public int read() throws IOException {
        if (!readyBuffer()) {
            return -1;
        }
        try {
            return buf[(int) (pos - bufBlockStart)] & 0xff;
        } finally {
            pos++;
        }
    }

    /**
     * Writes a single byte to the file.
     *
     * @param b the byte to write.
     */
    public void write(byte b) throws IOException {
        readyBuffer(); // Ignore the result; writing past the end of the file is fine.
        buf[(int) (pos - bufBlockStart)] = b;
        changeMadeToBuffer = true;
        pos++;
        if (pos > virtualFileLength) {
            virtualFileLength = pos;
        }
    }

    /**
     * Writes all given bytes to the random access file at the current position.
     */
    public void write(byte[] bytes) throws IOException {
        int written = 0;
        int bytesToWrite = bytes.length;

        // First copy into the in-memory buffer as much as fits in the current block.
        readyBuffer();
        int startPositionInBuffer = (int) (pos - bufBlockStart);
        int lengthToWriteToBuffer = Math.min(bytesToWrite - written, bufsize - startPositionInBuffer);
        assert startPositionInBuffer + lengthToWriteToBuffer <= bufsize;
        System.arraycopy(bytes, written, buf, startPositionInBuffer, lengthToWriteToBuffer);
        pos += lengthToWriteToBuffer;
        if (pos > virtualFileLength) {
            virtualFileLength = pos;
        }
        written += lengthToWriteToBuffer;
        this.changeMadeToBuffer = true;

        // Write the rest straight to the random access file.
        if (written < bytesToWrite) {
            writeBufferToDisk();
            int toWrite = bytesToWrite - written;
            raf.write(bytes, written, toWrite);
            pos += toWrite;
            if (pos > virtualFileLength) {
                virtualFileLength = pos;
                actualFileLength = virtualFileLength;
            }
        }
    }

    /**
     * Reads up to bytes.length bytes into the given array.
     *
     * @return the number of bytes read.
     */
    public int read(byte[] bytes) throws IOException {
        int read = 0;
        int bytesToRead = bytes.length;
        while (read < bytesToRead) {
            // First see if we need to fill the cache.
            if (!readyBuffer()) {
                // No more to read.
                return read;
            }
            // Now read as much as we can (or need) from the cache into the
            // given byte[], never past the end of the file.
            int startPositionInBuffer = (int) (pos - bufBlockStart);
            int lengthToReadFromBuffer = (int) Math.min(
                    Math.min(bytesToRead - read, bufsize - startPositionInBuffer),
                    virtualFileLength - pos);
            System.arraycopy(buf, startPositionInBuffer, bytes, read, lengthToReadFromBuffer);
            pos += lengthToReadFromBuffer;
            read += lengthToReadFromBuffer;
        }
        return read;
    }

    @Override
    public void close() throws IOException {
        try {
            this.writeBufferToDisk();
        } finally {
            raf.close();
        }
    }

    /**
     * @return the length of the file, including any writes still held in the buffer.
     */
    public long length() throws IOException {
        return virtualFileLength;
    }
}
Upvotes: 0
Reputation: 21
"RandomAccessFile is quite slow for random access to a file. You often read about implementing a buffered layer over it, but code doing this isn't possible to find online."
Well, it is possible to find online.
For one, the JAI source code for jpeg2000 has an implementation, and there is an even less encumbered implementation at:
http://www.unidata.ucar.edu/software/netcdf-java/
javadocs:
Upvotes: 2
Reputation: 78579
Well, I do not see a reason not to use java.nio.MappedByteBuffer even if the files are bigger than Integer.MAX_VALUE.
Evidently you will not be allowed to define a single MappedByteBuffer for the whole file, but you can have several MappedByteBuffers accessing different regions of it.
The position and size parameters of FileChannel.map are of type long, which means you can provide values over Integer.MAX_VALUE; the only thing you have to take care of is that the size of any single buffer is not bigger than Integer.MAX_VALUE.
Therefore, you could define several maps like this:
buffer[0] = fileChannel.map(FileChannel.MapMode.READ_WRITE, 0L,          Integer.MAX_VALUE);
buffer[1] = fileChannel.map(FileChannel.MapMode.READ_WRITE, 2147483647L, Integer.MAX_VALUE);
buffer[2] = fileChannel.map(FileChannel.MapMode.READ_WRITE, 4294967294L, Integer.MAX_VALUE);
...
In summary, the size cannot be bigger than Integer.MAX_VALUE, but the start position can be anywhere in your file.
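For illustration, here is a small, self-contained sketch of that chunking scheme; the file size and chunk size are tiny stand-ins for a multi-gigabyte file and for windows of up to Integer.MAX_VALUE bytes:

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;

public class ChunkedMap {
    static int demo() throws IOException {
        Path tmp = Files.createTempFile("chunked", ".bin");
        long fileSize = 10;  // stand-in for a multi-gigabyte file
        long chunkSize = 4;  // stand-in for a window of up to Integer.MAX_VALUE bytes
        try (RandomAccessFile raf = new RandomAccessFile(tmp.toFile(), "rw");
             FileChannel channel = raf.getChannel()) {
            raf.setLength(fileSize);

            // One MappedByteBuffer per chunk; the last chunk may be shorter.
            int nChunks = (int) ((fileSize + chunkSize - 1) / chunkSize);
            MappedByteBuffer[] buffer = new MappedByteBuffer[nChunks];
            for (int i = 0; i < nChunks; i++) {
                long start = i * chunkSize;
                long size = Math.min(chunkSize, fileSize - start);
                buffer[i] = channel.map(FileChannel.MapMode.READ_WRITE, start, size);
            }

            // Address an absolute file position by picking the chunk and offset.
            long pos = 7;
            buffer[(int) (pos / chunkSize)].put((int) (pos % chunkSize), (byte) 42);
            return buffer[(int) (pos / chunkSize)].get((int) (pos % chunkSize));
        } finally {
            Files.deleteIfExists(tmp);
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(demo()); // prints 42
    }
}
```

Note that this simple version uses non-overlapping chunks, so a multi-byte record straddling a chunk boundary would need special handling (or overlapping windows).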
In the Book Java NIO, the author Ron Hitchens states:
Accessing a file through the memory-mapping mechanism can be far more efficient than reading or writing data by conventional means, even when using channels. No explicit system calls need to be made, which can be time-consuming. More importantly, the virtual memory system of the operating system automatically caches memory pages. These pages will be cached using system memory and will not consume space from the JVM's memory heap.
Once a memory page has been made valid (brought in from disk), it can be accessed again at full hardware speed without the need to make another system call to get the data. Large, structured files that contain indexes or other sections that are referenced or updated frequently can benefit tremendously from memory mapping. When combined with file locking to protect critical sections and control transactional atomicity, you begin to see how memory mapped buffers can be put to good use.
I really doubt that you will find a third-party API doing something better than that. Perhaps you may find an API written on top of this architecture to simplify the work.
Don't you think that this approach ought to work for you?
Upvotes: 13
Reputation: 25150
You can make a BufferedInputStream from a RandomAccessFile with code like,
RandomAccessFile raf = ...
FileInputStream fis = new FileInputStream(raf.getFD());
BufferedInputStream bis = new BufferedInputStream(fis);
Some things to note
Probably the way you want to use this would be something like,
RandomAccessFile raf = ...
FileInputStream fis = new FileInputStream(raf.getFD());
BufferedInputStream bis = new BufferedInputStream(fis);
//do some reads with buffer
bis.read(...);
bis.read(...);
//seek to a different section of the file, so discard the previous buffer
raf.seek(...);
bis = new BufferedInputStream(fis);
bis.read(...);
bis.read(...);
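Put together as a runnable sketch (the file contents and seek position are just illustrative), showing that the FileInputStream shares the RandomAccessFile's file descriptor and that the buffer must be discarded after a seek:

```java
import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.file.Files;
import java.nio.file.Path;

public class BufferedRafRead {
    static String demo() throws IOException {
        Path tmp = Files.createTempFile("demo", ".txt");
        Files.write(tmp, "abcdefghij".getBytes());
        StringBuilder out = new StringBuilder();
        try (RandomAccessFile raf = new RandomAccessFile(tmp.toFile(), "r")) {
            FileInputStream fis = new FileInputStream(raf.getFD());
            BufferedInputStream bis = new BufferedInputStream(fis);

            // Buffered reads from the start of the file.
            out.append((char) bis.read()); // 'a'
            out.append((char) bis.read()); // 'b'

            // Seek via the RandomAccessFile; the old buffer is now stale
            // and must be discarded by creating a new BufferedInputStream.
            raf.seek(5);
            bis = new BufferedInputStream(new FileInputStream(raf.getFD()));
            out.append((char) bis.read()); // 'f'
            out.append((char) bis.read()); // 'g'
        } finally {
            Files.deleteIfExists(tmp);
        }
        return out.toString();
    }

    public static void main(String[] args) throws IOException {
        System.out.println(demo()); // prints abfg
    }
}
```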
Upvotes: 15
Reputation: 2348
If you're running on a 64-bit machine, then memory-mapped files are your best approach. Simply map the entire file into an array of equal-sized buffers, then pick a buffer for each record as needed (i.e., edalorzo's answer, except that you want overlapping buffers so that you don't have records that span buffer boundaries).
If you're running on a 32-bit JVM, then you're stuck with RandomAccessFile. However, you can use it to read a byte[] that contains your entire record, then use a ByteBuffer to retrieve individual values from that array. At worst you should need to make two file accesses: one to retrieve the position/size of the record, and one to retrieve the record itself.
However, be aware that you can start stressing the garbage collector if you create lots of byte[]s, and you'll remain I/O-bound if you bounce all over the file.
Upvotes: 1