Russell Newman

Reputation: 83

Writing binary files (~12MB) consecutively very fast (30 per second) from a C++ program in Linux

I'm trying to write out raw video frames (12.4MB each) in real time (30fps) to a CFast 2.0 card (formatted ext4) which is connected via a SATA 3 (6Gb/s) interface.

The card is rated at 430MB/sec and when I benchmark the drive with consecutive 100MB file writes, it happily reaches ~420MB/sec.

The problem is that when I'm writing smaller files, ~12MB, the throughput drops to about 350MB/sec, which becomes too slow for my purposes.

The file-writing routine is a relatively simple fopen-based loop (pseudocode):

for (each frame)
{
    FILE *file = fopen(frame_filename, "wb");
    fwrite(img_header, 1, header_size, file);
    fwrite(img_data, 1, data_size, file);
    fclose(file);
}

I've tried both single-threaded and multi-threaded approaches, but there is not much difference. I'm guessing there is some significant overhead in creating a new file and closing it. Currently the filesystem is ext4, although I'd ultimately like to get it working with exFAT.

Is there a way to interact with the filesystem at a low level that would allow creating and filling a large number of files with much lower overhead? Alternatively, are there optimization tricks for batch-saving a large number of files to disk?

Upvotes: 3

Views: 269

Answers (0)
