Reputation: 3997
I'm trying to find out whether I'm willing to go through the trouble of solving a problem that captured my interest. To solve it, I will need to create a binary file of more than 1 TB. If I'm very lucky, this can be reduced to around 300 GB, which would be the bare minimum of storage the problem requires. Obviously, the data can't be kept in RAM, so I have to figure out how and when to write to (and read from) disk.
I guess the usual ways of writing to disk are out of the question: I assume an fstream would fail miserably if it tried to load the file into memory. Does anyone here know about (preferably portable) techniques that can read and write directly to disk without first caching (parts of) the data? It also occurred to me that I might not even need a filesystem to do this. Can't I just flip bits on the disk directly (without using emacs' M-x butterfly)?
Upvotes: 0
Views: 70
Reputation: 210603
I think a memory-mapped file (on a 64-bit system) is exactly what you need.
You simply map the whole file into memory and manipulate it like a normal in-memory data structure; the operating system takes care of paging the data in and out for you.
It's not standard C++, but there is a Boost implementation that should work on major platforms.
Upvotes: 3