Reputation: 96197
I need to grab large amounts (4-8 GB) of data in real time - without dropping any of it.
The old system could just about keep up with writing the data to a striped RAID array, but the data has grown faster than the disks have got faster (!) So I no longer have time to touch the disk during capture.
The new plan is to switch to Win64, install LOTS of RAM, stuff the incoming data into a buffer, and then write it all out at the end.
So I'm looking for:
A Windows API that limits new[] to physical memory and locks the pages into physical RAM, or I just disable the pagefile.
Or I use memory-mapped files and force a sync at the end when I close the file. Is there a memory-mapped file flag that prevents write-behind until I am ready?
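Something like this minimal sketch is what I have in mind for the memory-mapped option (the file name and sizes are just placeholders):

    #include <windows.h>

    int main() {
        const unsigned long long size = 4ULL << 30;  // 4 GB capture file

        HANDLE file = CreateFileW(L"capture.bin", GENERIC_READ | GENERIC_WRITE,
                                  0, nullptr, CREATE_ALWAYS,
                                  FILE_ATTRIBUTE_NORMAL, nullptr);
        if (file == INVALID_HANDLE_VALUE) return 1;

        // Size the mapping up front; the file is extended to 'size' bytes.
        // A >4 GB view needs a 64-bit process, hence the move to Win64.
        HANDLE mapping = CreateFileMappingW(file, nullptr, PAGE_READWRITE,
                                            (DWORD)(size >> 32), (DWORD)size,
                                            nullptr);
        if (!mapping) return 1;

        char* view = static_cast<char*>(
            MapViewOfFile(mapping, FILE_MAP_WRITE, 0, 0, 0));
        if (!view) return 1;

        // ... acquisition loop fills 'view' here ...

        // Force all dirty pages to disk only once the capture is finished.
        FlushViewOfFile(view, 0);       // 0 = flush the whole view
        UnmapViewOfFile(view);
        CloseHandle(mapping);
        FlushFileBuffers(file);         // also flush the file's metadata
        CloseHandle(file);
        return 0;
    }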
Upvotes: 3
Views: 2092
Reputation: 30255
Instead of using plain old VirtualAlloc and locking the pages yourself, you could call VirtualAlloc with the MEM_LARGE_PAGES flag. You need to configure a few things beforehand, though: see here.
Large pages are non-pageable by default and carry less TLB and paging overhead. Beware that allocating large pages can become slow, or fail outright, once physical memory is fragmented, since each large page needs a contiguous physical run. You may want to read this as well.
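A minimal sketch of that approach, assuming the account has already been granted the "Lock pages in memory" right in local security policy (the buffer size is illustrative):

    #include <windows.h>
    #include <cstdio>

    // Enable SeLockMemoryPrivilege in the process token; the account must
    // already hold the "Lock pages in memory" right for this to succeed.
    static bool EnableLockMemoryPrivilege() {
        HANDLE token;
        if (!OpenProcessToken(GetCurrentProcess(), TOKEN_ADJUST_PRIVILEGES, &token))
            return false;
        TOKEN_PRIVILEGES tp = {};
        tp.PrivilegeCount = 1;
        tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;
        LookupPrivilegeValueW(nullptr, L"SeLockMemoryPrivilege",
                              &tp.Privileges[0].Luid);
        BOOL ok = AdjustTokenPrivileges(token, FALSE, &tp, 0, nullptr, nullptr);
        // TRUE can still mean "not all privileges assigned", so check again.
        bool enabled = ok && GetLastError() == ERROR_SUCCESS;
        CloseHandle(token);
        return enabled;
    }

    int main() {
        if (!EnableLockMemoryPrivilege()) return 1;

        SIZE_T page = GetLargePageMinimum();   // typically 2 MB on x64
        if (page == 0) return 1;               // large pages not supported

        // The allocation size must be a multiple of the large-page minimum.
        SIZE_T size = ((4ULL << 30) + page - 1) & ~(page - 1);  // 4 GB, rounded up

        void* buf = VirtualAlloc(nullptr, size,
                                 MEM_RESERVE | MEM_COMMIT | MEM_LARGE_PAGES,
                                 PAGE_READWRITE);
        if (!buf) {
            std::printf("VirtualAlloc failed: %lu\n", GetLastError());
            return 1;
        }
        // ... capture into 'buf'; large pages are never paged out ...
        VirtualFree(buf, 0, MEM_RELEASE);
        return 0;
    }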
Upvotes: 1
Reputation: 29480
What you'll have to do is overload operator new and allocate and lock that memory yourself.
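A minimal sketch of that idea: VirtualLock can only pin pages within the process working-set quota, so the quota has to be raised first (all sizes below are illustrative):

    #include <windows.h>
    #include <new>

    // Commit pages directly and pin them in physical RAM.
    void* operator new(std::size_t size) {
        void* p = VirtualAlloc(nullptr, size, MEM_RESERVE | MEM_COMMIT,
                               PAGE_READWRITE);
        if (!p || !VirtualLock(p, size)) {
            if (p) VirtualFree(p, 0, MEM_RELEASE);
            throw std::bad_alloc();
        }
        return p;
    }

    void* operator new[](std::size_t size) { return ::operator new(size); }

    void operator delete(void* p) noexcept {
        if (p) VirtualFree(p, 0, MEM_RELEASE);  // releasing also unlocks the pages
    }

    void operator delete[](void* p) noexcept { ::operator delete(p); }

    int main() {
        // Raise the working-set quota so VirtualLock can pin large buffers.
        SetProcessWorkingSetSize(GetCurrentProcess(),
                                 SIZE_T(1) << 30,   // 1 GB minimum
                                 SIZE_T(6) << 30);  // 6 GB maximum
        char* buffer = new char[512 * 1024 * 1024];  // allocated and locked
        delete[] buffer;
        return 0;
    }

Note that replacing the global operators makes every C++ allocation page-granular and locked, which is wasteful for small objects; in practice you would scope the overloads to the classes holding the big capture buffers.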
Upvotes: 5
Reputation: 968
One quick way would be to disable the paging file system-wide. You can then create a special heap for your data that uses physical memory only, but note that the new/delete machinery that handles memory management for small bits of data normally uses the process heap. Use the HeapCreate() function from the Win32 API, then get new/delete to use that heap.
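A minimal sketch of wiring a type's new/delete to such a private heap (the DataBuffer type and all sizes are illustrative; with the pagefile disabled system-wide, whatever the heap commits is backed by physical RAM):

    #include <windows.h>
    #include <new>

    // A dedicated, growable heap for the capture data, separate from the
    // process heap that ordinary new/delete draw from.
    static HANDLE g_dataHeap = nullptr;

    struct DataBuffer {
        // Route this type's new/delete to the private heap.
        static void* operator new(std::size_t size) {
            void* p = HeapAlloc(g_dataHeap, 0, size);
            if (!p) throw std::bad_alloc();
            return p;
        }
        static void operator delete(void* p) noexcept {
            if (p) HeapFree(g_dataHeap, 0, p);
        }
        char block[64 * 1024];  // one 64 KB chunk of captured data
    };

    int main() {
        // 64 MB committed up front; maximum size 0 = growable.
        g_dataHeap = HeapCreate(0, 64 * 1024 * 1024, 0);
        if (!g_dataHeap) return 1;

        DataBuffer* chunk = new DataBuffer;  // comes from g_dataHeap
        // ... fill chunk->block ...
        delete chunk;

        HeapDestroy(g_dataHeap);
        return 0;
    }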
Upvotes: 0