Reputation: 2031
In my program I create a large (~10 million elements) list of objects, where each object is about 500 bytes in size. Currently the allocation looks like this:
const int N = 10000000;
object_type ** list = malloc( N * sizeof * list );
for (int i=0; i < N; i++)
list[i] = malloc( sizeof * list[i]);
This works OK, but I have discovered that with this high number of small allocations, a significant part of the run time goes to the malloc() and subsequent free() calls. I am therefore about to change the implementation to allocate larger chunks. The simplest for me would be to allocate everything as one large chunk.
Now I know there is at least one level of virtualization between the user-space memory model and actual physical memory, but is there still a risk that I will run into problems getting such a large contiguous block of memory?
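For reference, what I have in mind is roughly the sketch below (just a sketch, error handling omitted; object_type is the same struct as above and block is just a name I picked):

#include <stdlib.h>

const int N = 10000000;

/* one backing allocation for all N elements instead of N separate mallocs */
object_type * block = malloc( N * sizeof * block );

/* element i is then simply block[i] (or &block[i] where a pointer is needed) */

free( block );   /* one free() instead of N */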
Upvotes: 0
Views: 74
Reputation: 294197
Contiguous virtual memory does not imply contiguous physical memory. If your process can allocate N pages individually, it will also be able to allocate them all in one call (and doing it in one call is actually better from many points of view). On old 32-bit architectures the limited size of the virtual address space was a problem, but on 64-bit it is no longer an issue. Besides, even on 32-bit, if you could allocate those 10 million elements individually, you should be able to allocate the same 10 million in a single call.
That being said, you probably need to carefully revisit your design and reconsider why you need to keep 10 million elements in memory.
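For example, if the rest of your code relies on the object_type ** interface, something like this (untested sketch, block is just a placeholder name) keeps that interface while going from N+1 allocations down to two:

#include <stdlib.h>

object_type * block = malloc( N * sizeof * block );   /* one contiguous backing block */
object_type ** list = malloc( N * sizeof * list );    /* pointer array, as before */
if (block != NULL && list != NULL)
    for (int i = 0; i < N; i++)
        list[i] = &block[i];                          /* pointers into the single block */

/* ... use list[i] exactly as before ... */

free( list );    /* two free() calls instead of N+1 */
free( block );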
Upvotes: 1