Reputation: 526
The following C code example (taken from http://bugs.python.org/issue19246) was executed on Windows 7 64-bit, compiled in 32-bit mode:
#include <stdio.h>
#include <stdlib.h>
#include <assert.h>

int create_huge_linked_list(void **last_item) {
    int i;
    void *prev_item = NULL;
    for (i = sizeof(void *); i < 1000000; i++) {
        void *new_item = malloc(i);
        if (new_item == NULL) {
            break;
        }
        *(void **)new_item = prev_item;
        prev_item = new_item;
    }
    *last_item = prev_item;
    return i;
}

void free_linked_list(void *last_item) {
    while (last_item != NULL) {
        void *prev_item = *(void **)last_item;
        free(last_item);
        last_item = prev_item;
    }
}

int stress_heap() {
    void *last_item;
    int amount = create_huge_linked_list(&last_item);
    free_linked_list(last_item);
    return amount;
}

void stress_twice(void) {
    int first = stress_heap();
    int second = stress_heap();
    printf("%i %i %f%%\n", first, second, 100.0 * second / first);
}

void stress_and_alloc_1_mb() {
    void *ptr;
    ptr = malloc(1000000);
    if (ptr != NULL) {
        printf("Successfully allocated 1 MB before stress\n");
        free(ptr);
        stress_heap();
        ptr = malloc(1000000);
        if (ptr != NULL) {
            printf("Successfully allocated 1 MB after stress\n");
            free(ptr);
        } else {
            printf("Failed to allocate 1 MB after stress\n");
        }
    } else {
        printf("Failed to allocate 1 MB before stress\n");
    }
}

int main() {
    stress_and_alloc_1_mb();
    stress_twice();
    return 0;
}
Outputs:
Successfully allocated 1 MB before stress
Failed to allocate 1 MB after stress
64855 64857 100.003084%
The result may be interpreted as follows: after allocating all available memory and then freeing it, the process's address space is fragmented so badly that no contiguous 1 MB chunk remains. However, the stress procedure can be repeated continuously without memory errors.
The questions are:
Upvotes: 0
Views: 116
Reputation: 117661
This is kind of a funny question, because we never really speak of fragmented in-use memory (let's ignore cache locality for now). Memory fragmentation becomes an issue when that memory is available to be handed out again, but previous allocations have split the memory pool into smaller chunks that must be joined back together.
I think the reason you ask this question is that you're used to the worst-case example, where no contiguous memory block exists without moving anything. But even in non-worst cases, detecting a potential de-fragmentation opportunity can be a very difficult problem, causing the allocator to choke on instances where a human can easily see a solution.
But to answer your question, consider this allocated chunk of memory:
aaaabbbbcccc
Now a and c get freed:
....bbbb....
And now b gets freed:
....____....
Now we have three contiguous, separate chunks of memory in the memory pool; it requires special code to detect this, and apparently the allocator chokes on it.
The answer to this is simple: defragment the memory pool. How easy or hard that is depends on what kind of allocator is used and how much bookkeeping it does.
Upvotes: 3