Reputation: 6645
I'm trying to write a C program to test how much memory is on my system. I'm planning to run it under various different conditions:
I am doing this to learn more about how memory allocation behaves at the limits of the system's real and virtual memory.
I'm running this on a machine with 4GB RAM and 8GB Swap.
What I have currently is something like this:
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int *ptr;
    int mb = 1048576;
    long long int i;

    for (i = 0; i < 1000000000000; i++)
    {
        printf("Trying to allocate %lld MBytes\n", i * 10 * sizeof(int));
        ptr = (int*) calloc(10 * mb, sizeof(int));
        if (ptr == 0) {
            // clean up
            free(ptr);
            printf("Ran out of memory\n");
            return 1;
        }
    }
}
I was hoping that this would continue to allocate blocks of 40 MB (sizeof(int) is 4 on my system), with calloc initializing the memory to zero. When no more memory was available, it would terminate the program and free up the memory.
When I run it, it continues to run well beyond the limits of my memory. It finally died while printing the line "Trying to allocate 5707960 MBytes" (indicating almost 6000 GB of memory).
Can anybody figure out where I'm going wrong?
Thanks to @Blank Xavier for pointing out that the page file size should be considered when allocating this way.
I modified the code as follows:
int main(void)
{
    int *ptr;
    int mb = 1048576;
    int pg_sz = 4096;
    long long int i;

    for (i = 0; i < 1000000000000; i++)
    {
        printf("Trying to allocate %lld MBytes\n", i * pg_sz * sizeof(int) / mb);
        ptr = (int*) calloc(pg_sz, sizeof(int));
        if (ptr == 0) {
            // clean up
            free(ptr);
            printf("Ran out of memory\n");
            return 1;
        }
    }
}
And now it bombs out printing:
"Trying to allocate 11800 MBytes"
which is what I expect with 4GB RAM and 8GB swap. By the way, it prints much more slowly after 4GB since it is swapping to disk.
Upvotes: 2
Views: 3935
Reputation: 882048
First off, some advice:
Please don't cast the return value from the allocation functions in C. That can mask certain errors that will almost certainly bite you at some point (such as not having the prototype in scope on a system where pointers and integers are different widths).
The allocation functions return void *, which is implicitly converted to any other object pointer type, so no cast is needed.
In terms of your actual question, I'd be starting with a massive allocation at the front, ensuring that it fails, at least on a system without certain optimisations (a).
In that case, you then gradually step down until the first one succeeds. That way, you don't have to worry about housekeeping information taking up too much space in the memory arena or the potential for memory fragmentation: you simply find the largest single allocation that succeeds immediately.
In other words, something like the following pseudo-code:

allocsz = 1024 * 1024 * 1024
ptr = allocate_and_zero (allocsz)
if ptr != NULL:
    print "Need to up initial allocsz value from " + allocsz + "."
    exit
while ptr == NULL:
    allocsz = allocsz - 1024
    ptr = allocate_and_zero (allocsz)
print "Managed to allocate " + allocsz + " bytes."
(a): As to why calloc on your system may seem to return more memory than you have in swap: GNU libc will, above a certain threshold size, use mmap rather than the heap, hence your allocation is not restricted to the swap file size (its backing storage is elsewhere). From the malloc documentation under Linux:
Normally, malloc() allocates memory from the heap, and adjusts the size of the heap as required, using sbrk(2). When allocating blocks of memory larger than MMAP_THRESHOLD bytes, the glibc malloc() implementation allocates the memory as a private anonymous mapping using mmap(2).
This is a problem that won't be fixed by my solution above, since its initial massive allocation will also be above the threshold.
In terms of physical memory, since you're already using procfs, you should probably just have a look inside /proc/meminfo. MemTotal should give you the physical memory available (which may not be the full 4G, since some of that address space is stolen for other purposes).
For virtual memory allocations, keep the allocation size below the threshold so that they come out of the heap rather than an mmap area. Another snippet from the Linux malloc documentation:
MMAP_THRESHOLD is 128 kB by default, but is adjustable using mallopt(3).
So your choices are to use mallopt to increase the threshold, or to drop the 10M allocation size a bit (say, to 64K).
Upvotes: 3
Reputation:
You can't determine physical memory by a huge allocation and then reducing the allocation size until it succeeds.
This will only determine the size of the largest available virtual memory block.
You need to loop, allocating physical-page-sized (which will also be virtual-page-sized) blocks until the allocation fails. With such a simple program, you don't need to worry about keeping track of your allocations - the OS will return them when the programme exits.
Upvotes: 3
Reputation: 9354
Well, the program sort of does what you intend, but the huge constant needs an LL suffix; otherwise an older (pre-C99) compiler may "randomly" truncate it, treating the number as a plain int or long.
However, when you free the pointer, which is a null pointer, it won't actually do anything. It won't free all the memory you've allocated, as you have no pointers to it.
Moreover, if you fall out of the main loop, what gets returned from main is undefined.
Upvotes: 0
Reputation: 4384
You must free your allocated memory in the success case of your loop:
for (i = 0; i < 1000000000000; i++)
{
    printf("Trying to allocate %lld MBytes\n", i * 10 * sizeof(int));
    ptr = (int*) calloc(10 * mb, sizeof(int));
    if (ptr == 0) {
        printf("Ran out of memory\n");
        return 1;
    }
    else
        free(ptr);
}
Upvotes: -1