maxmouse

Reputation: 101

malloc() in C And Memory Usage

I was trying an experiment with malloc to see if I could allocate all the memory available.

I used the following simple program and have a few questions:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char * ptr;
    int x = 100;

    while(1)
    {
        /* request a slowly growing block; nothing is ever freed */
        ptr = (char *) malloc(x++ * sizeof(char) / 2);
        printf("%p\n", (void *) ptr);
    }

    return 0;
}

1) Why is it that when using larger data types (unsigned long long int, long double) the process would use less memory, but with smaller data types (int, char) it would use more?

2) When running the program, it would stop allocating memory after it reached a certain amount (~592 MB on Windows 7 64-bit with 8 GB RAM, swap file set to system managed). The output of the printf showed 0, which means NULL. Why does it stop allocating memory after reaching this threshold instead of exhausting the system memory and swap?

I found someone in the following post trying the same thing as me; the difference is that they were not seeing any change in memory usage, but I am: Memory Leak Using malloc fails

I've tried the code on Linux (kernel 2.6.32-5-686) with similar results.

Any help and explanation would be appreciated.

Thanks,

Upvotes: 4

Views: 6891

Answers (4)

iabdalkader

Reputation: 17312

1) Usually memory is allocated in multiples of pages, so if the size you ask for is less than a page, malloc will allocate at least one page.

2) This makes sense: on a multitasking system, you're not the only user and your process is not the only one running. Many other processes share a limited set of resources, including memory. If the OS allowed one process to allocate all the memory it wanted without any limit, it wouldn't be a very good OS, right?

Finally, on Linux the kernel doesn't allocate any physical memory pages until you actually start using the memory, so just calling malloc doesn't consume any physical memory, other than what is required to keep track of the allocation itself, of course. I'm not sure about Windows, though.

Edit: The following example allocates 1GB of virtual memory

#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    void *p = malloc(1024*1024*1024); /* 1 GB, never written to */
    getc(stdin);                      /* pause so usage can be inspected in top */
    return 0;
}

If you run top you get

top -p `pgrep test`
PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
20   0 1027m  328  252 S    0  0.0   0:00.00 test

If you change malloc to calloc, and run top again you get

top -p `pgrep test`
PR   NI VIRT  RES  SHR S %CPU %MEM    TIME+ COMMAND              
20   0  1027m 1.0g 328 S    0  1.3   0:00.08 test

Upvotes: 6

SmacL

Reputation: 22922

1) When you allocate memory, each allocation takes the space of the requested memory plus the size of a heap frame. See a related question here.

2) The size of any single malloc is limited on Windows to _HEAP_MAXREQ. See this question for more info and some workarounds.

Upvotes: 1

Joachim Isaksson

Reputation: 180867

How are you reading your memory usage?

1) When allocating with char, you're allocating less memory per allocation than with, for example, long (usually a quarter as much, but it's machine dependent). Since most memory-usage tools external to the program show actually used memory rather than allocated memory, they will only show the overhead malloc() itself uses instead of the unused memory you malloc'd.

More allocations, more overhead.

You should get a very different result if you fill the malloc'd block with data for each allocation so the memory is actually used.

2) I assume you're reading that from the same tool? Try counting how many bytes you actually allocate instead; that should show the correct amount rather than just the malloc overhead.

Upvotes: 1

Eregrith

Reputation: 4367

1) This could come from the fact that memory is paged and every page has the same size. If your data fails to fit in a page and falls 'in between' two pages, I think it is moved to the beginning of the next page, creating a loss of space at the end of the previous page.

2) The threshold is smaller because, I think, every program is restricted to a certain amount of data, which is not the total maximum memory you have.

Upvotes: -1
