amdixon

Reputation: 3833

malloc conditions for failure

I'm brushing up on C, redoing some old exercises, and getting some unusual results when I run this snippet (I know it's leaking, but I want to find out how much the system allows):

#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    void *page = NULL;
    int index = 0;
    while (1)
    {
        page = malloc(1073741824); // 1 GiB per iteration
        if (!page) break;
        ++index;
    }
    printf("memory failed at %d\n", index);
    return 0;
}

I'm actually getting:

memory failed at 131070

This indicates it thinks it has allocated 131070 x 1 GB of memory (leaking generously).

I had previously understood that malloc should fail before consuming all virtual memory, and indeed if I try to malloc 20 GB in one block, it fails.
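
For reference, the single-block test looks like this (64-bit build assumed):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    // one 20 GiB request; unlike the 1 GiB loop above, this single
    // block is refused outright on my machine
    void *big = malloc(20ULL * 1024 * 1024 * 1024);
    printf("20 GiB malloc %s\n", big ? "succeeded" : "failed");
    free(big);
    return 0;
}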

My setup: Ubuntu 10, 8 GB RAM, <= 2 GB swap, 1 TB HD (does this matter?)

Does anyone have an idea how it can leak more memory than I have?

Upvotes: 4

Views: 591

Answers (1)

paulsm4

Reputation: 121669

  • http://www.win.tue.nl/~aeb/linux/lk/lk-9.html

    Since 2.1.27 there are a sysctl VM_OVERCOMMIT_MEMORY and proc file /proc/sys/vm/overcommit_memory with values 1: do overcommit, and 0 (default): don't. Unfortunately, this does not allow you to tell the kernel to be more careful, it only allows you to tell the kernel to be less careful. With overcommit_memory set to 1 every malloc() will succeed. When set to 0 the old heuristics are used, the kernel still overcommits.
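
To connect this to your numbers: under overcommit, each 1 GiB malloc() only reserves address space, and 131070 x 1 GiB is almost exactly 128 TiB, the user-space virtual address limit on x86-64 Linux, so your loop is running out of address space rather than memory. A minimal sketch for checking which mode your kernel is in (it just reads the proc file quoted above; on later kernels a value of 2 means strict accounting):

#include <stdio.h>

int main(void)
{
    // 0 = heuristic overcommit (default), 1 = always overcommit,
    // 2 = strict accounting (added in later kernels)
    FILE *f = fopen("/proc/sys/vm/overcommit_memory", "r");
    if (!f) { perror("fopen"); return 1; }
    int mode;
    if (fscanf(f, "%d", &mode) == 1)
        printf("vm.overcommit_memory = %d\n", mode);
    fclose(f);
    return 0;
}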

You might also wish to look at instrumenting with mallinfo:
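
A minimal sketch of that instrumentation (glibc-specific; mallinfo() is declared in <malloc.h> and has been superseded by mallinfo2() in newer glibc, but the classic form is shown here):

#include <malloc.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    void *p = malloc(1024 * 1024); // 1 MiB, so the stats are nonzero
    struct mallinfo mi = mallinfo();
    printf("arena (bytes via sbrk)   : %d\n", mi.arena);
    printf("hblkhd (bytes via mmap)  : %d\n", mi.hblkhd);
    printf("uordblks (in-use bytes)  : %d\n", mi.uordblks);
    free(p);
    return 0;
}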

One final quote, the often-cited airline analogy for overcommit:

In a way, Linux allocates memory the way an airline sells plane tickets. An airline will sell more tickets than they have actual seats, in the hopes that some of the passengers don't show up. Memory in Linux is managed in a similar way, but actually to a much more serious degree.

Upvotes: 4
