ChiseledAbs

Reputation: 2061

Too big nested arrays cause segmentation fault (core dumped)

I'm running into a problem: I need to do some big data crunching, and creating very large arrays seems to cause Segmentation fault (core dumped). Here is a minimal reproduction of the problem:

int main() {
    struct { char a[2000][12]; } b[2000];
    return 0;
}

I'm using 64-bit Arch Linux with cc as the compiler. ulimit -s returns 8192, which seems strange since I have 24 GB of RAM. Any idea how to fix the problem? I think it has to do with the stack and the heap, but I have no idea what those are.

Upvotes: 1

Views: 631

Answers (3)

Kirill Bulygin

Reputation: 3826

Basically you are trying to allocate 2000 * 12 * 2000 / 1024 = 46875 KB on the stack, but you are allowed to use only 8192 KB. A quick fix is to run ulimit -s 50000.
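To double-check that figure, a small standalone program (not part of the original question, just an illustration) can print the size with sizeof:

#include <stdio.h>

/* Same layout as in the question: 2000 elements, each holding a 2000 x 12 char array. */
struct big { char a[2000][12]; };

int main(void) {
    size_t bytes = sizeof(struct big) * 2000;             /* 48,000,000 bytes */
    printf("%zu bytes = %zu KB\n", bytes, bytes / 1024);  /* prints 46875 KB */
    return 0;
}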

In short, about the stack and the heap: the stack is the private memory of each function call (that's where the contents of local variables reside: scalar values, addresses and so on), and the heap is a shared pool of memory with generally much less strict limits (see e.g. malloc(3)).
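As a rough illustration of the difference (a sketch using only standard C, not code from the question): a small local variable lives on the stack and is counted against ulimit -s, while memory obtained from malloc comes from the heap:

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    char on_stack[1024];                        /* on the stack, counted against the 8192 KB limit */

    char *on_heap = malloc(2000UL * 2000 * 12); /* the ~48 MB from the question, taken from the heap */
    if (on_heap == NULL) {                      /* heap exhaustion is reported, not a crash */
        perror("malloc");
        return 1;
    }

    on_stack[0] = 'x';
    on_heap[0] = 'y';
    printf("%c %c\n", on_stack[0], on_heap[0]);

    free(on_heap);
    return 0;
}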

Upvotes: 1

speed488

Reputation: 306

You're using stack memory by declaring your array as a local variable. Use the heap with dynamic memory allocation instead:

#include <stdlib.h>

typedef struct {
    char a[2000][12];
} bigArray;

int main(void) {
    /* Allocate the 2000 elements on the heap instead of the stack. */
    bigArray *array = malloc(2000 * sizeof(bigArray));
    if (array == NULL)
        return 1;   /* allocation failed */

    // Do stuff with array here

    free(array);
    return 0;
}

Upvotes: 1

P.P

Reputation: 121357

ulimit -s

doesn't return the total RAM size. It returns just the stack size limit of the current shell (and of every process it creates). So the amount of RAM doesn't matter.
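For what it's worth, the same limit can be read from inside a program with getrlimit(2); this sketch (illustrative, not from the answer) prints the soft limit in bytes:

#include <stdio.h>
#include <sys/resource.h>

int main(void) {
    struct rlimit rl;
    if (getrlimit(RLIMIT_STACK, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    /* rl.rlim_cur is what `ulimit -s` reports, but in bytes rather than KB */
    printf("stack soft limit: %lld bytes\n", (long long)rl.rlim_cur);
    return 0;
}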

You can increase it using ulimit -s unlimited. But I'd suggest using dynamic memory allocation for such large arrays instead: your array is ~48 MB, and you can easily run into trouble if that much isn't available on the stack, mainly because a "stack allocation" failure is difficult to detect.

Upvotes: 1
