Reputation:
When programming in C++, as a rule of thumb, what is the cut-off point where instead of ...
char array[MAX_PATH + 1] = {0};
... one would use:
char *array = new char[MAX_PATH + 1];
// ...
delete[] array;
At what point does one act to preserve space on the stack?
20 years ago I was taught you should allocate ALL arrays over 32 bytes on the heap, regardless of the performance costs, and save the stack for simple variables. I have seen a lot of modern example code use the stack pretty profligately, so has this thinking changed?
Upvotes: 13
Views: 952
Reputation: 21607
The answer to your question would depend upon your system. You only need to be concerned with stack allocations in recursive programming.
On a modern system you could easily do:
char array[123456];
but you would not want to do that in a function that calls itself repeatedly through recursion.
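To make the recursion point concrete, here is a hedged sketch (the function name is hypothetical): each call keeps a 64 KiB buffer alive on the stack, so a depth of only about 16 frames would already consume an entire 1 MB default stack.

```cpp
#include <cstddef>

// Each recursive call adds a 64 KiB frame to the stack; the frames stay
// alive until the deepest call returns, so stack usage grows with depth.
std::size_t stack_hungry(int depth) {
    char buffer[64 * 1024] = {0};        // 64 KiB per stack frame
    buffer[0] = static_cast<char>(depth);
    if (depth <= 0)
        return sizeof(buffer);
    return sizeof(buffer) + stack_hungry(depth - 1);
}
```

At depth 2 this is harmless (three frames, ~192 KiB), but the same code at depth 100 would need over 6 MB of stack.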
Upvotes: 4
Reputation: 3146
The benefits of stack allocation are that you do not have to check for failure, you do not have to worry about memory fragmentation, and the memory is automatically freed when the function returns. The drawback is that you cannot check for success: when stack allocation fails, something bad simply happens (typically a stack overflow).
Without virtual memory (e.g., on small microcontrollers), there is generally less memory available. If your program allocates too much space on the stack, it will simply overflow the stack and write over whatever lies just beyond the space reserved for it. You can set aside more stack space to handle the worst case, but then that space is dedicated to the stack all the time, whether you need it or not. With dynamic allocation, by contrast, you only pay for memory while you are using it, and you can check the return code of malloc to determine whether a failure occurred (or catch exceptions from new, for people trying to squeeze C++ into a microcontroller). Alternatively, if the application always needs the object, you might statically reserve a section of RAM for it; that gives more predictable run-time behavior, and you do not have to worry about fragmentation.
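A minimal sketch of the two failure-reporting styles just mentioned (the helper name is hypothetical): malloc signals failure with a null pointer, while new signals it by throwing std::bad_alloc.

```cpp
#include <cstdlib>
#include <new>

// Returns true only if both allocations of n bytes succeed.
bool allocate_both(std::size_t n) {
    void *raw = std::malloc(n);          // C style: check the return value
    if (raw == nullptr)
        return false;
    std::free(raw);

    try {                                // C++ style: catch the exception
        char *arr = new char[n];
        delete[] arr;
    } catch (const std::bad_alloc &) {
        return false;
    }
    return true;
}
```

A stack allocation offers neither mechanism: there is no return value to test and no exception to catch.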
With virtual memory, the OS can grow the stack dynamically by adding pages at contiguous virtual addresses at the end of the stack. That way, you only pay physical memory for the amount of stack you are actually using. However, with a small (32 bits or fewer) address space, you can still run into trouble if you have lots of threads and too much stack allocation.
Each thread has to have contiguous space for its stack and has to share the address space with everything else in the system. For example, if the OS takes 1 GB of the address space (not uncommon), and you want 1 GB for thread stacks, that leaves you with 2 GB for heap, data, and code. If you have 16 threads, that means about 64 MB per thread stack, which may or may not be enough for your application.
If you have more threads or bigger stack-allocated objects, you have to adjust thread stack sizes and heap size to strike the right balance. If you get it wrong, your program segmentation faults on stack overflow. Hence the advice to make all the "big" allocations on the heap: it gives more predictable run-time behavior.
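Adjusting a thread's stack size can be sketched with the POSIX thread API (the function names below are mine, not from the answer); the worker wants ~1 MB of frame-local stack, so the thread's stack is sized explicitly rather than trusting the platform default.

```cpp
#include <pthread.h>
#include <cstring>

// Worker that uses ~1 MB of stack in a single frame.
static void *worker(void *) {
    char scratch[1 << 20];               // ~1 MB on this thread's stack
    std::memset(scratch, 0, sizeof(scratch));
    return nullptr;
}

// Run the worker on a thread with an explicit 8 MB stack.
bool run_with_big_stack() {
    pthread_attr_t attr;
    if (pthread_attr_init(&attr) != 0)
        return false;
    pthread_attr_setstacksize(&attr, 8u * 1024 * 1024);

    pthread_t tid;
    const bool ok = (pthread_create(&tid, &attr, worker, nullptr) == 0);
    if (ok)
        pthread_join(tid, nullptr);
    pthread_attr_destroy(&attr);
    return ok;
}
```

Note that std::thread has no portable way to set the stack size, which is why the platform API appears here.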
20 years ago, a 4 GB address space was a lot of space for heap, data, code, and stack -- not so much today. Fortunately, address spaces have gotten bigger, too. With 64 bits of address space, you will run out of DRAM long before you can exhaust the virtual address space -- even if you have over 1 TB of DRAM.
Several caveats:
The run-time often imposes an arbitrary limit on stack size to catch runaway recursion bugs. If you use substantially more stack than the run-time system expects, you may have to tell it to give you a bigger stack. Usually the default size is big enough, and it is not hard to change.
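On POSIX systems, asking for a bigger stack can be sketched with getrlimit/setrlimit (the helper name is hypothetical): check the soft limit and raise it toward the hard limit when it is smaller than needed.

```cpp
#include <sys/resource.h>

// Returns true if the soft stack limit is (or can be made) at least
// `needed` bytes.
bool ensure_stack_at_least(rlim_t needed) {
    struct rlimit rl;
    if (getrlimit(RLIMIT_STACK, &rl) != 0)
        return false;
    if (rl.rlim_cur == RLIM_INFINITY || rl.rlim_cur >= needed)
        return true;                     // already large enough
    if (rl.rlim_max != RLIM_INFINITY && needed > rl.rlim_max)
        return false;                    // cannot exceed the hard limit
    rl.rlim_cur = needed;
    // Raising the soft limit affects stacks sized after this point; the
    // main thread's stack was already laid out at process startup.
    return setrlimit(RLIMIT_STACK, &rl) == 0;
}
```

On Linux the equivalent from the shell is ulimit -s; on Windows the stack reserve is a linker option, set at build time.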
The OS/run-time is not as good at giving back stack space as it is at taking it. As your program takes more stack space, it will get new pages. Those pages will most likely not be freed until the thread exits (at least that used to be the case). If you are using the stack space, that is not a big deal. If it is only rarely used and the thread runs a long time, the memory will get swapped to disk if you have backing store (usually not a problem for a desktop/laptop with a spinning drive, but maybe not so good for iOS 8 or 64-bit Android).
If you expect your code to run on a system with a large physical and virtual memory space compared to the objects you plan to stack allocate, stack allocation is probably fine. In most cases, the benefits will outweigh the costs.
Upvotes: 8
Reputation: 460
The linker sets the stack size. On Windows, static allocations are handled up to 1 GB (this may differ on other systems), but the default stack size is only 1 MB (8 MB on Linux). So the main benefit of dynamic allocation is when you do not know the size beforehand, or when your program needs some very large allocations.
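For the "size not known beforehand" case, a hedged sketch: rather than the raw new[]/delete[] pair from the question, std::vector keeps the large buffer on the heap and releases it automatically.

```cpp
#include <cstddef>
#include <vector>

// Heap-allocated, zero-filled buffer whose size is chosen at run time.
std::vector<char> make_buffer(std::size_t n) {
    return std::vector<char>(n, '\0');   // freed automatically (RAII)
}
```

A buffer of 123456 bytes would overflow a 1 MB Windows stack if declared as a local array, but is unremarkable on the heap.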
Upvotes: 4