user0002128

Reputation: 2921

On a modern 64-bit system, what could cause a memory allocation to fail?

Assume the process still has sufficient virtual address space.

Considering that a 64-bit system has an almost infinite virtual address space, and assuming there is still available physical memory in the OS memory pool, can we assume there is zero chance of a memory allocation failing?

Upvotes: 4

Views: 520

Answers (1)

It depends. You could limit a process (e.g. with setrlimit(2) on Linux) to keep it from using all the resources, and there are good reasons to set such limits (e.g. to prevent a buggy program from eating all the resources, and to leave some for other, more important processes).
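As an illustration, here is a minimal sketch of a process capping its own address space with setrlimit(2); the 1 GiB limit and the probe allocation are arbitrary values chosen for the example:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/resource.h>

    int main(void) {
        /* Cap the virtual address space at 1 GiB (arbitrary example value).
           Any later mmap/malloc that would exceed it should fail. */
        struct rlimit rl = { .rlim_cur = 1UL << 30, .rlim_max = 1UL << 30 };
        if (setrlimit(RLIMIT_AS, &rl) != 0) {
            perror("setrlimit");
            exit(EXIT_FAILURE);
        }
        /* This allocation is larger than the limit, so malloc
           should return NULL here. */
        void *p = malloc(2UL << 30);
        printf("malloc(2 GiB) %s\n", p ? "succeeded" : "failed");
        free(p);
        return 0;
    }

The same kind of limit can also be set from the shell with ulimit -v before launching the program.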

Hence, a well-behaved program should always test whether a memory allocation succeeded (e.g. malloc(3), or operator new, both of which are often built on lower-level syscalls like mmap(2)...). And of course the resources are not infinite (at most physical RAM + swap space).

Often, the only sensible thing to do on memory exhaustion is to abort the program with a clear message (one understandable by sysadmins). Doing fancier things is much more difficult, but possible (and required in a server program, because you want to keep serving other requests...).

Hence, you should write in C:

    void* p = malloc(somesize);
    if (!p) { perror("malloc"); exit(EXIT_FAILURE); }

You could use _exit or abort instead if you are worried about handlers registered through atexit(3) calling malloc themselves... but I would not bother.

Often, a routine doing the above is called xmalloc for historical reasons; a minimal sketch follows.
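Such a wrapper (the name xmalloc is a convention, not part of any standard library) could look like this:

    #include <stdio.h>
    #include <stdlib.h>

    /* Allocate memory or abort with a message a sysadmin can act on. */
    void *xmalloc(size_t size) {
        void *p = malloc(size);
        if (!p) {
            perror("malloc");
            exit(EXIT_FAILURE);
        }
        return p;
    }

Callers can then use the returned pointer unconditionally, since xmalloc never returns NULL.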

And in C++, operator new may fail by throwing a std::bad_alloc exception (or by yielding nullptr if you use new(std::nothrow); see std::nothrow for more); both styles are sketched below.
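As a sketch (with deliberately small, arbitrary allocation sizes), both failure modes can be handled like this:

    #include <iostream>
    #include <new>
    #include <cstdlib>

    int main() {
        // Throwing form: operator new reports failure via std::bad_alloc.
        try {
            int *a = new int[1000];
            delete[] a;
        } catch (const std::bad_alloc &e) {
            std::cerr << "new failed: " << e.what() << '\n';
            return EXIT_FAILURE;
        }

        // Non-throwing form: new(std::nothrow) yields nullptr on failure.
        int *b = new (std::nothrow) int[1000];
        if (!b) {
            std::cerr << "new(std::nothrow) failed\n";
            return EXIT_FAILURE;
        }
        delete[] b;
        return 0;
    }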

Learn more about memory overcommit on Linux, virtual memory, and, as Joachim Pileborg commented, memory fragmentation. Read also about garbage collection.

Upvotes: 6
