Reputation: 1513
Suppose I have the following line in my code:
struct info *pinfo = malloc(sizeof(struct info));
Usually there is another line of code like this one:
if (!pinfo)
<handle this error>
But is it really worth it? Especially if the object is so small that the code generated to check it might need more memory than the object itself.
Upvotes: 2
Views: 887
Reputation: 47923
It's true that running out of memory is rare, especially for little test programs that are only allocating tens of bytes of memory, especially on modern systems that have many gigabytes of memory available.
Yet malloc failures are very common, especially for little test programs.

malloc can fail for two reasons:

1. There's not enough memory to satisfy the allocation.
2. malloc detects that the memory-allocation heap is messed up, perhaps because you did something wrong with one of your previous memory allocations.

Now, it turns out that #2 happens all the time. And, it turns out that #1 is pretty common, too, although not because there's not enough memory to satisfy the allocation the programmer meant to do, but because the programmer accidentally passed a preposterously huge number to malloc, accidentally asking for more memory than there is in the known universe.
So, yes, it turns out that checking for malloc failure is a really good idea, even though it seems like malloc "can't fail".

The other thing to think about is: what if you take the shortcut and don't check for malloc failure? If you sail along and use the null pointer that malloc gave you instead, that'll cause your program to immediately crash, and that'll alert you to your problem just as well as an "out of memory" message would have, without your having to wear your fingers to the bone typing if(!pinfo) and fprintf(stderr, "out of memory\n"), right?
Well, no.
Depending on what your program accidentally does with the null pointer, it's possible it won't crash right away. Anyway, the crash you get, with a message like "Segmentation violation - core dumped", doesn't tell you much and doesn't tell you where your problem is. You can get segmentation violations for all sorts of reasons (especially in little test programs, especially if you're a beginner not quite sure what you're doing). You can spend hours in a futile effort to figure out why your program is crashing, without realizing it's because malloc is returning a null pointer. So, definitely, you should always check for malloc failure, even in the tiniest test programs.
Deciding which errors to test for, versus those that "can't happen" or for whatever reason aren't worth catching, is a hard problem in general. It can take a fair amount of experience to know what is and isn't worth checking for. But, truly, anybody who's programmed in C for very long can tell you emphatically: malloc failure is definitely worth checking for.
If your program is calling malloc all over the place, checking each and every call can be a real nuisance. So a popular strategy is to use a malloc wrapper:
#include <stdio.h>    /* fprintf, stderr */
#include <stdlib.h>   /* malloc, exit */
#include <string.h>   /* strerror */
#include <errno.h>    /* errno */

void *my_malloc(size_t n)
{
    void *ret = malloc(n);
    if (ret == NULL) {
        /* report why the allocation failed, then bail out */
        fprintf(stderr, "malloc failed (%s)\n", strerror(errno));
        exit(1);
    }
    return ret;
}
There are three ways of thinking about this function:

1. Whenever you have a piece of error-handling code that's liable to be repeated all over the place (here, checking for malloc failure), see if you can move it off to (centralize it in) a single function, like this.
2. Unlike malloc, my_malloc can't fail. It never returns a null pointer. It's almost magic. You can call it whenever and wherever you want, and you never have to check its return value. It lets you pretend that you never have to worry about running out of memory (which was sort of the goal all along).
3. my_malloc's benefit, that it never seems to fail, comes at a price. If the underlying malloc fails, my_malloc summarily exits (since it can't return in that case), meaning that the rest of your program doesn't get a chance to clean up. If the program were, say, a text editor, and whenever it had a little error it printed "out of memory" and then basically threw away the file the user had been editing for the last hour, the user might not be too pleased. So you can't use the simple my_malloc trick in production programs that might lose data. But it's a huge convenience for programs that don't have to worry about that sort of thing.

Upvotes: 6
Reputation: 67476
But what if malloc fails? You will dereference the NULL pointer, which is UB (undefined behaviour), and your program will (probably) crash!

Sometimes the code which checks the correctness of the data is longer than the code which does something with it :).
Upvotes: 1
Reputation: 25276
If you just want to quickly test some algorithm, then fine, but know it can fail. For example run it in the debugger.
When you include it in your Real World Program, then add all the error checking and handling needed.
Upvotes: 0
Reputation: 41222
This is very simple: if you don't check for NULL, you might end up with a runtime error. Checking for NULL helps you avoid an unexpected crash and lets you handle the error case gracefully.
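One way to "gracefully handle the error case" is to report failure to the caller instead of crashing or exiting. A hedged sketch; make_info and its field are hypothetical helpers, not from the question:

```c
#include <stdlib.h>

struct info { int id; };  /* field invented for illustration */

/* hypothetical helper: returns NULL on allocation failure so the
   caller can recover (retry, degrade, report) instead of crashing */
struct info *make_info(int id)
{
    struct info *p = malloc(sizeof *p);
    if (p == NULL)
        return NULL;        /* graceful: no dereference, no abort */
    p->id = id;
    return p;
}
```

A caller can then choose its own policy: a long-running server might shed load, while a batch tool might print a message and exit.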
Upvotes: 0
Reputation: 36463
If malloc fails then chances are the system is out of memory, or it's something else your program can't handle. It should abort immediately and at most log some diagnostics. Not handling NULL from malloc will make you end up in undefined behavior land. One might argue that having to abort because of a malloc failure is already catastrophic, but just letting it exhibit UB falls under a worse category.
Upvotes: 3