Lightness Races in Orbit

Reputation: 385254

Is it "safe" on Linux to mix `new[]` and `delete`?

Someone on IRC claimed that, although allocating with new[] and deleting with delete (not delete[]) is UB, on Linux platforms (no further details about the OS) it would be safe.

Is this true? Is it guaranteed? Is it to do with something in POSIX that specifies that dynamically-allocated blocks should not have metadata at the start?

Or is it just completely untrue?


Yes, I know I shouldn't do it. I never would.
I am curious about the veracity of this idea; that's it!


By "safe", I mean: "will not cause behaviour other than if the original allocation had been performed by new, or the de-allocation performed by delete[]". That is, we might see 1 "element" destruction or n, but no crashing.

Upvotes: 2

Views: 747

Answers (5)

LiKao

Reputation: 10658

It is definitely not safe, as you can verify with the following code:

#include <iostream>

class test {
public:
  test()  { std::cout << "Constructor" << std::endl; }
  ~test() { std::cout << "Destructor" << std::endl; }
};

int main() {
  test *t = new test[10];
  delete t;  // wrong: should be delete[] t -- this is the UB being demonstrated
  return 0;
}

Have a look at http://ideone.com/b8BiQ . It fails miserably.

It may appear to work when you use only fundamental types rather than classes, but even that is not guaranteed.

EDIT: Some explanations for those of you who want to know why this crashes:

new and delete mainly serve as wrappers around malloc(); hence calling free() on a new'ed pointer is "safe" most of the time (provided you remember to call the destructor), but you should not rely on it. For new[] and delete[], however, the situation is more complicated.

When an array of class objects is constructed using new[], each default constructor is called in turn. When you do delete[], each destructor is called. However, each destructor also has to be supplied a this pointer as a hidden parameter. So before calling the destructors, the program has to find the locations of all objects within the reserved memory, in order to pass those locations as this pointers. Hence all the information needed to later locate these objects has to be stored somewhere.

Now the easiest way would be a global map somewhere, which stores this information for all new[]'ed pointers. In this case, if delete is called instead of delete[], only one of the destructors would be called and the entry would not be removed from the map. However, this method is usually not used, because map lookups are slow and memory management should be as fast as possible.

Hence libstdc++ uses a different solution. Since only a few bytes are needed as additional information, it is fastest to just over-allocate by those few bytes, store the information at the beginning of the memory, and return the pointer to the memory just after the bookkeeping. So if you allocate an array of 10 objects of 10 bytes each, the program will allocate 100+X bytes, where X is the size of the data needed to reconstruct the this pointers.

So in this case the memory looks something like this:

| Bookkeeping | First Object | Second Object |....
^             ^
|             This is what is returned by new[]
|
this is what is returned by malloc()

So if you pass the pointer you have received from new[] to delete[], it will call all destructors, then subtract X from the pointer, and hand that to free(). However, if you call delete instead, it will call the destructor for the first object only and then immediately pass the unadjusted pointer to free(), which means free() is handed a pointer which was never returned by malloc(), so the result is UB.

Have a look at http://ideone.com/tIiMw to see what gets passed to delete and delete[]. As you can see, the pointer returned from new[] is not the pointer which was allocated inside; 4 is added to it before it is returned to main(). When delete[] is called correctly, the same 4 is subtracted and we end up with the correct pointer inside delete[]; this subtraction is missing when calling delete, and we get the wrong pointer.

When new[] is called on a fundamental type, the compiler knows that it will not have to call any destructors later, so it optimizes the bookkeeping away. However, an implementation is definitely allowed to write bookkeeping even for fundamental types, and it is also allowed to add bookkeeping when you call plain new.

This bookkeeping in front of the real pointer is actually a very good trick in case you ever need to write your own memory allocation routines as a replacement for new and delete. There is hardly any limit on what you can store there, so one should never assume that anything returned from new or new[] was actually the pointer returned by malloc().

Upvotes: 2

Stack Overflow is garbage

Reputation: 248099

Of course it's not true. That person is mixing up several different concerns:

  • how does the OS handle allocations/deallocations
  • correct calls to constructors and destructors
  • UB means UB

On the first point, I'm sure he's correct. It is common to handle both in the same way on that level: it is simply a request for X bytes, or a request to release the allocation starting at address X. It doesn't really matter if it's an array or not.

On the second point, everything falls apart. new[] calls the constructor for each element in the allocated array. delete calls the destructor for the one element at the specified address. And so, if you allocate an array of objects and free it with delete, only one element will have its destructor invoked. (This is easy to forget because people invariably test this with arrays of ints, where the difference is unnoticeable.)

And then there's the third point, the catch-all. It's UB, and that means it's UB. The compiler may make optimizations based on the assumption that your code does not exhibit any undefined behavior. If it does, it may break some of these assumptions, and seemingly unrelated code might break.

Upvotes: 13

Adrian Ratnapala

Reputation: 5703

I expect that new[] and delete[] essentially boil down to malloc() and free() under Linux (gcc, glibc, libstdc++), except that the constructors and destructors get called. The same goes for new and delete, except that the constructors and destructors are invoked differently. This means that if his constructors and destructors don't matter, he can probably get away with it. But why try?

Upvotes: 0

sharptooth

Reputation: 170509

This question discusses in great detail when exactly mixing new[] and delete looks safe (i.e. shows no observable problems) with Visual C++. I suppose that by "on Linux" you actually mean "with gcc", and I've observed very similar results with gcc on ideone.com.

Please note that this requires:

  1. global operator new() and operator new[]() functions to be implemented identically and
  2. the compiler optimizing away the "prepend with number of elements" allocation overhead

and also only works for types with trivial destructors.

Even with these requirements met, there's no guarantee it will work on a specific version of a specific compiler. You'll be much better off simply not doing that: relying on undefined behavior is a very bad idea.

Upvotes: 3

spraff

Reputation: 33425

Even if it happens to be safe on some environment, don't do it. There's no reason to want to do it.

Even if it did return the right memory to the OS, the destructors wouldn't be called properly.

It's definitely not true for all or even most Linuxes, your IRC friend is talking bollocks.

POSIX has nothing to do with C++. In general, this is unsafe. If it works anywhere, it's because of the compiler and library, not the OS.

Upvotes: 7
