Nocta

Reputation: 199

Why use new and delete at all?

I'm new to C++ and I'm wondering why I should even bother using new and delete? It can cause problems (memory leaks) and I don't get why I shouldn't just initialize a variable without the new operator. Can someone explain it to me? It's hard to google that specific question.

Upvotes: 6

Views: 5753

Answers (8)

James Kanze

Reputation: 153909

First, if you don't need dynamic allocation, don't use it.

The most frequent reason for needing dynamic allocation is that the object will have a lifetime which is determined by the program logic rather than lexical scope. The new and delete operators are designed to support explicitly managed lifetimes.
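For instance, a minimal sketch of an object whose lifetime is driven by program events rather than by scope (Connection, onOpenRequest and onCloseRequest are made-up names; std::unique_ptr is used as the owner, and C++14's make_unique is assumed):

    #include <memory>

    struct Connection { /* ... */ };

    std::unique_ptr<Connection> current;            // owner with program-logic lifetime

    void onOpenRequest()  { current = std::make_unique<Connection>(); }
    void onCloseRequest() { current.reset(); }      // the Connection is destroyed here, not at scope exit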

Another common reason is that the size or structure of the "object" is determined at runtime. For simple cases (arrays, etc.) there are standard classes (std::vector) which will handle this for you, but for more complicated structures (e.g. graphs and trees), you'll have to do this yourself. (The usual technique here is to create a class representing the graph or tree, and have it manage the memory.)
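A small illustration of the simple case, where std::vector performs the runtime-sized allocation for you:

    #include <cstddef>
    #include <iostream>
    #include <vector>

    int main() {
        std::size_t n = 0;
        std::cin >> n;                  // the size is only known at run time
        std::vector<int> values(n);     // the vector does the dynamic allocation
        // ... no explicit new[] or delete[] anywhere
    }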

And there is the case where the object must be polymorphic, and the actual type won't be known until runtime. (There are some tricky ways of handling this without dynamic allocation in the simplest cases, but in general, you'll need dynamic allocation.) In this case, std::unique_ptr might be appropriate to handle the delete, or if the object must be shared, std::shared_ptr (although usually, objects which must be shared fall into the first category, above, and so smart pointers aren't appropriate).
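A sketch of the polymorphic case: a hypothetical factory whose concrete result type is chosen at runtime, with std::unique_ptr handling the delete (Shape, Circle, Square and makeShape are made-up names):

    #include <memory>
    #include <string>

    struct Shape {
        virtual ~Shape() = default;
        virtual double area() const = 0;
    };

    struct Circle : Shape {
        double r;
        explicit Circle(double r) : r(r) {}
        double area() const override { return 3.14159265 * r * r; }
    };

    struct Square : Shape {
        double s;
        explicit Square(double s) : s(s) {}
        double area() const override { return s * s; }
    };

    // The concrete type is only chosen at run time, so the object lives on the
    // free store; unique_ptr takes care of the delete.
    std::unique_ptr<Shape> makeShape(const std::string& kind) {
        if (kind == "circle") return std::make_unique<Circle>(1.0);
        return std::make_unique<Square>(1.0);
    }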

There are probably other reasons as well, but these are the three that I've encountered the most often.

Upvotes: 2

Yakk - Adam Nevraumont

Reputation: 275330

Modern C++ styles often frown on explicit calls to new and delete outside of specialized resource management code.

This is not because stack/automatic storage is always sufficient, but because RAII smart resource owners (be they containers, shared pointers, or something else) make almost all direct memory wrangling unnecessary. And as memory management is an error-prone problem, this makes your code more robust, easier to read, and sometimes faster (since the fancy resource owners can use techniques you might not bother with everywhere).

This is exemplified by the rule of zero: write no destructor, no copy/move assignment, and no copy/move constructor. Store state in smart storage, and let it handle the cleanup for you.
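A small sketch of the rule of zero (Employee is a made-up class): all state lives in members that already own their resources, so nothing needs to be written by hand:

    #include <string>
    #include <utility>
    #include <vector>

    // Rule of zero: every member already manages its own resource, so the class
    // declares no destructor and no copy/move operations at all.
    class Employee {
        std::string name_;
        std::vector<int> reviewScores_;
    public:
        explicit Employee(std::string name) : name_(std::move(name)) {}
        void addScore(int s) { reviewScores_.push_back(s); }
        // compiler-generated destructor and copy/move do the right thing
    };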

None of the above applies when you yourself are writing smart memory owning classes. This is a rare thing to need to do, however. It also requires C++14 (for make_unique) to get rid of the penultimate excuse to call new.

Now, under the above style the free store is still used, just not directly. The free store (aka heap) is needed because automatic storage (aka the stack) only supports really simple object lifetime rules (scope-based, compile-time-deterministic size and count, FILO order). As runtime-sized and runtime-counted data is common, and object lifetimes are often not that simple, the free store is used by most programs. Sometimes copying an object around on the stack is enough to make the simple lifetime less of a problem, but at other times identity is important.

The final reason is stack overflow. On some C++ implementations the stack/automatic storage is seriously constrained in size. What's more, there is rarely, if ever, a reliable failure mode when you put too much stuff in it. By storing large data on the free store, we reduce the chance that the stack will overflow.
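For example, a large buffer is safer on the free store than in automatic storage (the exact sizes are illustrative):

    #include <vector>

    void process() {
        // int big[10'000'000];             // roughly 40 MB of automatic storage: likely overflow
        std::vector<int> big(10'000'000);   // the elements live on the free store instead
        // ... use big ...
    }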

Upvotes: 2

Martin James

Reputation: 24847

Another reason may be that you are calling an external library or API with a C-style interface. Setting up a callback in such cases often means that context data must be supplied and returned in the callback, and such an interface usually provides only a 'simple' void* or int*. Allocating an object or struct with new is appropriate for such cases (you can delete it later in the callback, should you need to).
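A hedged sketch of this pattern; register_callback, callback_t and JobContext are invented stand-ins for whatever the real C API provides:

    #include <cstdio>

    // Hypothetical C-style API: context can only be passed through a plain void*.
    typedef void (*callback_t)(void* context);

    void register_callback(callback_t cb, void* context) {
        cb(context);                        // stand-in: a real library would invoke this later
    }

    struct JobContext { int id; };

    void onDone(void* context) {
        JobContext* ctx = static_cast<JobContext*>(context);
        std::printf("job %d finished\n", ctx->id);
        delete ctx;                         // the callback owns the context and releases it
    }

    int main() {
        register_callback(onDone, new JobContext{42});   // the context outlives the registering scope
    }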

Upvotes: 1

For historical and efficiency reasons, C++ (and C) memory management is explicit and manual.

Sometimes, you might allocate on the call stack (e.g. by using VLAs or alloca(3)). However, that is not always possible, because

  1. stack size is limited (depending on the platform, to a few kilobytes or a few megabytes).
  2. memory needs do not always follow a FIFO or LIFO discipline. It does happen that you need to allocate memory which will be freed (or become useless) much later during execution, in particular because it might be the result of some function (and the caller, or its caller, would release that memory); see the sketch after this list.
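For instance, a factory-style allocation that cannot follow the stack's LIFO discipline (readGreeting is a made-up name; a smart pointer stands in for raw new/delete):

    #include <memory>
    #include <string>

    // The allocation below cannot live in the callee's stack frame: the result
    // outlives the call and is released much later by the caller.
    std::unique_ptr<std::string> readGreeting() {
        return std::make_unique<std::string>("hello");
    }

    int main() {
        auto greeting = readGreeting();   // freed only here, at the end of main
    }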

You definitely should read about garbage collection and dynamic memory allocation. In some languages (Java, Ocaml, Haskell, Lisp, ....) or systems, a GC is provided, and is in charge of releasing memory of useless (more precisely unreachable) data. Read also about weak references. Notice that most GCs need to scan the call stack for local pointers.

Notice that it is possible, but difficult, to have quite efficient garbage collectors (but usually not in C++). For some programs, Ocaml (with a generational copying GC) is faster than the equivalent C++ code (with explicit memory management).

Managing memory explicitly has the advantage (important in C++) that you don't pay for something you don't need. It has the inconvenience of putting more burden on the programmer.

In C or C++ you might sometimes consider using Boehm's conservative garbage collector. With C++ you might sometimes need to use your own allocator instead of the default std::allocator. Read also about smart pointers, reference counting, std::shared_ptr, std::unique_ptr, std::weak_ptr, the RAII idiom, and the rule of three (which in C++11 becomes the rule of five). The recent wisdom is to avoid explicit new and delete (e.g. by using standard containers and smart pointers).
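A tiny sketch of that recent wisdom, with a container and std::make_unique replacing explicit new and delete (Node is a made-up type):

    #include <memory>
    #include <vector>

    struct Node { int value = 0; };

    int main() {
        auto n = std::make_unique<Node>();   // no explicit new ...
        std::vector<int> data(1000);         // the container owns its own storage
    }                                        // ... and no explicit delete either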

Be aware that the most difficult situations in managing memory are arbitrary, possibly circular, graphs of references.

On Linux and some other systems, valgrind is a useful tool to hunt memory leaks.

Upvotes: 6

Sanjay Soni

Reputation: 227

When you use new, your object is stored on the heap, and it remains there until you manually delete it. Without new, your object goes on the stack and is destroyed automatically when it goes out of scope. The stack has a fixed size, so if there is no room left for a new object, a stack overflow occurs. This often happens when a lot of nested functions are being called, or when there is infinite recursion. If the current size of the heap is too small to accommodate new memory, the operating system can add more memory to the heap.
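A short illustration of the two lifetimes (Tracer is a made-up type that just prints when it is constructed and destroyed):

    #include <cstdio>

    struct Tracer {
        const char* name;
        explicit Tracer(const char* n) : name(n) { std::printf("%s constructed\n", name); }
        ~Tracer()                                 { std::printf("%s destroyed\n", name); }
    };

    int main() {
        Tracer stackObj("automatic (stack) object");            // destroyed automatically at '}'
        Tracer* heapObj = new Tracer("dynamic (heap) object");
        delete heapObj;                                          // lives until this delete
    }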

Upvotes: 1

6502

Reputation: 114461

Indeed, in many cases new and delete are not needed; you can just use standard containers instead and leave the allocation/deallocation management to them.

One of the reasons you may need explicit allocation is objects for which identity is important (i.e. they are not just values that can be copied around).

For example, if you have a GUI "window" object, then making copies probably doesn't make sense, which more or less rules out all the standard containers (they're designed for objects that can be copied and assigned). In this case, if the object needs to survive the function that creates it, the simplest solution is probably to allocate it explicitly on the heap, possibly using a smart pointer to avoid leaks or use-after-delete.
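A sketch of that situation, assuming a hypothetical non-copyable Window type and a factory function createMainWindow:

    #include <memory>

    // Hypothetical Window type: it has identity, so copying it makes no sense.
    class Window {
    public:
        Window() = default;
        Window(const Window&) = delete;
        Window& operator=(const Window&) = delete;
    };

    std::unique_ptr<Window> createMainWindow() {
        return std::make_unique<Window>();   // outlives this function; no leak, no use-after-delete
    }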

In other cases it may be important to avoid copies not because they're illegal, but because they're not very efficient (big objects), and explicitly handling the instance's lifetime may be a better (faster) solution.

Another case where explicit allocation/deallocation may be the best option is a complex data structure that cannot be represented by the standard library (for example a tree in which each node is also part of a doubly-linked list).
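For example, a node type like the following cannot be expressed as a single standard container, so the enclosing structure would manage the nodes itself (the field names are illustrative):

    // A node that is part of a tree and, at the same time, of a doubly-linked
    // list; the enclosing structure allocates and frees such nodes itself.
    struct Node {
        Node* parent = nullptr;   // tree links
        Node* left   = nullptr;
        Node* right  = nullptr;
        Node* prev   = nullptr;   // intrusive list links
        Node* next   = nullptr;
        int   key    = 0;
    };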

Upvotes: 2

vz0

Reputation: 32923

Only in simple programs can you know beforehand how much memory you will use; in general, you cannot foresee it.

However, with modern C++ (C++11 and later) you generally rely on standard containers like std::vector and std::map for memory allocation, and smart pointers help you avoid memory leaks, so you rarely need to use new and delete explicitly by hand.
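For instance, a std::map grows with its input at runtime without any explicit new or delete:

    #include <iostream>
    #include <map>
    #include <string>

    int main() {
        std::map<std::string, int> wordCount;
        for (std::string w; std::cin >> w; )
            ++wordCount[w];                  // the map allocates nodes as the input grows
        std::cout << wordCount.size() << " distinct words\n";
    }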

Upvotes: 1

Bathsheba

Reputation: 234665

The alternative, allocating on the stack, will cause you trouble, as stack sizes are often limited to a few megabytes and you'll end up with lots of value copies. You'll also have problems sharing stack-allocated data across function calls.

There are alternatives: using std::shared_ptr (C++11 onwards) will do the delete for you once the shared pointer is no longer being used. A technique referred to by the hideous acronym RAII is exploited by the shared pointer implementation. I mention it explicitly since most resource cleanup idioms are RAII-based. You can also make use of the comprehensive data structures available in the C++ Standard Template Library which eliminate the need to get your hands too dirty with explicit memory management.
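A minimal sketch of shared_ptr doing the delete for you (Widget is a made-up type):

    #include <memory>

    struct Widget { int value = 0; };

    int main() {
        auto p = std::make_shared<Widget>();   // allocated on the free store
        auto q = p;                            // shared ownership: use count is now 2
    }                                          // last owner goes away here; the Widget is deleted automatically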

But formally, every new must be balanced with a delete. Similarly for new[] and delete[].
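In code, the pairing looks like this:

    int main() {
        int* single = new int(42);
        int* block  = new int[10];
        // ...
        delete   single;    // new   pairs with delete
        delete[] block;     // new[] pairs with delete[]
    }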

Upvotes: 3
