pkdc

Reputation: 173

Not deleting dynamically allocated memory and letting it be freed by the OS after program termination

I have read this question and answer dynamically allocated memory after program termination, and I want to know if it is okay to NOT delete dynamically allocated memory and let it be freed by the OS after program termination. So, if I have allocated some memory for objects that I need throughout the program, is it ok to skip deleting them at the end of the program, in order to make the code run faster?

Upvotes: 0

Views: 455

Answers (3)

Christian Hackl

Reputation: 27548

First of all, the number one rule in C++ is:

Avoid dynamic allocation unless you really need it!

Don't use new lightly; not even if it's safely wrapped in std::make_unique or std::make_shared. The standard way to create an instance of a type in C++ is:

T t;

In C++, you need dynamic allocation only if an object should outlive the scope in which it was originally created.

If and only if you need to dynamically allocate an object, consider using std::shared_ptr or std::unique_ptr. Those will deallocate automatically when the object is no longer needed.

Second,

is it ok to skip deleting them at the end of the programme, in order to make the code run faster?

Absolutely not, because of the "in order to make the code run faster" part. This would be premature optimisation.


Those are the basic points.

However, you still have to consider what constitutes a "real", or bad memory leak.

Here is a bad memory leak:

#include <iostream>

int main()
{
    int count;
    std::cin >> count;
    for (int i = 0; i < count; ++i)
    {
        int* memory = new int[100];
    }
}

This is not bad because the memory is "lost forever"; any remotely modern operating system will clean everything up for you once the process has gone (see Kerrek SB's answer in your linked question).

It is bad because memory consumption is not constant when it could be; it will unnecessarily grow with user input.

Here is another bad memory leak:

void OnButtonClicked()
{
    std::string* s = new std::string("my"); // evil!
    label->SetText(*s + " label");
}

This piece of (imaginary and slightly contrived) code will make memory consumption grow with every button click. The longer the program runs, the more memory it will take.

Now compare this with:

int main()
{
    int* memory = new int[100];
}

In this case, memory consumption is constant; it does not depend on user input and will not become bigger the longer the program runs. While stupid for such a tiny test program, there are situations in C++ where deliberately not deallocating makes sense.

Singleton comes to mind. A very good way to implement Singleton in C++ is to create the instance dynamically and never delete it; this avoids all order-of-destruction issues (e.g. SettingsManager writing to Log in its destructor when Log was already destroyed). When the operating system clears the memory, no more code is executed and you are safe.

Chances are that you will never run into a situation where it's a good idea to avoid deallocation. But be wary of "always" and "never" rules in software engineering, especially in C++. Good memory management is much harder than matching every new with a delete.

Upvotes: 0

Non-maskable Interrupt

Reputation: 3911

Most sane OSes release all memory and local resources owned by a process upon its termination (the OS may do it lazily, or merely decrement a share counter, but that does not matter much for this question). So it is safe to skip releasing those resources.

However, it is a very bad habit, and you gain almost nothing. If you find that releasing objects takes a long time (e.g. walking a very long list of objects), you should refine your code and choose a better algorithm.

Also, although the OS will release all local resources, there are exceptions such as shared memory and system-wide semaphores, which you are required to release explicitly.

Upvotes: 3

gd1

Reputation: 11413

The short answer is yes, you can. The long answer is that you probably should not: if your code ever needs to be refactored and turned into a library, you are delivering a considerable amount of technical debt to the person who will do that job, which could be you.

Furthermore, if you have a real, hard-to-find memory leak (not a memory leak caused by you intentionally not freeing long-living objects) it's going to be quite time consuming to debug it with valgrind due to a considerable amount of noise and false positives.

Have a look at std::shared_ptr and std::unique_ptr. The latter has no overhead.

Upvotes: 9
