user1527047

Reputation:

If the memory of a variable or object is automatically released at the end of the program, why do we use a destructor?

In the following program we create the Circle object in local scope, because we are not using the new keyword. We know the memory of a variable or object is automatically released when the program finishes, so why do we use a destructor?

#include<iostream>
using namespace std;     
class Circle //specify a class
{
    private :
        double radius; //class data members
    public:
        Circle() //default constructor
        {
            radius = 0;
        }           
        void setRadius(double r) //function to set data
        {
            radius = r;
        }
        double getArea()
        {
            return 3.14 * radius * radius;
        }
        ~Circle() //destructor
        {} 
};

int main()
{
    Circle c; //default constructor invoked   
    cout << c.getArea()<<endl;     
    return 0;
}

Upvotes: 0

Views: 111

Answers (3)

bolov

Reputation: 75854

Well, first of all, you don’t need to explicitly define a destructor: the compiler will automatically generate one for you. As a side note, if you do define one, then by the Rule of Three (or the Rule of Five in C++11), if you declare any of the following: copy constructor, copy assignment, move constructor (C++11), move assignment (C++11), or destructor, you should explicitly define all of them.
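To make the Rule of Three concrete, here is a minimal sketch with a hypothetical `Buffer` class that owns a heap array: because it needs a destructor, it also defines the copy constructor and copy assignment operator (otherwise the compiler-generated copies would share one array and `delete[]` it twice).

```cpp
#include <algorithm>
#include <cstddef>

// Hypothetical Buffer class illustrating the Rule of Three.
class Buffer {
  private:
    std::size_t size_;
    int *data_;
  public:
    explicit Buffer(std::size_t size)
        : size_(size), data_(new int[size]()) {}
    ~Buffer() { delete[] data_; }                    // 1. destructor
    Buffer(const Buffer &other)                      // 2. copy constructor
        : size_(other.size_), data_(new int[other.size_]) {
        std::copy(other.data_, other.data_ + size_, data_);
    }
    Buffer &operator=(const Buffer &other) {         // 3. copy assignment
        if (this != &other) {
            int *fresh = new int[other.size_];       // allocate before freeing,
            std::copy(other.data_, other.data_ + other.size_, fresh);
            delete[] data_;                          // so self-state survives a throw
            data_ = fresh;
            size_ = other.size_;
        }
        return *this;
    }
    int &at(std::size_t i) { return data_[i]; }      // minimal accessor
};
```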

Moving on. Oversimplified, the RAII principle states that every resource allocated must be deallocated. Furthermore, every resource must have exactly one owner: an object responsible for deallocating it. That is resource management. A resource here can be anything that has to be initialized before use and released after use, e.g. dynamically allocated memory, system handles (file handles, thread handles), sockets, etc. The way this is achieved is through constructors and destructors. If your object is responsible for destroying a resource, then the resource should be destroyed when your object dies. This is where the destructor comes into play.

Your example is not that great, since your variable lives in main, so it will live for the entirety of the program.

Consider a local variable inside a function:

int f()
{
    Circle c; 
    // whatever    
    return 0;
}

Every time you call the function f, a new Circle object is created, and it is destroyed when the function returns.

Now as an exercise consider what is wrong with the following program:

std::vector<int> foo() {
  int *v = new int[100];

  std::vector<int> result(100);

  for (int i = 0; i < 100; ++i) {
    v[i] = i * 100 + 5;
  }


  //
  //  .. some code
  //

  for (int i = 0; i < 100; ++i) {
    result.at(i) = v[i];
  }

  bar(result);

  delete[] v; // array delete, since v was allocated with new[]

  return result;
}

Now this is a pretty useless program. However, consider it from the perspective of correctness. You allocate an array of 100 ints at the beginning of the function and then you deallocate it at the end of the function. So you might think that this is OK and no memory leaks occur. You couldn’t be more wrong. Remember RAII? Who is responsible for that resource? The function foo? If so, it does a very bad job at it. Look at it again:

std::vector<int> foo() {
  int *v = new int[100];

  std::vector<int> result(100); <-- might throw

  for (int i = 0; i < 100; ++i) {
    v[i] = i * 100 + 5;
  }

  //
  //  .. some code              <-- might throw in many places
  //

  for (int i = 0; i < 100; ++i) {
    result.at(i) = v[i];       <-- might (theoretically at least) throw
  }

   bar(result);                <-- might throw


  delete[] v;

  return result;
}

If at any point the function throws, the delete[] v will never be reached and the resource will never be released. So you must have a clear resource owner responsible for the destruction of that resource. And what do you know, constructors and destructors will help us:

class Responsible { // looks familiar? take a look at unique_ptr
  private:
    int *p_ = nullptr;
  public:
    Responsible(std::size_t size) {
      p_ = new int[size];
    }
    ~Responsible() {
      delete[] p_; // array delete to match new[]
    }
    // access methods (getters and setters)
};

So the program becomes:

std::vector<int> foo() {
  Responsible v(100);

  std::vector<int> result(100);

  //
  //  .. some code
  //

  return result;
}

Now even if the function throws, the resource will be properly managed, because when an exception occurs the stack is unwound, that is, all the local variables are destroyed. Lucky us: the destructor of Responsible will be invoked.
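In fact, the standard library already ships this pattern: std::unique_ptr is a general-purpose "Responsible" class. A possible sketch of foo() using it (the array contents mirror the earlier listing):

```cpp
#include <memory>
#include <vector>

std::vector<int> foo() {
  // unique_ptr<int[]> owns the array and calls delete[] in its destructor.
  std::unique_ptr<int[]> v(new int[100]);

  std::vector<int> result(100);

  for (int i = 0; i < 100; ++i) {
    v[i] = i * 100 + 5;
  }

  for (int i = 0; i < 100; ++i) {
    result.at(i) = v[i];
  }

  return result;
} // v's destructor releases the array here, even if an exception was thrown
```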

Upvotes: 2

Doonyx

Reputation: 590

Assuming memory is an infinite resource is VERY dangerous. Think about a real-time application that needs to run 24x7, listening to a data feed at a high rate (let's say 1,000 messages per second). Each message is around 1 KB, and a new memory block is allocated (on the heap, obviously) for each one. Altogether, that comes to around 82 GB per day. If you don't manage your memory, you can now see what will happen. I'm not talking about sophisticated memory-pool techniques or the like; simple arithmetic shows that we can't keep all the messages in memory. This is another example of why you have to think about memory management (from both the allocation and deallocation perspectives).

Upvotes: 2

Dávid Szabó

Reputation: 2247

Well, sometimes your object can hold pointers or other resources that need to be deallocated.

For example, if you have a pointer in your Circle class, you need to deallocate it to avoid a memory leak.

At least, that is how I understand it.
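A minimal sketch of that idea, using a hypothetical `OwningCircle` variant of the question's class that owns a heap-allocated label string; without the destructor, the `new[]` below would leak on every object:

```cpp
#include <cstddef>
#include <cstring>

// Hypothetical variant of Circle that owns a heap resource.
class OwningCircle {
  private:
    double radius_;
    char *label_;  // heap-allocated, owned by this object
  public:
    OwningCircle(double r, const char *name) : radius_(r) {
        std::size_t n = std::strlen(name) + 1;
        label_ = new char[n];
        std::memcpy(label_, name, n);   // acquire the resource
    }
    ~OwningCircle() { delete[] label_; } // release it, avoiding a leak
    double area() const { return 3.14 * radius_ * radius_; }
    const char *label() const { return label_; }
    // Copying disabled for brevity (the Rule of Three would apply).
    OwningCircle(const OwningCircle &) = delete;
    OwningCircle &operator=(const OwningCircle &) = delete;
};
```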

Upvotes: 0
