brett

Reputation: 5649

memory management issues in C++

I would like to know what the common memory management issues associated with C and C++ are, and how we can debug these errors.

Here are a few I know:

1) uninitialized variable use

2) deleting a pointer two times

3) writing past the end of an array

4) failing to deallocate memory

5) race conditions

1) malloc returning a NULL pointer on failure; in C++ you also need to cast the returned void* to the pointer type you want.

2) for strings, you need to allocate an extra byte for the terminating '\0' character.

3) double pointers.

4) delete/malloc and free/new don't go together: memory from malloc must be released with free, and memory from new with delete.

5) see what the function actually returns (its return code) on failure, and free any memory already allocated if it fails.

6) check the size when allocating memory, e.g. malloc(len + 1).

7) check how you pass a double pointer (**ptr) to a function.

8) check data sizes in function calls; a size mismatch leads to undefined behaviour.

9) failure of memory allocation.

Upvotes: 3

Views: 1623

Answers (10)

AShelly

Reputation: 35520

One you forgot:

6) dereferencing a pointer after it has been freed.

So far everyone seems to be answering "how to prevent", not "how to debug".

Assuming you're working with code which already has some of these issues, here are some ideas on debugging.

  • uninitialized variable use

    The compiler can detect a lot of this. Initializing RAM to a known value helps in debugging those that escape. In our embedded system, we do a memory test before we leave the bootloader, which leaves all the RAM set to 0x5555. This turns out to be quite useful for debugging: when an integer == 21845, we know it was never initialized.

  • deleting a pointer two times

Visual Studio should detect this at runtime. If you suspect this is happening on other systems, you can debug it by replacing the delete call with a custom wrapper something like

void checked_delete(void *p) {
    assert(*(void **)p != p);  // marker already present: this is a double delete
    *(void **)p = p;           // mark the block before releasing it
    _system_delete(p);         // stand-in for the real deallocator
}

(delete itself can't be redefined as a plain function, hence the wrapper name.)
  • writing array out of bounds

Visual Studio should detect this at runtime. In other systems, add your own sentinels

int headZONE = 0xDEAD;
int array[whatever];
int tailZONE = 0xDEAD;

//add this line to check for overruns
//- place it using binary search to zero in on the trouble spot
assert(headZONE == 0xDEAD && tailZONE == 0xDEAD);
  • failing to deallocate memory

    Watch the heap usage. Record the free heap size before and after points which create and destroy objects; look for unexpected changes. Possibly write your own wrapper around the memory functions to track blocks.

  • race conditions

    aaargh. Make sure you have a logging system with accurate timestamping.

Upvotes: 0

Loki Astari

Reputation: 264361

1) uninitialized variable use

Automatically detected by the compiler (turn warnings to full and treat warnings as errors).

2) deleting a pointer two times

Don't use raw pointers. Every pointer should be inside either a smart pointer or some form of RAII object that manages the lifetime of the pointer.

3) writing past the end of an array

Don't do it. That's a logic bug. You can mitigate it by using a container and a method that throws on out-of-bounds access (vector::at()).

4) failing to deallocate memory

Don't use raw pointers. See (2) above.

5) race conditions

Don't allow them. Allocate resources in priority order to avoid conflicting locks, then lock objects when there is potential for multiple write access (or read access when it is important).

Upvotes: 0

wheaties

Reputation: 35970

Preemptively preventing these errors in the first place:

1) Turn warnings up to error level to overcome the uninitialized-variable errors. Compilers will frequently issue such warnings, and by having them treated as errors you'll be forced to fix the problem.

2) Use smart pointers. You can find good versions of these in Boost.

3) Use vectors or other STL containers. Don't use raw arrays unless you're using one of the Boost varieties.

4) Again, use a container object or smart pointer to handle this issue for you.

5) Use immutable data structures everywhere you can and place locks around modification points for shared mutable objects.

Dealing with legacy applications

1) Same as above.

2) Use integration tests to see how different components of your application play out. This should find many cases of such errors. Seriously consider having a formal peer review done by another group writing a different segment of the application who would come into contact with your naked pointers.

3) You can overload the new operator so that it allocates one extra byte before and after an object. These bytes should then be filled with some easily identifiable value such as 0xDEADBEEF. All you then have to do is check the bytes before and after the object to see if and when your memory is being corrupted by such errors.

4) Track your memory usage by running various components of your application many times. If your memory grows, check for missing deallocations.

5) Good luck. Sorry, but this is one of those things that can work 99.9% of the time and then, boom! The customer complains.

Upvotes: 6

user3458

Reputation:

In addition to all already said, use valgrind or Bounds Checker to detect all of these errors in your program (except race conditions).

Upvotes: 2

Arun

Reputation: 20383

One common pattern I use is the following.

I keep the following three private variables in all allocator classes:

  size_t news_;
  size_t deletes_;
  size_t in_use_;

In the allocator constructor, all of these three are initialized to 0.

From then on, whenever the allocator does a new it increments news_, and whenever it does a delete it increments deletes_.

Based on that, I put a lot of asserts in the allocator code, such as:

 assert( news_ - deletes_ == in_use_ );

This works very well for me.

Addition: I place the assert as a precondition and postcondition on all non-trivial methods of the allocator. If the assert blows, then I know I am doing something wrong. If the assert does not blow, with all the testing I can do, then I get reasonably sufficient confidence in the memory management correctness of my program.

Upvotes: 0

leander

Reputation: 8717

Take a look at my earlier answer to "Any reason to overload global new and delete?" You'll find a number of things there that will help with early detection and diagnosis, as well as a list of helpful tools. Most of the tools and techniques can be applied to either C or C++.

It's worth noting that valgrind's memcheck will spot 4 of your items, and helgrind may help spot the last (data races).

Upvotes: 0

Nils

Reputation: 13767

Make sure you understand when to place objects on the heap and when on the stack. As a general rule, only put objects on the heap if you must; this will save you a lot of trouble. Learn the STL and use the containers provided by the standard library.

Upvotes: 0

Andreas Brinck

Reputation: 52519

  1. Use a good compiler and set warning level to max
  2. Wrap new/malloc and delete/free and bookkeep all allocations/deallocations
  3. Replace raw arrays with an array class that does bounds checking (or use std::vector) (harder to do in C)
  4. See 2.
  5. This is hard; there exist some specialized debuggers, such as Jinx, that target this, but I don't know how good they are.

Upvotes: 0

T.E.D.

Reputation: 44804

The best technique I know of is to avoid doing pointer operations and dynamic allocation directly. In C++, use reference parameters in preference to pointers. Use STL containers rather than rolling your own lists and containers. Use std::string instead of char *. Failing all that, take Rob K's advice and use RAII wherever you need to do allocations.

For C, there are some similar things you can try, but you are pretty much doomed. Get a copy of Lint and pray for mercy.

Upvotes: 0

Rob K

Reputation: 8926

Use RAII (Resource Acquisition Is Initialization). You should almost never be using new and delete directly in your code.

Upvotes: 9
