Reputation: 2842
I have a script that has memory leaks. I believe this because after I perform undef
on my nested objects, the amount of memory used by the script is unchanged. I have used Devel::Cycle to locate cyclical references, and I have turned those cyclical references into weak references with Scalar::Util
. The problem still remains.
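For context, this is the kind of cycle-breaking I performed. A minimal sketch (the parent/child structure here is hypothetical, not my actual data):

```perl
use strict;
use warnings;
use Scalar::Util qw(weaken);

# Two nodes that refer to each other -- a reference cycle that
# would keep both alive even after the outer variables go away.
my $parent = { name => 'parent' };
my $child  = { name => 'child', parent => $parent };
$parent->{child} = $child;

# Weaken the back-reference so the cycle no longer pins the refcounts.
# If $parent's hash is later freed, $child->{parent} becomes undef.
weaken($child->{parent});
```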
Now I am trying to use Valgrind to solve the issue. As a first test with valgrind, I tried it out on a Perl hello-world program:
#! /usr/bin/perl
use strict;
use warnings;
print "Hello world!\n";
Here was the valgrind output when running valgrind --trace-children=yes perl ./hello_world.pl:
==12823== HEAP SUMMARY:
==12823== in use at exit: 290,774 bytes in 2,372 blocks
==12823== total heap usage: 5,159 allocs, 2,787 frees, 478,873 bytes allocated
==12823==
==12823== LEAK SUMMARY:
==12823== definitely lost: 13,981 bytes in 18 blocks
==12823== indirectly lost: 276,793 bytes in 2,354 blocks
==12823== possibly lost: 0 bytes in 0 blocks
==12823== still reachable: 0 bytes in 0 blocks
==12823== suppressed: 0 bytes in 0 blocks
==12823== Rerun with --leak-check=full to see details of leaked memory
==12823==
==12823== For counts of detected and suppressed errors, rerun with: -v
==12823== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 6 from 6)
My understanding, from here, is that when the number of allocs does not equal the number of frees, you have a memory leak.
Since all I'm doing is printing hello world, I'm forced to ask the question, does the Perl interpreter itself, here v5.10.1, have at least its own memory leak or am I interpreting things all wrong?
I would like to understand this before I tackle my actual perl script.
ADDENDUM
I see in Perl 5.12.0 delta, the following:
A weak reference to a hash would leak. This was affecting DBI [RT #56908].
This may ultimately apply to my complete perl script, and not this hello world program, but it's leading me to think that I should go through the pain of installing the latest version of perl as non-root.
ADDENDUM2
I installed ActiveState Perl 5.16.3, and the problem still remains, as do my actual script's problems.
I suspect that in the case of this hello-world program I must be using or interpreting valgrind improperly, but I don't yet understand where.
UPDATE1 Daxim's answer does make a difference. When I introduce the following line in my perl script:
use Perl::Destruct::Level level => 1;
Then the valgrind output is:
==29719== HEAP SUMMARY:
==29719== in use at exit: 1,617 bytes in 6 blocks
==29719== total heap usage: 6,499 allocs, 6,493 frees, 585,389 bytes allocated
==29719==
==29719== LEAK SUMMARY:
==29719== definitely lost: 0 bytes in 0 blocks
==29719== indirectly lost: 0 bytes in 0 blocks
==29719== possibly lost: 0 bytes in 0 blocks
==29719== still reachable: 1,617 bytes in 6 blocks
==29719== suppressed: 0 bytes in 0 blocks
==29719== Rerun with --leak-check=full to see details of leaked memory
==29719==
==29719== For counts of detected and suppressed errors, rerun with: -v
==29719== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 6 from 6)
That is a substantial difference. My own script's memory leak problems remain, but at least this hello-world program now looks sensible to valgrind.
This whole thing, though, raises the question: what is the point of breaking hard cyclical references with Scalar::Util
if no memory is freed until the program exits, barring the use of this somewhat esoteric Perl::Destruct::Level
module?
Upvotes: 5
Views: 797
Reputation: 50338
As I understand it (and please correct me if I'm wrong — I haven't read the code, and the documentation seems pretty scant), the point is that, in a typical Perl program, there's a lot of stuff that will be allocated but that won't be freed before the program ends. This includes things like global variables, but also, for example, the compiled program code itself.
Now, perl could free the memory used for all that stuff when the program ends, but it turns out that there's usually not much point, since the OS will do it anyway. One exception is when something like valgrind is looking to see if there's any unfreed memory left at the end of the program, and assuming that any such memory must've been leaked.
So, basically, perl is normally lazy and won't bother freeing memory when it knows it's just about to exit and let the OS take care of it. Setting PERL_DESTRUCT_LEVEL tells perl to clean up that memory anyway, just to show tools like valgrind that it wasn't actually leaked.
Anyway, none of this, AFAIK, affects memory management during program execution. If you reduce the reference count of something to zero while your program is running (e.g. by letting a variable go out of scope, or by directly overwriting the last reference), it's going to get freed, regardless of PERL_DESTRUCT_LEVEL.
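You can see this for yourself with a small sketch (the Tracker class here is made up for illustration): perl calls DESTROY the moment the last reference goes away, mid-run, with no PERL_DESTRUCT_LEVEL involved.

```perl
use strict;
use warnings;

package Tracker;
my $destroyed = 0;
sub new       { return bless {}, shift }
sub DESTROY   { $destroyed++ }          # called when the refcount hits zero
sub destroyed { return $destroyed }

package main;
{
    my $obj = Tracker->new;
}   # $obj goes out of scope here, its refcount drops to zero,
    # and DESTROY fires immediately -- well before program exit.

print "objects destroyed so far: ", Tracker::destroyed(), "\n";
```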
However, note that this typically only frees the memory for perl itself to reuse — aside from some rare cases like explicit mmap(), it's quite unusual for any program running on a modern OS with virtual memory to actually release memory pages back to the OS while it's running.
Upvotes: 0
Reputation: 39158
The leak is intentional. vincent in #p5p comments:
Allocated memory isn't actually released unless PERL_DESTRUCT_LEVEL is set because it's faster to let the operating system handle this. PERL_DESTRUCT_LEVEL is only available for debugging builds or by using Perl::Destruct::Level from perl.
Upvotes: 4