Reputation: 696
How do I monitor the peak memory consumed by a process in Linux? This is not a program I can internally modify to measure peak memory usage.
I do not really want detailed measurements, nor do I want them to slow down my program excessively, so Valgrind or anything similarly heavyweight is not what I am looking for. And as in the earlier post "Peak memory usage of a linux/unix process", time -v doesn't seem to report memory on my machine.
I can just run top or ps and extract the memory-consumption strings for my process ID using a simple script. However, my process runs for about 20-30 minutes, so I want to be able to log the samples and get the max. I can tolerate coarse-grained samples, every minute or so. Specifically, how do I: 1. fork this simple memory-measuring script in zsh, and 2. kill it when the process under test finishes?
Upvotes: 8
Views: 13149
Reputation: 15955
If you only need to monitor a single process, you can do something like this:
#!/bin/bash
PID=$(pidof myprog)
while true; do
    printf "%s: %s kB\n" "$(date --iso=sec)" "$(awk '/^VmHWM:/{print $2}' "/proc/$PID/status")"
    sleep 5s
done
Note that you can easily modify it to also print the current memory usage (VmRSS) in addition to the peak so far. Also note that VmHWM is the peak resident memory (RAM) usage; use VmPeak if you are interested in peak virtual memory usage instead.
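For reference, all three fields live in the same status file; here is a quick sanity check against the current shell (any PID whose status file you can read works the same way):

```shell
# Peak RSS (VmHWM), current RSS (VmRSS) and peak virtual size (VmPeak)
# for the current shell, read straight from its status file:
vm=$(awk '/^Vm(HWM|RSS|Peak):/{print $1, $2, $3}' "/proc/$$/status")
echo "$vm"
```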
If your process forks and you want to measure total RAM usage for the whole group of processes, you have to use a memory cgroup, because that is the only way to actually catch short peaks: any solution that polls current memory usage will miss peaks shorter than the polling interval.
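A minimal sketch of the cgroup approach, assuming a cgroup v2 hierarchy mounted at /sys/fs/cgroup and a kernel new enough to expose memory.peak (5.19+); the group name memwatch and the sleep workload are made up for the example, and the script skips gracefully when it lacks root:

```shell
CG=/sys/fs/cgroup/memwatch          # hypothetical group name
msg="skipped: needs root, cgroup v2, and memory.peak"
if [ "$(id -u)" -eq 0 ] && mkdir "$CG" 2>/dev/null; then
    if [ -f "$CG/memory.peak" ]; then
        # Move a shell into the group and run the workload there;
        # memory of every forked child is accounted to the group.
        sh -c "echo \$\$ > '$CG/cgroup.procs'; exec sleep 1"
        msg="peak=$(cat "$CG/memory.peak") bytes"
    fi
    rmdir "$CG" 2>/dev/null
fi
echo "$msg"
```

Because the kernel does the accounting, even sub-millisecond spikes are captured, which no polling loop can guarantee.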
Upvotes: 0
Reputation: 11
A better alternative for measuring peak (high-water) RSS usage is the cgmemtime tool, available here:
https://github.com/gsauthof/cgmemtime
It is as easy to use as /usr/bin/time, without the slowdown of Valgrind's massif. It is built on the kernel's cgroups feature, so it is also more accurate than polling-based methods.
Upvotes: 1
Reputation: 11
Accurate memory metrics are provided by the pagemap kernel interface, which is used by the libpagemap library (https://fedorahosted.org/libpagemap/). The library also ships userspace utilities, so you can start monitoring memory right away.
Upvotes: 1
Reputation: 11
Rather than polling /proc a billion times a second, why not just process the output from strace?
http://tstarling.com/blog/2010/06/measuring-memory-usage-with-strace/
Upvotes: 1
Reputation: 981
This depends on which kind of memory you want to monitor.
Summing the Private_Dirty values of a process's anonymous mappings (per process, not per thread) lets you monitor the malloc'd physical memory each process uses; I call that category M.a.p.d below. You could write a C program to make it even faster, but awk seemed like the minimal tool for the purpose, and it gets the real numbers with the least overhead. These categories partition what ps reports as RSS, so summing them gives you more accurate, less confusing numbers.
M.a.p.d:
awk '/^[0-9a-f]/{if ($6=="") {anon=1}else{anon=0}} /Private_Dirty/{if(anon) {asum+=$2}else{nasum+=$2}} END{printf "sum=%d\n",asum}' /proc/<pid>/smaps
M.a.p.c:
awk '/^[0-9a-f]/{if ($6=="") {anon=1}else{anon=0}} /Private_Clean/{if(anon) {asum+=$2}else{nasum+=$2}} END{printf "sum=%d\n",asum}' /proc/<pid>/smaps
M.n.p.d:... and so on
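The same idea in one self-contained command, printing both sums so the anonymous/file-backed split is visible (run here against the current shell; substitute any PID whose smaps you can read):

```shell
# Column 6 of an smaps map header is the backing path; an empty column
# means the mapping is anonymous. Private_Dirty lines that follow belong
# to the most recently seen header.
sums=$(awk '/^[0-9a-f]+-[0-9a-f]+ /{anon = ($6 == "")}
            /^Private_Dirty:/{if (anon) asum += $2; else nasum += $2}
            END{printf "anon=%d kB file=%d kB", asum, nasum}' "/proc/$$/smaps")
echo "$sums"
```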
Upvotes: 1
Reputation: 17818
Valgrind with massif should not be too heavy, but I'd recommend using /proc instead. You can easily write your own monitor script; here is mine, for your convenience:
#!/bin/bash
ppid=$$
maxmem=0
"$@" &
pid=$(pgrep -P "$ppid" -n -f "$1")   # $! may work here but not later
while [[ -n "$pid" ]]; do
    #mem=$(ps v | grep "^[ ]*${pid}" | awk '{print $8}')
    #the previous does not work with MPI
    mem=$(awk '/^VmRSS:/{print $2}' "/proc/$pid/status" 2>/dev/null)
    if [[ -n "$mem" && "$mem" -gt "$maxmem" ]]; then
        maxmem=$mem
    fi
    sleep 1
    savedpid=$pid
    pid=$(pgrep -P "$ppid" -n -f "$1")
done
wait "$savedpid"   # returns immediately: the job has already finished
exitstatus=$?      # exit status of wait is the exit status of "$@"
echo -e "Memory usage for $* is: ${maxmem} kB. Exit status: ${exitstatus}\n"
Upvotes: 2
Reputation:
/proc/pid/smaps, like /proc/pid/maps, only gives information about virtual memory mappings, not actual physical memory usage. top and ps give the RSS, which (depending on what you want to know) may not be a good indicator of memory usage.
One good bet, if you're running on a Linux kernel later than 2.6.28.7, is to use the Pagemap feature. There's a discussion of this and source for some tools at www.eqware.net/Articles/CapturingProcessMemoryUsageUnderLinux.
The page-collect tool is intended to capture memory usage of ALL processes, and so probably imposes a greater CPU burden than you want. You should easily be able to modify it, however, so that it captures data for only a specific process ID. That would reduce the overhead enough that you could run it every few seconds. I haven't tried it, but I think the page-analyze tool should run without change.
EQvan
Upvotes: 1
Reputation: 5870
Just use top with -n to iterate a specified number of times, -d to set the delay between updates, and -b (batch mode) so the output can be piped. You can grab only the output relevant to your process by grepping for its PID, like:
top -b -n 30 -d 60 | grep <process-id>
Read the top manual page for more information
man top
Of course, you can also grab the column you need by using awk.
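When post-processing top's batch output with awk, the RES column (field 6 in top's default layout, which is an assumption about your configuration) holds resident memory; a sketch against a sample captured line:

```shell
# A captured process line from `top -b` (illustrative values):
line='  4321 user   20   0  265432  98765  12345 S  12.5  1.2   0:42.00 myprog'
# RES is the 6th whitespace-separated field in the default layout:
res_kb=$(printf '%s\n' "$line" | awk '{print $6}')
echo "RES=${res_kb} kB"   # RES=98765 kB
```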
Upvotes: 8
Reputation: 8972
Actually, what I said before:
"""
try
/usr/bin/time -v yourcommand
that should help. If you use only "time", bash executes its built-in (which does not have -v).
"""
does not work on my machine (it reports 0).
I made the following perl script (that I called smaps):
#!/usr/bin/perl
use 5.010;
use strict;
use warnings;

my $max = 0;
while ( open my $f, '<', "/proc/$ARGV[0]/smaps" ) {
    local $/;                 # slurp the whole file
    $_ = <$f>;
    my $rss = 0;
    $rss += $1 while /^Rss:\s*(\d+)/mg;   # sum Rss over all mappings
    $max = $rss if $rss > $max;
    open my $g, '>', '/tmp/max';
    say $g $max;
    sleep 1;                  # poll once per second
}
And then I call it (for instance, to watch qgit's memory usage):
bash -c './smaps $$ & exec qgit'
Use single quotes so the "daughter" shell interprets $$ (which will be the PID of qgit itself after the exec). This answer I did test :-D
HTH
Upvotes: 3
Reputation: 2947
You could use a munin-node plugin to do this, but it's a little heavyweight. http://munin.projects.linpro.no/
Upvotes: 1