Luke

Reputation: 31

Measuring temperature while benchmarking on Linux

I want to measure the temperature and frequency of a two-socket machine while running the High-Performance Linpack (HPL) benchmark.

I wrote a shell script, sensors.sh, which I start in the background with sh sensors.sh & before launching the benchmark.

for ((;;))
do
    awk 'BEGIN{ORS=" ";} $2=="MHz" {print $4} END {print "\n"}' /proc/cpuinfo >> cpuf.dat
    awk 'BEGIN{ORS=" ";} {print $1} END {print "\n"}' /sys/devices/platform/coretemp.?/hwmon/hwmon?/temp*_input >> cput.dat
    sleep .1
done

I get my output files; however, the timestamps are not 0.1 s apart from each other. I guess the system is busy and the shell script process is not scheduled that often. HPL reports a runtime of ~1100 s, and in that time my temperature file (cput.dat) collected only ~4600 entries, instead of the ~11000 that a true 0.1 s interval would give.

Is there another way to measure the temperature and frequency while my benchmark is running and store the output in a .dat file?

Upvotes: 0

Views: 623

Answers (2)

Peter Cordes

Reputation: 363999

Your script is very inefficient: it requires a lot of separate processes to each get some CPU time before it can run the next sleep .1, so yes, system load is going to make it run less frequently than every 0.1 s.

Also, sensors is relatively expensive; maybe use its command-line options to have it check only the CPU temperature. Or I think the CPU temperature is available from files in /proc or /sys directly.


xargs with no arguments defaults to echo, so it's just an inefficient way to collapse whitespace characters (including newlines) into spaces. (If you use printf '%s\n' foo bar | strace -f xargs you can see it actually does fork + execve /bin/echo instead of simply printing the output itself, like you could do with sed or tr.)
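For example, a minimal sketch of the same whitespace-collapsing done in a single tr process (GNU tr; note the result keeps a trailing space and drops the final newline):

```shell
# Collapse runs of whitespace (including newlines) into single spaces with
# one tr process, instead of having xargs fork+exec /bin/echo to do it:
printf '%s\n' foo bar | tr -s ' \n' ' '
```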

You could use more efficient commands for the text processing that need less CPU time, and cause fewer context switches by piping through fewer separate processes. E.g. sensors piped into just one awk command that does all the text processing. And sed -n 's/^cpu MHz[^:]*: //p' /proc/cpuinfo >> frequency.dat to avoid a useless use of cat (and xargs).
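A sketch of the single-awk idea, assuming the usual sensors output where core lines look like Core 0:  +45.0°C  (...); the pattern may need adjusting for your chip, and a sample line stands in for live sensors output here:

```shell
# One awk process does the selecting, stripping, and joining that the
# grep | awk | tr | tr | xargs chain needed five processes for.
# $3+0 coerces "+45.0°C" to a number, dropping the +/°C decoration.
# In the real loop the input would come from `sensors`, e.g.:
#   sensors | awk '/^Core/ { printf "%s ", $3+0 } END { print "" }' >> temperature.dat
printf 'Core 0:  +45.0°C  (high = +80.0°C)\nCore 1:  +46.5°C  (high = +80.0°C)\n' |
awk '/^Core/ { printf "%s ", $3+0 } END { print "" }'
```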

But that's still going to have some overhead.


You could write one Perl script that pipes from sensors and closes/reopens /proc/cpuinfo itself. That would avoid all the system calls that process startup makes.

Instead of sleeping for a fixed time, you can have it check the current time and sleep until the next multiple of 0.1 seconds. You could do that with bash, too, but that will require running even more commands, and you want to cause as few context switches as possible during your benchmark.
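A bash sketch of that scheme (needs bash >= 5 for $EPOCHREALTIME; the function name sleep_to_next_tick is just illustrative):

```shell
#!/bin/bash
# Sleep until the next multiple of 0.1 s instead of a fixed 0.1 s, so
# scheduling delays under load don't accumulate as drift between samples.
sleep_to_next_tick() {
    local now next
    now=${EPOCHREALTIME//[.,]/}               # microseconds since the epoch (decimal separator stripped)
    next=$(( (now / 100000 + 1) * 100000 ))   # next 0.1 s boundary, in microseconds
    sleep "0.$(printf '%06d' $(( next - now )))"
}
```

Call sleep_to_next_tick in place of sleep .1 inside the sampling loop.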


You could also (or instead) label each line with the current time, so you know when each sample was taken. To do this without starting another external process, use bash's $EPOCHREALTIME. Like { echo -n "$EPOCHREALTIME "; awk ...; } >> cpuf.dat if you're still using bash instead of Perl.

Upvotes: 2

Barmar

Reputation: 780688

You could try running your sampling loop at high priority, so it will be less impacted by the load of the benchmark. But you need to run as root to be able to use a negative niceness.

nice -n -10 bash
for ((;;))
do
    sensors | grep Core | awk '{print $3}' | tr '+' ' ' | tr '°C' ' ' | xargs >> temperature.dat
    cat /proc/cpuinfo | grep "cpu MHz" | tr "cpu MHz : " " " | xargs >> frequency.dat
    sleep .1
done
exit

Upvotes: 2
