Reputation: 179
I have a script that measures the execution time of a program using the bash builtin command time.
I am trying to understand the precision of this command: as far as I know it reports times with millisecond precision, yet it uses the getrusage() function, which returns values in microseconds. However, according to this paper the real precision is only 10 ms, because getrusage samples the time relying on ticks (= 100 Hz). The paper is really old (it mentions Linux 2.2.14 running on a Pentium 166 MHz with 96 MB of RAM).
Is time still using getrusage() and 100 Hz ticks, or is it more precise on modern systems?
The testing machine is running Linux 2.6.32.
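For context, here is a minimal sketch of the mechanism I am asking about (my own illustration, not bash's actual code): a shell's time can obtain these figures by reading RUSAGE_CHILDREN after waiting for the child, and the tv_usec fields of the two timevals are where the microsecond values come from.
#include <sys/time.h>
#include <sys/resource.h>
#include <sys/wait.h>
#include <unistd.h>
#include <stdio.h>

int main()
{
    struct rusage ru;
    pid_t pid;

    pid = fork();
    if (pid == 0) {                      /* child: run some external command */
        execlp("sleep", "sleep", "1", (char *)NULL);
        _exit(127);
    }
    wait(NULL);                          /* reap the child */

    getrusage(RUSAGE_CHILDREN, &ru);     /* CPU time of waited-for children */
    printf("user %ld.%06lds sys %ld.%06lds\n",
           (long)ru.ru_utime.tv_sec, (long)ru.ru_utime.tv_usec,
           (long)ru.ru_stime.tv_sec, (long)ru.ru_stime.tv_usec);
    return 0;
}
If getrusage itself were only updated on 100 Hz ticks, the tv_usec fields here would only ever change in 10 ms steps.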
EDIT: This is a slightly modified version of muru's code (it should also compile on older versions of GCC): changing the value of the variable 'v' also changes the delay between measurements, which helps to spot the minimum granularity. A value around 500,000 should trigger changes of 1 ms on a relatively recent CPU (first generations of i5/i7 @ ~2.5 GHz).
#include <sys/time.h>
#include <sys/resource.h>
#include <stdio.h>

/* busy loop: tune 'v' to change the delay between measurements */
void dosomething(){
    long v = 1000000;
    while (v > 0)
        v--;
}

int main()
{
    struct rusage r1, r2;
    long t1, t2, min, max;
    int i;

    printf("t1\tt2\tdiff\n");
    for (i = 0; i < 5; i++){
        getrusage(RUSAGE_SELF, &r1);
        dosomething();
        getrusage(RUSAGE_SELF, &r2);

        /* total CPU time (user + system) in microseconds */
        t1 = r1.ru_stime.tv_usec + r1.ru_stime.tv_sec*1000000 + r1.ru_utime.tv_usec + r1.ru_utime.tv_sec*1000000;
        t2 = r2.ru_stime.tv_usec + r2.ru_stime.tv_sec*1000000 + r2.ru_utime.tv_usec + r2.ru_utime.tv_sec*1000000;
        printf("%ld\t%ld\t%ld\n", t1, t2, t2-t1);

        /* check i == 0 first so min/max are never read uninitialized */
        if (i == 0 || t2-t1 < min)
            min = t2-t1;
        if (i == 0 || t2-t1 > max)
            max = t2-t1;

        dosomething();
    }
    printf("Min = %ldus Max = %ldus\n", min, max);
    return 0;
}
However, the precision is bound to the Linux version: with Linux 3 and above the precision is on the order of microseconds, while on Linux 2.6.32 it can be around 1 ms, probably depending also on the specific distro. I suppose this difference is related to the use of high-resolution timers (HRT) instead of ticks on recent Linux versions.
In any case, the maximum precision of time's output is 1 ms on all recent and not-so-recent machines.
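As a further cross-check (my own addition, not something time itself does), the nominal tick rate and the resolution the kernel advertises for the per-process CPU clock can be queried directly:
#include <unistd.h>
#include <time.h>
#include <stdio.h>

int main()
{
    struct timespec res;

    /* USER_HZ, the unit used when the kernel reports tick-based times
       (usually 100); not necessarily the real accounting granularity */
    printf("_SC_CLK_TCK = %ld ticks/s\n", sysconf(_SC_CLK_TCK));

    /* resolution the kernel advertises for the per-process CPU clock */
    if (clock_getres(CLOCK_PROCESS_CPUTIME_ID, &res) == 0)
        printf("CLOCK_PROCESS_CPUTIME_ID resolution = %ld s %ld ns\n",
               (long)res.tv_sec, res.tv_nsec);
    return 0;
}
On older glibc versions clock_getres needs -lrt at link time. Note that _SC_CLK_TCK usually still reports 100 even on kernels whose CPU-time accounting is finer than that, so it is only the reporting unit, not proof of the actual granularity.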
Upvotes: 4
Views: 829
Reputation: 4887
The bash builtin time still uses getrusage(2). On an Ubuntu 14.04 system:
$ bash --version
GNU bash, version 4.3.11(1)-release (x86_64-pc-linux-gnu)
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software; you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
$ strace -o log bash -c 'time sleep 1'
real 0m1.018s
user 0m0.000s
sys 0m0.001s
$ tail log
getrusage(RUSAGE_SELF, {ru_utime={0, 0}, ru_stime={0, 3242}, ...}) = 0
getrusage(RUSAGE_CHILDREN, {ru_utime={0, 0}, ru_stime={0, 530}, ...}) = 0
write(2, "\n", 1) = 1
write(2, "real\t0m1.018s\n", 14) = 14
write(2, "user\t0m0.000s\n", 14) = 14
write(2, "sys\t0m0.001s\n", 13) = 13
rt_sigprocmask(SIG_BLOCK, [CHLD], [], 8) = 0
rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
exit_group(0) = ?
+++ exited with 0 +++
As the strace output shows, it calls getrusage.
As for precision, the rusage struct that getrusage uses includes timeval objects, and timeval has microsecond precision. From the manpage of getrusage:
ru_utime
This is the total amount of time spent executing in user mode,
expressed in a timeval structure (seconds plus microseconds).
ru_stime
This is the total amount of time spent executing in kernel
mode, expressed in a timeval structure (seconds plus
microseconds).
I think it does better than 10 ms. Take the following example file:
#include <sys/time.h>
#include <sys/resource.h>
#include <stdio.h>

int main()
{
    struct rusage now, then;

    getrusage(RUSAGE_SELF, &then);
    getrusage(RUSAGE_SELF, &now);

    /* print total CPU time (user + system) in microseconds at both points */
    printf("%ld %ld\n",
           then.ru_stime.tv_usec + then.ru_stime.tv_sec*1000000 + then.ru_utime.tv_usec + then.ru_utime.tv_sec*1000000,
           now.ru_stime.tv_usec + now.ru_stime.tv_sec*1000000 + now.ru_utime.tv_usec + now.ru_utime.tv_sec*1000000);
    return 0;
}
Now:
$ make test
cc test.c -o test
$ for ((i=0; i < 5; i++)); do ./test; done
447 448
356 356
348 348
347 347
349 350
The reported differences between successive calls to getrusage are 1 µs and 0 (the minimum). Since it does show a 1 µs gap, the tick must be at most 1 µs. If it had a 10 ms tick, the differences would be zero, or at least 10000 µs.
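As an extra sanity check (my own addition, not something bash's time does), getrusage can be compared against CLOCK_PROCESS_CPUTIME_ID, which clock_gettime reports with nanosecond resolution:
#include <sys/time.h>
#include <sys/resource.h>
#include <time.h>
#include <stdio.h>

int main()
{
    struct rusage ru;
    struct timespec ts;
    long us_rusage, us_clock;
    volatile long v;

    /* burn a little CPU so both counters move */
    for (v = 0; v < 5000000; v++)
        ;

    getrusage(RUSAGE_SELF, &ru);
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &ts);

    us_rusage = ru.ru_utime.tv_sec * 1000000 + ru.ru_utime.tv_usec
              + ru.ru_stime.tv_sec * 1000000 + ru.ru_stime.tv_usec;
    us_clock  = ts.tv_sec * 1000000 + ts.tv_nsec / 1000;

    /* with a 10 ms tick the two values could disagree by up to ~10000 us;
       with fine-grained accounting they should stay close */
    printf("getrusage: %ld us  CLOCK_PROCESS_CPUTIME_ID: %ld us\n",
           us_rusage, us_clock);
    return 0;
}
On older glibc versions clock_gettime also needs -lrt at link time.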
Upvotes: 5