Reputation: 1791
#include <stdio.h>
#include <time.h>

int main(int argc, const char *argv[]) {
    clock_t time1, time2;

    time1 = clock();
    //time1 = gettimeofday();
    add();                 // the function being timed
    time2 = clock();
    //time2 = gettimeofday();

    float time = ((float)time2 - (float)time1) / 1000000.0F;
    printf("%fs\n", time);
    return 0;
}
I end up getting something like 0.288640991s from this (I'm not sure of the exact number of digits). But I'm expecting a result in seconds from the line:
float time = ((float)time2 - (float)time1) / 1000000.0F;
When I divide by 10000.0F instead, I get 28 seconds of real time, which is what it actually is. So why isn't the line above giving me that?
EDIT: I'm using a Mac, which I've heard is problematic.
Upvotes: 3
Views: 223
Reputation: 145829
First, you should use CLOCKS_PER_SEC rather than a magic number. That said, on a POSIX system CLOCKS_PER_SEC is defined as one million (the same value you are using), so this is probably not your issue.
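You can verify the value on your own machine; a minimal sanity check (not part of the fix) is to print the macro:
#include <stdio.h>
#include <time.h>

int main(void) {
    /* On a POSIX system this should print 1000000 */
    printf("CLOCKS_PER_SEC = %ld\n", (long)CLOCKS_PER_SEC);
    return 0;
}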
You have to know that clock() only measures the time the processor spends running YOUR user-space code, not the time spent in the kernel or in other processes. This matters: if you make a syscall (or call a function that does), or if your process is preempted by the kernel, the result of clock() can look wrong to you.
See for example this:
clock_t time1, time2;

time1 = clock();
sleep(5);        // the process is idle here, consuming no CPU time
time2 = clock();
The difference between the clock() values taken before and after the sleep() call (correctly normalized with CLOCKS_PER_SEC) is likely to be 0 seconds, because the process burns almost no CPU time while it sleeps.
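Here is a complete version of that demonstration (a minimal sketch, assuming a POSIX system for sleep() from <unistd.h>); even though five wall-clock seconds pass, the measured CPU time stays near zero:
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void) {
    clock_t time1 = clock();
    sleep(5);                       /* no CPU consumed while sleeping */
    clock_t time2 = clock();

    /* CPU time in seconds: prints something very close to 0.000000s */
    printf("%fs\n", (double)(time2 - time1) / CLOCKS_PER_SEC);
    return 0;
}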
If you want to measure the wall-clock time elapsed during the execution of a function or a code block, you are better off using gettimeofday().
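For example, here is a minimal sketch of the same measurement done with gettimeofday() (POSIX, from <sys/time.h>); unlike the clock() version above, it reports the full five seconds:
#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

int main(void) {
    struct timeval start, end;

    gettimeofday(&start, NULL);   /* wall-clock time before */
    sleep(5);                     /* stand-in for the work being measured */
    gettimeofday(&end, NULL);     /* wall-clock time after */

    /* tv_sec holds whole seconds, tv_usec the microsecond remainder */
    double elapsed = (double)(end.tv_sec - start.tv_sec)
                   + (end.tv_usec - start.tv_usec) / 1000000.0;
    printf("%fs elapsed\n", elapsed);   /* roughly 5.000000s */
    return 0;
}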
Upvotes: 1
Reputation: 117681
clock() returns the number of clock ticks elapsed since the program was launched. To turn that into seconds, you have to divide by CLOCKS_PER_SEC, not 1000000.0F.
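Applied to the code from the question, that looks like this (a sketch; the busy-loop add() below is just a stand-in for the questioner's actual add() function):
#include <stdio.h>
#include <time.h>

/* Stand-in for the add() from the question */
static void add(void) {
    volatile long sum = 0;
    for (long i = 0; i < 100000000L; ++i)
        sum += i;
}

int main(void) {
    clock_t time1 = clock();
    add();
    clock_t time2 = clock();

    /* Divide by CLOCKS_PER_SEC instead of a hard-coded 1000000.0F */
    double seconds = (double)(time2 - time1) / CLOCKS_PER_SEC;
    printf("%fs\n", seconds);
    return 0;
}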
Shameless self-promotion: alternatively, you can use my stopwatch.h and stopwatch.c for measuring elapsed time.
Upvotes: 0