Reputation: 131
I am trying to write C code in which some numerical calculations involving time derivatives must be performed in a real-time dynamic-system setting. For this purpose, I need the most accurate possible measurement of the time elapsed from one cycle to the next, stored in a variable called "dt":
static clock_t prev_time;

prev_time = clock();
while (1) {
    clock_t current_time = clock();
    //do something
    double dt = (double)(current_time - prev_time) / CLOCKS_PER_SEC;
    prev_time = current_time;
}
However, when I test this code by integrating (continuously adding) dt, i.e.
static double elapsed_time = 0;
//the above-mentioned declaration and initialization is done here
while (1) {
    //the above-mentioned code is executed here
    elapsed_time += dt;
    printf("elapsed_time : %lf\n", elapsed_time);
}
I obtain results that are significantly lower than reality, by a factor that is constant at runtime but seems to vary when I edit unrelated parts of the code (it ranges between 1/10 and about half of the actual time).
My current guess is that clock() doesn't account for the time required for memory access (at several points throughout the code, I open an external text file and save data in it for diagnostic purposes).
But I am not sure whether this is the case, and I couldn't find any other way to measure time accurately.
Does anyone know why this is happening?
EDIT: The code is compiled and executed on a Raspberry Pi 3, and is used to implement a feedback controller.
Upvotes: 1
Views: 241
Reputation: 131
UPDATE: As it turns out, clock() is not suitable for real-time numerical calculation, as it only accounts for the time your process spends on the processor. I solved the problem by using the clock_gettime() function declared in <time.h>. It returns the global time with nanosecond resolution (of course, this comes with a certain error that depends on your clock's resolution, but it is about the best thing out there).
The answer was surprisingly hard to find, so thanks to Ian Abbott for the very useful help.
Upvotes: 3
Reputation: 64865
Does anyone know why this is happening?
clock measures CPU time, which is the number of time slices/clock ticks that the CPU spent on your process. This is useful when you are microbenchmarking, as there you often don't want to measure time that the CPU actually spent on other processes, but it's not useful when you want to measure how much time actually passes between events.
Instead, you should use time or gettimeofday to measure wall-clock time.
Upvotes: 0