Reputation: 224
I code inside a virtual machine (Ubuntu Linux) installed on Windows.
According to this page, the CLOCKS_PER_SEC value in the library on Linux should always be 1,000,000.
When I run this code:
#include <stdio.h>
#include <time.h>

int main()
{
    printf("%ld\n", (long)CLOCKS_PER_SEC);
    while (1)
        printf("%f, %f \n\n", (double)clock(), (double)clock() / CLOCKS_PER_SEC);
    return 0;
}
The first value is 1,000,000 as it should be. However, the value that should show the number of seconds does NOT increase at the normal pace: it takes 4 to 5 seconds to increase by 1.
Is this because I'm working in a virtual machine? How can I solve this?
Upvotes: 0
Views: 1565
Reputation: 213338
This is expected.
The clock() function does not return the wall time (the time that real clocks on the wall display). It returns the amount of CPU time used by your program. If your program is not consuming every possible scheduler slice, then it will increase slower than wall time; if your program consumes slices on multiple cores at the same time, it can increase faster.
So if you call clock(), then sleep(5), and then call clock() again, you'll find that clock() has barely increased at all. Even though sleep(5) waits for 5 real seconds, it doesn't consume any CPU, and CPU usage is what clock() measures.
If you want to measure wall-clock time you will want clock_gettime() (or the older gettimeofday()). You can use CLOCK_REALTIME if you want to know the civil time (e.g. "it's 3:36 PM") or CLOCK_MONOTONIC if you want to measure time intervals. In this case, you probably want CLOCK_MONOTONIC.
#include <stdio.h>
#include <time.h>

int main() {
    struct timespec start, now;
    clock_gettime(CLOCK_MONOTONIC, &start);
    while (1) {
        clock_gettime(CLOCK_MONOTONIC, &now);
        printf("Elapsed: %f\n",
               (now.tv_sec - start.tv_sec) +
               1e-9 * (now.tv_nsec - start.tv_nsec));
    }
}
The usual proscriptions against using busy-loops apply here.
Upvotes: 4