Reputation: 2026
I'm using the following code snippet to measure my algorithm's running time in terms of clock ticks:
clock_t t;
t = clock();
//run algorithm
t = clock() - t;
printf ("It took me %d clicks (%f seconds).\n",t,((float)t)/CLOCKS_PER_SEC);
However, this returns 0 when the input size is small. How is this even possible?
Upvotes: 0
Views: 365
Reputation: 159
The resolution of the standard C clock() function can vary heavily between systems and is likely too coarse to measure your algorithm. You have two options:
1. Use a timing function with higher resolution.
2. Execute your algorithm multiple times in a row and measure the total time.
For 1) you can use QueryPerformanceCounter() and QueryPerformanceFrequency() if your program runs under Windows, or clock_gettime() if it runs on Linux.
Refer to these pages for further details: QueryPerformanceCounter() and clock_gettime().
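For the Linux case, a minimal sketch of the clock_gettime() approach might look like this; run_algorithm() is just a placeholder for the code under test, and on older glibc versions you may need to link with -lrt:
#define _POSIX_C_SOURCE 200809L   /* for clock_gettime() under strict C standards */
#include <stdio.h>
#include <time.h>

void run_algorithm(void) { /* ... your code under test ... */ }

int main(void)
{
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);
    run_algorithm();
    clock_gettime(CLOCK_MONOTONIC, &end);

    /* convert the two timespecs into elapsed seconds */
    double seconds = (end.tv_sec - start.tv_sec)
                   + (end.tv_nsec - start.tv_nsec) / 1e9;
    printf("It took %f seconds.\n", seconds);
    return 0;
}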
For 2) you have to execute your algorithm a given number of times in sequence, so that the time reported by clock() is several orders of magnitude above clock()'s minimum granularity. Let's say clock() only advances in steps of 12 microseconds; then the total test run should take at least 1.2 milliseconds so that your measurement has at most 1% deviation. Otherwise, if you measure 12 microseconds, you never know whether the code ran for 12.0 microseconds or perhaps 23.9 microseconds and the next clock() tick simply hadn't happened yet. The more often your algorithm executes inside the measured interval, the more precise your measurement becomes. Also, be sure to copy-paste the calls to your algorithm for the sequential executions (see the sketch below); if you just use a counter in a for-loop, the loop overhead may noticeably influence your measurement!
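A minimal sketch of this idea, where run_algorithm() again stands in for the code under test and the call is repeated back to back as the answer suggests:
#include <stdio.h>
#include <time.h>

void run_algorithm(void) { /* ... your code under test ... */ }

int main(void)
{
    clock_t t = clock();
    run_algorithm();
    run_algorithm();
    run_algorithm();
    run_algorithm();
    /* ...keep repeating the call until the total time is well above the tick size... */
    t = clock() - t;

    printf("Total: %ld clicks (%f seconds).\n", (long)t, ((double)t) / CLOCKS_PER_SEC);
    return 0;
}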
Upvotes: 1
Reputation: 53809
The clock has some granularity, which depends on several factors such as your OS.
Therefore, it may happen that your algorithm runs so fast that the clock does not have time to update. Hence the measured duration of 0.
You can try to run your algorithm n times and divide the measured time by n to get a better idea of the time taken on small inputs.
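As a minimal sketch of this suggestion, with n and run_algorithm() chosen here purely for illustration:
#include <stdio.h>
#include <time.h>

void run_algorithm(void) { /* ... your code under test ... */ }

int main(void)
{
    const int n = 100000;            /* large enough to get a non-zero reading */
    clock_t t = clock();
    for (int i = 0; i < n; ++i)
        run_algorithm();
    t = clock() - t;

    /* divide the total measured time by n for the average per-run time */
    printf("Average per run: %g seconds.\n", ((double)t / CLOCKS_PER_SEC) / n);
    return 0;
}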
Upvotes: 3