Reputation:
I'm trying to figure out a way to accurately measure the time required to perform a search on a BST. Currently I'm using clock(), and the total number of elements is on the order of 10^5. It looks something like the following:
clock_t begin, end;
begin = clock();
...
...
...
end = clock();
difference = (end - begin) / CLOCKS_PER_SEC;
However, this does not give me the precision that I'm looking for. Are there any other libc functions that I could use?
Upvotes: 14
Views: 15572
Reputation: 445
If you are running the benchmark on an Intel CPU, you could give the RDTSC and RDTSCP instructions a try.
Here is a document about the instructions.
Upvotes: 3
Reputation: 7225
What you are doing is quite similar to what I've been doing recently.
I think the int gettimeofday(struct timeval *tv, struct timezone *tz); function suits your needs. The time info is put into the struct timeval tv, which holds the time in seconds and microseconds. The struct timeval from the man page:
struct timeval {
    time_t      tv_sec;     /* seconds */
    suseconds_t tv_usec;    /* microseconds */
};
A short example of time measurement with gettimeofday:
struct timeval time;
if (gettimeofday(&time, 0))   /* declared in <sys/time.h> */
    return -1;
long cur_time = 1000000L * time.tv_sec + time.tv_usec;
double sec = cur_time / 1000000.0;
A longer example has been simplified and wrapped as a C++ class for convenient use. The code is on my GitHub: https://github.com/lulyon/LinuxTimeCounter, and it is used in a real-world project.
Upvotes: 0
Reputation: 25908
If your library supports it, C11 has timespec_get(), which can measure down to nanoseconds, depending on your system clock's resolution.
Upvotes: 5
Reputation: 49279
Qt has QElapsedTimer, which supports measuring down to nanoseconds. I cannot testify to how accurate it is; IIRC it uses different implementations on different platforms. Sadly it is C++, which may not suit you. Also, from the documentation:
On platforms that do not provide nanosecond resolution, the value returned will be the best estimate available.
The clock() function is alright for rough measurements, but it works in the millisecond range. Contrary to its name, I don't think it measures in CPU clocks: modern processors' clock frequencies can vary quite a bit, making it impossible to determine the actual time accurately by relying solely on CPU clocks. IMO this concept dates back to the days when CPU frequencies were constant and there was no power management or "turbo boost" automatic overclocking.
EDIT: Also found this (in time.h):
int clock_gettime(clockid_t clk_id, struct timespec *tp);
... and the target struct...
struct timespec {
    time_t tv_sec;   /* seconds */
    long   tv_nsec;  /* nanoseconds */
};
... and the clock options...
CLOCK_REALTIME
System-wide realtime clock. Setting this clock requires appropriate privileges.
CLOCK_MONOTONIC
Clock that cannot be set and represents monotonic time since some unspecified starting point.
CLOCK_PROCESS_CPUTIME_ID
High-resolution per-process timer from the CPU.
CLOCK_THREAD_CPUTIME_ID
Thread-specific CPU-time clock.
Upvotes: 1
Reputation: 133859
To benchmark your algorithm, you need to repeat it enough times to get into the range of at least hundreds of milliseconds (this is standard practice). To benchmark an algorithm that runs in user space only (no threading, system calls, etc.), you'd want to use getrusage(RUSAGE_SELF, &r) and read the r.ru_utime value, which contains seconds and microseconds.
Upvotes: 3
Reputation: 2457
BST? What kind of precision do you want? Dividing by CLOCKS_PER_SEC, which POSIX defines as 10^6, should give 6-digit precision.
Do you cast the result to a double?
Try
difference = (double)(end - begin) / CLOCKS_PER_SEC;
Note that difference must be declared as a double to hold the fractional result.
Upvotes: 0