Brett

Reputation: 4166

Linux/OSX Clock Resolution with millisecond accuracy?

On Windows, I can call QueryPerformanceCounter to get high-resolution data points, but this method call is affected by issues with the BIOS, multi-core CPUs, and some AMD chips. I can call timeBeginPeriod to increase the system clock resolution in Windows down to 1ms (instead of the standard ~15ms), which means that I can just call timeGetTime and get the time at the clock resolution that I've specified.
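The Windows pattern I'm describing looks roughly like this (a sketch of my usage, error handling omitted; winmm.lib must be linked):

#include <windows.h>
#pragma comment(lib, "winmm.lib")  // timeBeginPeriod/timeGetTime live in winmm

// Request 1ms timer resolution
timeBeginPeriod(1);

DWORD start = timeGetTime();  // milliseconds since the system started

// ... real-time media work ...

DWORD elapsed_ms = timeGetTime() - start;

// Restore the previous timer resolution when done
timeEndPeriod(1);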

So! On OSX/Linux, what C++ clock resolutions should I expect? Can I get 1ms resolution, similar to Windows? Since I'm doing real-time media, I want this clock resolution to be as fine as possible: can I change this value in the kernel (as in Windows with timeBeginPeriod)? This is a high-performance application, so I want getting the current time to be a fast function call. And I'd like to know whether the clock generally drifts, and what other weird problems I can expect.

Thanks! Brett

Upvotes: 2

Views: 3095

Answers (2)

cmannett85

Reputation: 22366

To add to David's answer, if you can't use C++11, Boost's Timer classes can help you.
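For example, a minimal sketch with boost::timer::cpu_timer (Boost 1.48 or later; the millisecond conversion is my own illustration), which reports wall-clock time in nanoseconds:

#include <boost/timer/timer.hpp>

boost::timer::cpu_timer timer;

// Do stuff

boost::timer::cpu_times elapsed = timer.elapsed();

// elapsed.wall holds wall-clock time in nanoseconds
long long ms = elapsed.wall / 1000000LL;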

Upvotes: 2

David Brown

Reputation: 13526

If you are using C++11 you can use std::chrono::high_resolution_clock, which should give you the highest-resolution clock the system offers. To get a duration in milliseconds you would do:

#include <chrono>

typedef std::chrono::high_resolution_clock my_clock;

my_clock::time_point start = my_clock::now();

// Do stuff

my_clock::time_point end = my_clock::now();

// Cast the elapsed time down to millisecond resolution
std::chrono::milliseconds ms_duration =
    std::chrono::duration_cast<std::chrono::milliseconds>(end - start);
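Calling ms_duration.count() then gives the elapsed time as a plain integer number of milliseconds.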

If you aren't using C++11, the gettimeofday function works on OSX and most Linux distributions. It gives you the time since the epoch in seconds and microseconds. Its resolution is unspecified, but it should give you at least millisecond accuracy on any modern system.
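As a sketch of that approach (the millisecond conversion is my own illustration; error handling omitted):

#include <stddef.h>    // for NULL
#include <sys/time.h>

struct timeval start, end;
gettimeofday(&start, NULL);

// Do stuff

gettimeofday(&end, NULL);

// Seconds and microseconds live in separate fields, so combine
// them by hand to get the elapsed time in milliseconds
long elapsed_ms = (end.tv_sec - start.tv_sec) * 1000L
                + (end.tv_usec - start.tv_usec) / 1000L;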

Upvotes: 9
