Alex Belous

Reputation: 318

Windows timing drift of QueryPerformanceCounter (C++)

I am using the QueryPerformanceCounter() function for measuring time in my application. This function is also used to supply the timeline of my application's life. Recently I noticed that its time drifts relative to other time functions. Finally, I wrote a small test to check whether the drift is real or not. (I use the VS2013 compiler.)

    #include <Windows.h>
    #include <tchar.h>   // for _tmain / _TCHAR
    #include <chrono>
    #include <thread>
    #include <cstdio>

    static LARGE_INTEGER s_freq;

    using namespace std::chrono;

    inline double now_WinPerfCounter()
    {
        LARGE_INTEGER tt;
        if (TRUE != ::QueryPerformanceCounter(&tt))
        {
            printf("Error in QueryPerformanceCounter() - Err=%d\n", GetLastError());
            return -1.;
        }
        return (double)tt.QuadPart / s_freq.QuadPart;
    }

    inline double now_WinTick64()
    {
        return (double)GetTickCount64() / 1000.;
    }

    inline double now_WinFileTime()
    {
        FILETIME ft;
        ::GetSystemTimeAsFileTime(&ft);

        // FILETIME is a count of 100-ns intervals since 1601-01-01;
        // convert to seconds and shift to the Unix epoch (11644473600 s later).
        long long * pVal = reinterpret_cast<long long *>(&ft);

        return (double)(*pVal) / 10000000. - 11644473600LL;
    }


    int _tmain(int argc, _TCHAR* argv[])
    {
        if (TRUE != ::QueryPerformanceFrequency(&s_freq))
        {
            printf("Error in QueryPerformanceFrequency() - Err=#d\n", GetLastError());
            return -1;
        }

        // save all timetags at the beginning
        double t1_0 = now_WinPerfCounter();
        double t2_0 = now_WinTick64();
        double t3_0 = now_WinFileTime();
        steady_clock::time_point t4_0 = steady_clock::now();

        for (int i = 0;; ++i)   // forever
        {
            double t1 = now_WinPerfCounter();
            double t2 = now_WinTick64();
            double t3 = now_WinFileTime();
            steady_clock::time_point t4 = steady_clock::now();

            printf("%03d\t %.3lf %.3lf %.3lf %.3lf \n",
            i,
            t1 - t1_0,
            t2 - t2_0,
            t3 - t3_0,
            duration_cast<nanoseconds>(t4 - t4_0).count() * 1.e-9
            );

            std::this_thread::sleep_for(std::chrono::seconds(10));
        }
        return 0;
    }

The output was confusing:

    000      0.000 0.000 0.000 0.000
    ...
    001      10.001 10.000 10.002 10.002
    ...
    015      150.006 150.010 150.010 150.010
    ...
    024      240.009 240.007 240.015 240.015
    ...
    025      250.010 250.007 250.015 250.015
    ...
    026      260.010 260.007 260.016 260.016
    ...
    070      700.027 700.039 700.041 700.041

Why is there a difference? It looks like the duration of one second is not the same across the different API functions. Also, over the course of a day the difference is not constant ...

Upvotes: 2

Views: 1685

Answers (5)

ddbug

Reputation: 1539

You may want to look at the GetSystemTimePreciseAsFileTime API. It is high-precision, and the system time it reports is periodically adjusted from external time servers.
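
For illustration, a minimal sketch (my addition, assuming Windows 8 or later and _WIN32_WINNT set accordingly) of a now_WinPreciseFileTime() helper in the style of the question's test:

    #include <Windows.h>

    // High-precision, NTP-disciplined wall-clock time in seconds since
    // the Unix epoch. Requires Windows 8 / Server 2012 or later.
    inline double now_WinPreciseFileTime()
    {
        FILETIME ft;
        ::GetSystemTimePreciseAsFileTime(&ft);

        ULARGE_INTEGER uli;
        uli.LowPart  = ft.dwLowDateTime;
        uli.HighPart = ft.dwHighDateTime;

        // FILETIME counts 100-ns intervals since 1601-01-01.
        return (double)uli.QuadPart / 10000000. - 11644473600.;
    }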

Upvotes: 0

Arno

Reputation: 5194

Edit: The main reason for the drift you observe is an inaccuracy of the performance counter frequency. The system provides you with a constant value, returned by QueryPerformanceFrequency. This constant frequency is only near the true frequency; it is accompanied by an offset and possibly a drift. Modern platforms (Windows > 7, invariant TSC) have little offset and little drift.

Example: A 5 ppm offset would cause the time to advance 5 us/s, or 432 ms/day, faster than expected if you use the reported performance counter frequency to scale performance counter values to times.

General: QueryPerformanceCounter and GetSystemTimeAsFileTime use different resources (hardware). Modern platforms derive QueryPerformanceCounter from the CPU's time stamp counter (TSC), while GetSystemTimeAsFileTime uses the PIT, the ACPI PM timer, or the HPET. See the Intel® 64 and IA-32 Architectures Software Developer's Manual, Volume 3B: System Programming Guide, Part 2 for details. Drift is unavoidable when the two APIs use different hardware. However, you may extend your test code to calibrate the drift, as sketched below. Frequency-generating hardware is typically temperature sensitive, so the drift may vary depending on the load. See The Windows Timestamp Project for details.
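
As a rough illustration of that calibration idea (my own sketch, not code from the answer), you can measure how many counter ticks elapse over a long file-time baseline and compare the resulting frequency with the reported one:

    #include <Windows.h>
    #include <cstdio>

    // Current system file time as a raw count of 100-ns intervals.
    static long long fileTime100ns()
    {
        FILETIME ft;
        ::GetSystemTimeAsFileTime(&ft);
        ULARGE_INTEGER uli;
        uli.LowPart  = ft.dwLowDateTime;
        uli.HighPart = ft.dwHighDateTime;
        return (long long)uli.QuadPart;
    }

    int main()
    {
        LARGE_INTEGER freq, qpc0, qpc1;
        ::QueryPerformanceFrequency(&freq);

        ::QueryPerformanceCounter(&qpc0);
        long long ft0 = fileTime100ns();

        ::Sleep(60 * 1000);   // the longer the baseline, the better the estimate

        ::QueryPerformanceCounter(&qpc1);
        long long ft1 = fileTime100ns();

        double elapsed  = (ft1 - ft0) / 10000000.;   // seconds of file time
        double measured = (qpc1.QuadPart - qpc0.QuadPart) / elapsed;
        double ppm      = (measured / freq.QuadPart - 1.) * 1.e6;

        printf("reported %lld Hz, measured %.1f Hz, offset %+.2f ppm\n",
               freq.QuadPart, measured, ppm);
        return 0;
    }

Scaling counter values by the measured frequency instead of the reported one removes most of the systematic part of the drift; a longer baseline and repeated measurements improve the estimate.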

Upvotes: 2

Hans Passant

Reputation: 941515

This is normal; affordable clocks never have infinite precision. GetSystemTime() and GetTickCount() are periodically re-calibrated from an Internet time server, time.windows.com by default, which uses an unaffordable atomic clock to keep time. Adjustments to catch up or slow down are gradual, in order to avoid giving software a heart attack. The time they report will be off from the Earth's rotation by at most a few seconds, and the periodic re-calibration limits long-term drift error.

But QPC is not calibrated; its resolution is far too high to allow that kind of adjustment without affecting the short-interval measurements it is normally used for. It is derived from a frequency source available on the chipset, picked by the system builder, and subject to typical electronic part tolerances; temperature in particular affects accuracy and causes variable drift over longer periods. QPC is therefore only suitable for measuring relatively short intervals.

Inevitably the two clocks can't be the same, and you will see them drift apart. You'll have to pick one or the other to be the "master" clock. If long-term accuracy is important then pick the calibrated source; if high resolution but not accuracy is important then QPC. If both are important then you need custom hardware, like a GPS clock.
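
One possible compromise (a sketch of my own, not code from the answer) is to anchor QPC to the calibrated system time and use the counter only for short-interval resolution, re-anchoring periodically to bound the long-term drift:

    #include <Windows.h>

    struct HybridClock
    {
        double    anchorUnix;   // calibrated system time at the anchor point
        long long anchorQpc;    // QPC ticks at the anchor point
        long long freq;         // reported QPC frequency

        // Re-run periodically (e.g. once a minute) to bound long-term drift.
        void anchor()
        {
            LARGE_INTEGER f, c;
            FILETIME ft;
            ::QueryPerformanceFrequency(&f);
            ::QueryPerformanceCounter(&c);
            ::GetSystemTimeAsFileTime(&ft);

            ULARGE_INTEGER uli;
            uli.LowPart  = ft.dwLowDateTime;
            uli.HighPart = ft.dwHighDateTime;

            anchorUnix = (double)uli.QuadPart / 10000000. - 11644473600.;
            anchorQpc  = c.QuadPart;
            freq       = f.QuadPart;
        }

        // High-resolution "now", expressed on the calibrated time line.
        double now() const
        {
            LARGE_INTEGER c;
            ::QueryPerformanceCounter(&c);
            return anchorUnix + (double)(c.QuadPart - anchorQpc) / freq;
        }
    };

Between anchor points the timestamps have QPC resolution; at each re-anchor the accumulated drift against the calibrated clock is discarded, at the cost of a small step in the reported time.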

Upvotes: 3

Phil Lello

Reputation: 8639

It's probably a bad idea to use multiple methods of measurement: the different functions have different resolutions, so you'll effectively be seeing noise from values getting rounded to their precision. And of course a small amount of time passes while setting t1..t4, since the functions themselves take time to execute, so some discrepancy is unavoidable.

Upvotes: 0

Kurokawa Masato

Reputation: 136

Probably, the different functions measure time in different ways.

QueryPerformanceCounter - retrieves high-resolution time stamps, for measuring time intervals.

GetSystemTimeAsFileTime - retrieves the current system date and time in UTC format.

So it is not quite correct to use these functions interchangeably and compare the times they return.

Upvotes: 0
