zacharoni16

Reputation: 307

clock_gettime() Jitter?

I am using clock_gettime() (from time.h) on Linux 2.6 to control timing in my thread loop. I need a 500 ms period within +/- 5 ms. It gives me 500 ms for a while, then starts drifting or jittering by +/- 30 ms:

(plot of the measured period, showing the +/- 30 ms drift/jitter)

I am calling it with CLOCK_REALTIME. Is there any way to reduce this deviation? I'm simply counting every ms with it, and once the counter hits 500 I fire off an interrupt.
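A simplified sketch of what my loop does (assuming the per-ms tick comes from a 1 ms nanosleep(); printf() stands in for firing the interrupt):

#include <stdio.h>
#include <time.h>

int main( void )
{
    const struct timespec one_ms = { 0, 1000000L };  /* intended 1 ms tick */
    int counter = 0;

    while( 1 )
    {
        nanosleep( &one_ms, NULL );  /* may sleep longer than 1 ms */

        if( ++counter >= 500 )
        {
            counter = 0;
            printf( "fire\n" );      /* stand-in for the real interrupt */
        }
    }
    return 0;
}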

This is also within the Qt 4.3 framework. QTimer seemed even more jittery than this.

Upvotes: 1

Views: 481

Answers (1)

paddy

Reputation: 63471

Based on the wording of your question, I have a feeling you might be accumulating your time differences incorrectly.

Try this approach:

#include <stdio.h>
#include <time.h>

/* Difference between two timespecs, in whole milliseconds. */
long elapsed_milli( struct timespec *t1, struct timespec *t2 )
{
    return (long)(t2->tv_sec - t1->tv_sec) * 1000L
         + (t2->tv_nsec - t1->tv_nsec) / 1000000L;
}

int main( void )
{
    const long period_milli = 500;
    struct timespec ts_last;
    struct timespec ts_next;
    const struct timespec ts_sleep = { 0, 1000000L };  /* 1 ms poll interval */

    clock_gettime( CLOCK_REALTIME, &ts_last );

    while( 1 )
    {
        nanosleep( &ts_sleep, NULL );
        clock_gettime( CLOCK_REALTIME, &ts_next );
        long elapsed = elapsed_milli( &ts_last, &ts_next );

        if( elapsed >= period_milli )
        {
            printf( "Elapsed : %ld\n", elapsed );

            /* Advance ts_last by exactly one period (the time the period
             * was expected to end) rather than setting it to ts_next, so
             * a late wake-up is absorbed by the next period instead of
             * accumulating. */
            ts_last.tv_nsec += period_milli * 1000000L;
            if( ts_last.tv_nsec >= 1000000000L )
            {
                ts_last.tv_nsec -= 1000000000L;
                ts_last.tv_sec++;
            }
        }
    }
    return 0;
}

Every time the required period has elapsed, the "previous" time is advanced by exactly one period, to the time at which that period was expected to elapse rather than the time it actually did. That way a late wake-up shortens the next measured period instead of pushing everything later, so the error doesn't accumulate into drift. This example uses a 1 ms sleep between each poll, which might be over the top.
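If it's available on your platform, you can also push this bookkeeping into the kernel with clock_nanosleep() and TIMER_ABSTIME, sleeping until an absolute deadline instead of polling every 1 ms. A sketch of the same idea (assumes POSIX clocks are available, i.e. _POSIX_C_SOURCE >= 200112L, and may need -lrt on older glibc):

#define _POSIX_C_SOURCE 200112L
#include <stdio.h>
#include <time.h>

int main( void )
{
    struct timespec next;
    clock_gettime( CLOCK_MONOTONIC, &next );

    while( 1 )
    {
        /* Advance the absolute deadline by exactly 500 ms. */
        next.tv_nsec += 500000000L;
        if( next.tv_nsec >= 1000000000L )
        {
            next.tv_nsec -= 1000000000L;
            next.tv_sec++;
        }

        /* Sleep until the deadline itself; a late wake-up doesn't move
         * the next deadline, so the error can't accumulate. */
        clock_nanosleep( CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL );

        printf( "tick\n" );
    }
    return 0;
}

Using CLOCK_MONOTONIC here also means a step of the wall clock (e.g. by NTP) can't distort the period, which is a risk with CLOCK_REALTIME.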

Upvotes: 1
