Matt

Reputation: 967

How do I get the most accurate realtime periodic interrupts in Linux?

I want to be interrupted at frequencies that are powers of ten, so enabling interrupts from /dev/rtc isn't ideal (the RTC only supports power-of-two frequencies). I'd like to sleep 1 millisecond or 250 microseconds between interrupts.

Enabling periodic interrupts from /dev/hpet works pretty well, but it doesn't seem to work on some machines. Obviously I can't use it on machines that don't actually have a HPET. But I can't get it working on some machines that have hpet available as a clocksource either. For example, on a Core 2 Quad, the example program included in the kernel documentation fails at HPET_IE_ON when set to poll.

It would be nicer to use the itimer interface provided by Linux instead of interfacing with the hardware device driver directly. And on some systems, the itimer provides periodic interrupts that are more stable over time. That is, since the hpet can't interrupt at exactly the frequency I want, the interrupts start to drift from wall time. But I'm seeing some systems sleep way longer (10+ milliseconds) than they should using an itimer.

Here's a test program using itimer for interrupts. On some systems it will only print one warning that it slept about 100 microseconds over the target time. On others, it will print batches of warnings that it slept 10+ milliseconds over the target time. Compile with -lrt and run with sudo chrt -f 50 [name]

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <error.h>
#include <errno.h>
#include <sys/ioctl.h>
#include <sys/types.h>
#include <sys/time.h>
#include <time.h>
#include <signal.h>
#include <fcntl.h>
#define NS_PER_SECOND 1000000000LL
#define TIMESPEC_TO_NS( aTime ) ( ( NS_PER_SECOND * ( ( long long int ) aTime.tv_sec ) ) \
    + aTime.tv_nsec )

int main()
{
    // Block alarm signal, will be waited on explicitly
    sigset_t lAlarm;
    sigemptyset( &lAlarm );
    sigaddset( &lAlarm, SIGALRM  );
    sigprocmask( SIG_BLOCK, &lAlarm, NULL );

    // Set up periodic interrupt timer
    struct itimerval lTimer;
    int lReceivedSignal = 0;

    lTimer.it_value.tv_sec = 0;
    lTimer.it_value.tv_usec = 250;
    lTimer.it_interval = lTimer.it_value;

    // Start timer
    if ( setitimer( ITIMER_REAL, &lTimer, NULL ) != 0 )
    {
        error( EXIT_FAILURE, errno, "Could not start interval timer" );
    }
    struct timespec lLastTime;
    struct timespec lCurrentTime;
    clock_gettime( CLOCK_REALTIME, &lLastTime );
    while ( 1 )
    {
        //Periodic wait
        if ( sigwait( &lAlarm, &lReceivedSignal ) != 0 )
        {
            error( EXIT_FAILURE, errno, "Failed to wait for next clock tick" );
        }
        clock_gettime( CLOCK_REALTIME, &lCurrentTime );
        long long int lDifference = 
            ( TIMESPEC_TO_NS( lCurrentTime ) - TIMESPEC_TO_NS( lLastTime ) );
        if ( lDifference  > 300000 )
        {
            fprintf( stderr, "Waited too long: %lld\n", lDifference  );
        }
        lLastTime = lCurrentTime;
    }
    return 0;
}

Upvotes: 11

Views: 15807

Answers (2)

Rakis

Reputation: 7864

Regardless of the timing mechanism you use, accuracy boils down to a combination of when your task's run status changes, when the kernel scheduler is invoked (typically 100 or 1000 times per second), and CPU contention from other processes.

The mechanism I've found that achieves the "best" timing on Linux (Windows as well) is to do the following:

  1. Place the process on a Shielded CPU
  2. Have the process initially sleep for 1ms. If on a shielded CPU, your process should wake right on the OS scheduler's tick boundary
  3. Use either the RDTSC directly or CLOCK_MONOTONIC to capture the current time. Use this as the zero time for calculating the absolute wakeup times for all future periods. This will help minimize drift over time. It can't be eliminated completely since hardware timekeeping fluctuates over time (thermal issues and the like) but it's a pretty good start.
  4. Create a sleep function that sleeps until 1ms short of the target absolute wakeup time (as that's the finest granularity the OS scheduler can reliably deliver), then burn the CPU in a tight loop, continually checking the RDTSC/CLOCK_MONOTONIC value until the target passes.

It takes some work but you can get pretty good results using this approach. A related question you may want to take a look at can be found here.

Upvotes: 5

SzG

Reputation: 12619

I've had the same problem with a bare setitimer() setup. The problem is that, by default, your process is scheduled under the SCHED_OTHER policy at static priority level 0. This means you're in a pool with all other processes, and dynamic priorities decide who runs. The moment there is some system load, you get latencies.

The solution is to use the sched_setscheduler() system call to switch to the SCHED_FIFO policy and raise your static priority to at least 1. It causes a dramatic improvement.

#include <sched.h>
...
int main(int argc, char *argv[])
{
    ...
    struct sched_param schedp;
    schedp.sched_priority = 1;
    if ( sched_setscheduler( 0, SCHED_FIFO, &schedp ) == -1 )
        perror( "sched_setscheduler" );
    ...
}

You have to run as root (or have CAP_SYS_NICE) to be able to do this. The alternative is to use the chrt program to do the same, but then you must know the PID of your RT process.

sudo chrt -f -p 1 <pid>

See my blog post about it here.

Upvotes: 8
