JBL

Reputation: 12907

Enforcing real-time constraint in multi-threaded context

I'm currently developing a GUI in C++ for a program that must poll, process and then display data (with a plot) in real time.

The part I'm struggling with is the code, running in a separate thread, that actually polls the data from some external hardware and then processes it. I want the function that does this work to be called periodically with a fixed period (i.e. 1/20 of a second between calls).

I don't really know whether it's possible, or how, to enforce that this function is called periodically, exactly 20 times per second...

After reading a bit about real-time programming, and drawing notably on what I've learned from game development and the concept of the main game loop, my first approach would be a loop that adjusts how long it sleeps based on how much time the polling and processing took:

 while(running){
     //Timestamp at the start of this iteration (monotonic clock,
     //requires <chrono> and <thread>)
     auto start = std::chrono::steady_clock::now();

     //Retrieve the data from the hardware
     pollData();

     //Process the data retrieved
     processData();

     //Queue the data wherever it is needed:
     //- plotting widget of the GUI
     //- recording object
     dispatchProcessedData();

     //Sleep for whatever is left of the 50 ms period
     auto elapsed = std::chrono::steady_clock::now() - start;
     auto remaining = std::chrono::milliseconds(50) - elapsed;

     if(remaining > std::chrono::steady_clock::duration::zero()){
         std::this_thread::sleep_for(remaining);
     }
 }

but this seems flawed: it may cause a drifting problem if the time elapsed during one iteration ends up greater than the period I want to stick to.

This could come from the computations taking too much time (I highly doubt it, but I'm not experienced enough to be sure, and profiling could help rule this out, or at least establish that the code really does take too long), but I also wonder whether running multiple threads could lead to the same issue because of thread scheduling (again, I'm quite new to multi-threading, so I could be completely wrong).
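From what I've read, one way to limit this drift might be to sleep until an absolute deadline rather than for the remaining time, something along these lines (just a sketch of the idea, and I'm not sure it addresses the scheduling concern):

 //Variant of the loop above using absolute deadlines, so that one slow
 //iteration doesn't push back all the following ones
 auto next_tick = std::chrono::steady_clock::now();
 const auto period = std::chrono::milliseconds(50);  //20 Hz

 while(running){
     pollData();
     processData();
     dispatchProcessedData();

     next_tick += period;                        //fixed schedule, no accumulated error
     std::this_thread::sleep_until(next_tick);   //returns immediately if we overran
 }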

Therefore, I'd like to ask: is this kind of loop a reasonable way to enforce the 20 Hz constraint, or is there a better/standard approach? And can thread scheduling on a regular OS prevent me from holding the period reliably?

(I apologize if I have missed any obvious or easy-to-find documentation on this topic.)

Thanks!

Upvotes: 4

Views: 3829

Answers (2)

Matthias

Reputation: 8180

As @Mark suggested, the best option would be to use a real-time OS.

If a kind of "soft" real-time is good enough, you should create a thread with a high priority (e.g. the real-time FIFO scheduling class on Linux, or something like that) and set up a cyclic timer that wakes up your thread, e.g. via a semaphore.
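For illustration, here is a minimal Linux-only sketch of that idea, using a periodic timerfd as the wake-up mechanism instead of a semaphore (error handling is omitted, SCHED_FIFO usually requires elevated privileges, and pollData() etc. are the placeholders from the question):

 //Sketch only: a SCHED_FIFO thread woken every 50 ms by a timerfd
 #include <cstdint>
 #include <ctime>
 #include <unistd.h>
 #include <pthread.h>
 #include <sched.h>
 #include <sys/timerfd.h>

 void pollData();                //placeholders from the question
 void processData();
 void dispatchProcessedData();

 void* pollingThread(void*)
 {
     //Real-time FIFO scheduling with a high priority for this thread
     sched_param sp{};
     sp.sched_priority = 80;
     pthread_setschedparam(pthread_self(), SCHED_FIFO, &sp);

     //Periodic timer: first expiry after 50 ms, then every 50 ms
     int tfd = timerfd_create(CLOCK_MONOTONIC, 0);
     itimerspec spec{};
     spec.it_value.tv_nsec    = 50'000'000;
     spec.it_interval.tv_nsec = 50'000'000;
     timerfd_settime(tfd, 0, &spec, nullptr);

     for(;;){
         uint64_t expirations = 0;
         read(tfd, &expirations, sizeof expirations);  //blocks until the timer fires
         //expirations > 1 would mean missed periods (overrun)
         pollData();
         processData();
         dispatchProcessedData();
     }
 }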

In addition, it is a good idea to decouple plotting from processing.

Upvotes: 1

syam

Reputation: 15089

It is very hard to enforce such a constraint unless you have an external, reliable source of interrupts.

I believe the closest thing you can do on a consumer OS is:

  • Ensure you're using a real-time kernel (e.g. using Linux's RT patch) to minimize the timing variations.
  • Set your polling thread to the highest possible priority.
  • Only poll and dispatch in your polling thread, leave any processing to a lower-priority thread so that computations don't affect your polling.
  • Use high precision timers (on Linux you can have nanoseconds instead of milliseconds) to reduce the error margin.
  • Use a lock-free queue to communicate between your polling and processing threads so that you don't have to pay the cost of a mutex in the polling thread (but with only 20 samples per second this is probably irrelevant); a sketch of this setup follows the list.
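To make the last two points a bit more concrete, here is a rough sketch of a poll-only thread handing samples to a lower-priority processing thread through a lock-free single-producer/single-consumer queue (Boost.Lockfree is assumed; Sample, pollOneSample() and waitForNextTick() are made-up names, not part of any specific API):

 //Sketch: poll-only thread + lock-free SPSC queue + separate processing thread
 #include <atomic>
 #include <chrono>
 #include <thread>
 #include <boost/lockfree/spsc_queue.hpp>

 struct Sample { double value; };

 Sample pollOneSample();         //hypothetical: reads one sample from the hardware
 void   process(const Sample&);  //hypothetical: the heavy computation
 void   waitForNextTick();       //hypothetical: blocks until the next 50 ms tick

 boost::lockfree::spsc_queue<Sample, boost::lockfree::capacity<1024>> queue;
 std::atomic<bool> running{true};

 void pollingLoop()              //runs at the highest priority: poll and hand off only
 {
     while(running){
         waitForNextTick();
         queue.push(pollOneSample());  //lock-free; returns false if the queue is full
     }
 }

 void processingLoop()           //runs at a lower priority, free to take longer than 50 ms
 {
     while(running){
         Sample s;
         while(queue.pop(s))
             process(s);
         std::this_thread::sleep_for(std::chrono::milliseconds(5));
     }
 }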

At least that's what we did for our product, which polls at 100 Hz (10 ms) on a 400 MHz CPU. You'll never completely get rid of drifting this way, but it stays quite minimal.

Upvotes: 3
