Lunar Mushrooms

Reputation: 8918

Unix domain socket : Make Latency constant

Issue summary: AF_UNIX stable sending, bursty receiving.

I have an application B that receives data over a Unix domain datagram socket. There is a peer application A that sends data to it. Both A and B run continuously (and are SCHED_FIFO). Application B also prints the time of reception.

The peer application A sends data at varying intervals (varying on the order of milliseconds only). Ideally (what I expect), each packet's reception time should be its send time plus a fixed delay. For example:

A sends in time            :  5ms     10ms      15ms     21ms   30ms   36ms
B should receive in time   :  5+x ms  10+x ms   15+x ms  21+x ms ... 

Where x is a constant delay.

But when I experimented what I observe in B is :

A sends in time            :  5ms     10ms      15ms     21ms   30ms   36ms
B received in time         :  5+w ms  10+x ms   15+y ms  21+z ms ... 

(where w, x, y, z are different delays), so I cannot predict the reception time from the send time.

Is it because some buffering is involved in the Unix domain socket? Please suggest a workaround so that the reception time is predictable from the send time. I need 1 millisecond accuracy.

(I am using a vanilla Linux 3.0 kernel.)

Upvotes: 4

Views: 2001

Answers (1)

John Zwinck

Reputation: 249123

Since you are using a blocking recv(), your program gets descheduled whenever no datagram is available, and the latency of being woken up again when one arrives is variable; that wakeup jitter is what you are seeing. This is bad for your use case: you want your program to stay hot on the CPU. So make your recv() non-blocking and handle EAGAIN by simply busy-waiting. This will consume 100% of one core, but I think you'll find it helps you achieve your goal.
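Here is a minimal sketch of that loop in C, assuming a datagram socket bound to a placeholder path /tmp/b.sock (the path, buffer size, and error handling are illustrative, not taken from your application):

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_UNIX, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_un addr;
    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, "/tmp/b.sock", sizeof(addr.sun_path) - 1); /* placeholder path */
    unlink(addr.sun_path);
    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) { perror("bind"); return 1; }

    char buf[1024];
    for (;;) {
        /* MSG_DONTWAIT makes this one call non-blocking, so recv()
           returns immediately with EAGAIN when no datagram is queued. */
        ssize_t n = recv(fd, buf, sizeof(buf), MSG_DONTWAIT);
        if (n >= 0) {
            /* Got a datagram: timestamp and process it here. */
        } else if (errno == EAGAIN || errno == EWOULDBLOCK) {
            continue; /* busy-wait: stay on the CPU instead of sleeping */
        } else {
            perror("recv");
            break;
        }
    }
    close(fd);
    return 0;
}

If you prefer, you can set O_NONBLOCK once on the socket with fcntl() instead of passing MSG_DONTWAIT on every call; the effect on the loop is the same.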

Upvotes: 1
