Reputation: 68
Note: TCP_NODELAY didn't seem to solve the issue described below.
The main issue: TCP sockets have bad latency (100 ms or more) unless they're streaming data constantly. If I send a single message on an otherwise idle socket, the latency is 100 ms or worse. I tried sending garbage data at regular intervals (every 0.05 seconds) and putting my actual messages in between, and the latency improved immensely.
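For reference, the workaround described above can be sketched roughly as follows: a background thread writes one padding byte every 50 ms so the link never goes idle, while the real messages go over the same socket. This is a minimal, self-contained illustration (the local echo server, the framing, and all names are mine, not from the question), not the asker's actual code:

```python
import socket
import threading
import time

PAD_INTERVAL = 0.05   # 50 ms, the interval used in the question
PADDING = b"\x00"     # one junk byte per interval

def keep_link_busy(sock, stop):
    """Send a padding byte every PAD_INTERVAL until stop is set."""
    while not stop.is_set():
        sock.sendall(PADDING)
        time.sleep(PAD_INTERVAL)

def demo():
    # Local sink server so the sketch is runnable on one machine.
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    port = srv.getsockname()[1]

    received = bytearray()
    def server():
        conn, _ = srv.accept()
        while True:
            chunk = conn.recv(4096)
            if not chunk:
                break
            received.extend(chunk)
    threading.Thread(target=server, daemon=True).start()

    cli = socket.create_connection(("127.0.0.1", port))
    cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # disable Nagle

    stop = threading.Event()
    t = threading.Thread(target=keep_link_busy, args=(cli, stop), daemon=True)
    t.start()

    cli.sendall(b"real message")   # the actual payload rides the busy link
    time.sleep(0.2)                # let a few padding intervals elapse
    stop.set()
    t.join()
    cli.close()
    time.sleep(0.1)                # let the server drain the last bytes
    return bytes(received)

if __name__ == "__main__":
    print(demo())
```

Over a real wireless link the padding traffic is what keeps the adapter's radio awake; on loopback it has no effect, so this only demonstrates the mechanics, not the latency improvement.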
The question: Why do TCP messages have bad latency when sent alone, but good latency when the socket is streaming large amounts of data?
Some other things:
I tried datagrams (UDP) and still got bad latency.
My LAN ping between computers is pretty bad, which may be related. I'm using an Apple AirPort Express. I've also tried creating wireless networks directly between my Macs, but I still get a bad ping. I don't know whether that's just the result of the ping command being given a really low priority, but it does seem consistent with the socket latency.
Upvotes: 4
Views: 691
Reputation: 36494
This may be caused by power-saving features in the wireless network adapter. If the link is quiet, an adapter configured to reduce power usage will power off its radio most of the time, turning it back on only once per beacon interval (100 ms is a common default) to check the beacon message for a flag indicating whether there are any packets waiting for it to receive.
This hypothesis could be tested by modifying your access point's beacon interval, by comparing against results over a wired connection, and/or by disabling all power-saving features of the wireless adapters. If your machines have no wired network ports, consider testing with USB-to-wired adapters.
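One way to run that comparison is to measure TCP round-trip times directly, once over the wireless path and once over a wired one (or with power saving disabled), keeping the link otherwise idle so the power-saving delay has a chance to show up. Below is a rough sketch of such a probe; the local echo server exists only to make it self-contained, and in a real test you would point `measure_rtts` at a host on the other machine:

```python
import socket
import threading
import time

def echo_server():
    """Tiny loopback echo server so the sketch is runnable standalone."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    def run():
        conn, _ = srv.accept()
        while True:
            data = conn.recv(64)
            if not data:
                break
            conn.sendall(data)
    threading.Thread(target=run, daemon=True).start()
    return srv.getsockname()

def measure_rtts(addr, samples=10, gap=0.2):
    """Return a list of TCP round-trip times in milliseconds.

    The gap between probes keeps the link otherwise idle, which is
    the case where a power-saving radio should add ~100 ms."""
    sock = socket.create_connection(addr)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    rtts = []
    for _ in range(samples):
        start = time.perf_counter()
        sock.sendall(b"ping")
        sock.recv(64)  # wait for the echo
        rtts.append((time.perf_counter() - start) * 1000.0)
        time.sleep(gap)
    sock.close()
    return rtts

if __name__ == "__main__":
    addr = echo_server()
    print("RTTs (ms):", [round(r, 2) for r in measure_rtts(addr, samples=5)])
```

If the hypothesis holds, the idle-link RTTs over wireless should cluster near the beacon interval, while the wired (or power-saving-disabled) RTTs should be a few milliseconds or less.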
Upvotes: 2