Reputation: 2939
I am working on an open source package that uses sockets (TCP) to exchange data. On Linux it can use either local Unix domain sockets or TCP sockets. When I compare the performance of Unix sockets to TCP over the local loopback interface, I find the Unix sockets to be 50x faster. Everything else is identical.
Is this performance difference to be expected, or does it indicate an error somewhere in the code?
Under most conditions the data exchange is bi-directional: usually a one-byte command (uint8_t) that says what's happening, followed by a bunch of data, typically around 1 kB.
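For reference, a minimal sketch of what that exchange might look like, assuming the command byte and the payload go out in separate write() calls (the function and variable names here are hypothetical, not from the actual package):

    #include <unistd.h>
    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical sketch: a one-byte command, then ~1 kB of data,
       sent as two separate writes (error handling omitted). */
    void send_command(int fd, uint8_t cmd, const void *data, size_t len)
    {
        write(fd, &cmd, 1);    /* 1-byte command saying what's happening */
        write(fd, data, len);  /* payload, typically around 1 kB */
    }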
Upvotes: 1
Views: 839
Reputation: 310957
Your protocol is practically certain to run into the Nagle algorithm if you send the initial byte separately. Use buffering, or writev(), or sendmsg(), to send it all at once.
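For example, a minimal sketch using writev() to coalesce the command byte and the payload into a single call, so they can leave in one TCP segment instead of a lone 1-byte segment that then stalls on Nagle plus delayed ACK (the names are illustrative, not from your code):

    #include <sys/types.h>
    #include <sys/uio.h>
    #include <stdint.h>
    #include <stddef.h>
    #include <errno.h>

    /* Gather the 1-byte command and the payload into one writev() call.
       Returns bytes written, or -1 on error; a short write is still
       possible and would need a retry loop in production code. */
    ssize_t send_command(int fd, uint8_t cmd, const void *payload, size_t len)
    {
        struct iovec iov[2] = {
            { .iov_base = &cmd,            .iov_len = 1   },
            { .iov_base = (void *)payload, .iov_len = len },
        };
        ssize_t n;
        do {
            n = writev(fd, iov, 2);   /* one syscall, one segment (typically) */
        } while (n == -1 && errno == EINTR);
        return n;
    }

With the command and data coalesced like this, the TCP path no longer waits a round trip between the command byte and the payload, which would account for a gap of the size you measured.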
Upvotes: 1