Serguei Fedorov

Reputation: 7913

socket.send appends packets together even after TcpClient.NoDelay = true;

This is starting to frustrate me quite a bit because no matter what I do, my packets get shoved together, making it annoying to differentiate between them on the other end (objects are easy, variable strings can be a pain). Is there a way to avoid this?

Here is some crude code I am working with (yes, initialization happens here; this is rough test code, so sorry about the poor programming conventions).

    TcpClient sender = new TcpClient();  // declaration added for completeness
    string message = "hello";            // assumed from the received output below

    sender.Connect("localhost", 8523);
    sender.Client.SendTimeout = 1000;
    sender.NoDelay = true;
    byte[] buffer = ASCIIEncoding.ASCII.GetBytes(message);
    sender.Client.Send(buffer);
    buffer = ASCIIEncoding.ASCII.GetBytes("DISCONNECT");
    sender.Client.Send(buffer);
    sender.Client.Close();

As you can see, two sends happen one after the other, and the socket decides that it would be optimal to send them together. That's great, but I don't want that. Thus on the other end it ends up being

    helloDISCONNECT

I need the two to be received separately, as they are automatically loaded into a blocking queue and I don't want to deal with object sizes and string parsing.

Upvotes: 3

Views: 4482

Answers (4)

Doc

Reputation: 358

In addition to sender.NoDelay = true, the trick that reliably worked for me was adding a delay after each sender.Client.Send(...) in the form of Thread.Sleep(50).

The exact number was not researched; it just happened to work from the start (I guesstimated 50 ms as a reasonable time to wait without creating a backlog in my send queue), so be sure to test and adjust this for your operating environment.

Digression: The application receiving my data is a CCTV recorder that can overlay text on the video, so that each TCP packet becomes a different line on the screen. I was forced to do something about Nagle's algorithm, or else my "lines" were shown bunched up together (not to mention additional artifacts that obviously had something to do with the machine's use of buffers).

Upvotes: 1

Len Holgate

Reputation: 21616

TCP presents a stream of bytes to the receiver.

Until the stream is closed, each read of a TCP socket can return between 1 and the size of the buffer that you have given it. You should code accordingly.

TCP does not know, or care, about your program's concept of message framing. It does not care that you send 3 "messages" with separate calls to send. A valid TCP implementation could return a single byte for each call to a function that reads from the stream or it could buffer up as much as it feels like before returning bytes to you. You should code accordingly.

If you want to deal in terms of messages when sending over TCP then you MUST apply your own framing and you MUST deal with that framing yourself when you read your messages. This generally means either a length prefix of a known size or a message terminator.

I deal with this in more detail here, and though the code is in C++ the concepts still apply.
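A minimal sketch of the length-prefix framing described above, written in Python for brevity (the same pattern translates directly to C#'s Socket/NetworkStream; names like send_message and recv_exactly are illustrative, not from any library): each message is preceded by a 4-byte big-endian length, and the reader loops because a single read may return fewer bytes than requested.

```python
import socket
import struct

def send_message(sock, payload: bytes) -> None:
    # Prefix each message with its length as a 4-byte big-endian integer.
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def recv_exactly(sock, n: int) -> bytes:
    # A single recv may return anywhere from 1 to n bytes, so loop until
    # exactly n bytes have been collected.
    chunks = []
    while n > 0:
        chunk = sock.recv(n)
        if not chunk:
            raise ConnectionError("socket closed mid-message")
        chunks.append(chunk)
        n -= len(chunk)
    return b"".join(chunks)

def recv_message(sock) -> bytes:
    (length,) = struct.unpack(">I", recv_exactly(sock, 4))
    return recv_exactly(sock, length)

if __name__ == "__main__":
    # Two back-to-back sends are recovered as two distinct messages,
    # even though TCP delivers them as one undifferentiated byte stream.
    a, b = socket.socketpair()
    send_message(a, b"hello")
    send_message(a, b"DISCONNECT")
    print(recv_message(b))  # b'hello'
    print(recv_message(b))  # b'DISCONNECT'
```

The receiver never relies on how the bytes were grouped on the wire; the length prefix alone defines the message boundary.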

Upvotes: 2

jhonkola

Reputation: 3435

If you want to be certain to receive messages separately over TCP with the constraints you mentioned, you should do a shutdown of the connection after each message. If you are not expecting a reply, you can shut down both read and write; if you want a reply, shut down writing only. Shutting down the write side causes the receiver's read to return 0 bytes, so the receiver can detect end of message. HTTP, for example, works like this.

The tradeoff here is that you need to open a new TCP connection for each message which causes some overhead. However, in many applications this does not matter.

Disclaimer: This is easy to implement with the BSD socket API. I have very little experience with C# and don't know whether it offers a suitable API for performing these low-level socket operations.
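(.NET does expose this via Socket.Shutdown(SocketShutdown.Send).) A Python sketch of the one-message-per-connection pattern, with illustrative function names: the sender shuts down its write side after the payload, and the receiver treats end-of-stream as the message boundary.

```python
import socket

def send_one_message(sock, payload: bytes) -> None:
    # One message per connection: send, then shut down the write side.
    # The peer's recv will eventually return b"", marking end of message.
    sock.sendall(payload)
    sock.shutdown(socket.SHUT_WR)

def recv_whole_message(sock) -> bytes:
    chunks = []
    while True:
        chunk = sock.recv(4096)
        if not chunk:  # peer shut down its write side: message complete
            return b"".join(chunks)
        chunks.append(chunk)

if __name__ == "__main__":
    a, b = socket.socketpair()
    send_one_message(a, b"hello")
    print(recv_whole_message(b))  # b'hello'
```

The tradeoff mentioned above applies: each message costs a fresh connection, which is fine for low message rates but wasteful for chatty protocols.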

Upvotes: 0

payo

Reputation: 4561

It appears you expect:

    sender.NoDelay = true;

to essentially cause:

    sender.Client.Send(buffer);

...to be a blocking call until the bits are off your NIC. Unfortunately, this is not the case. NoDelay only tells the network layer not to wait, but the underlying Send call has some asynchronous behavior, and your program's thread runs quickly enough that the network layer does not wake up before you've effectively appended the second string.

Read more about NoDelay here; the Remarks section covers the details.

Some quotes that are relevant:

Gets or sets a Boolean value that specifies whether the stream Socket is using the Nagle algorithm.

The Nagle algorithm is designed to reduce network traffic by causing the socket to buffer small packets and then combine and send them in one packet under certain circumstances.

Besides, even if you could guarantee that the bits left your NIC separately, the receiving end's network layer may buffer them anyway before your client code even has a chance to see that there were two "packets".
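For reference, the NoDelay property corresponds to the standard TCP_NODELAY socket option; a Python sketch of setting it looks like this. Note that disabling Nagle only affects when the sender pushes segments out; it does not create message boundaries on the receiving side.

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Disable Nagle's algorithm: small writes are no longer held back
# waiting to be coalesced with later writes.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
print(sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY))  # nonzero when set
# This changes when segments are *sent*, not how the receiver sees
# them -- a single read can still return several "messages" at once.
```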

Upvotes: 1
