user815512

Reputation: 194

How will a C# TCP socket block if I send too many bytes?

I am writing a TCP server which needs to send data to connected remote hosts. I'd prefer socket send calls to never block. To facilitate this I am using Socket.Select to identify writable sockets and writing to those sockets using Socket.Send. The MSDN article for Socket.Select states:

If you already have a connection established, writability means that all send operations will succeed without blocking.
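For reference, the pattern I'm using looks roughly like this (a sketch; the connections list and payload are placeholder stand-ins for my real server state):

var connections = new List<Socket>(); // placeholder: filled from listener.Accept() elsewhere
var payload = new byte[] { 1, 2, 3, 4 }; // placeholder data

// Select prunes the list in place, leaving only the sockets that are writable
var writable = new List<Socket>(connections);
Socket.Select(null, writable, null, 1000);

foreach (var socket in writable)
{
    // per the quote above, this Send should succeed without blocking
    var sent = socket.Send(payload, 0, payload.Length, SocketFlags.None);
}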

I am concerned about the case where the remote socket is not actively draining its receive buffer, that buffer fills, and TCP pushes back onto my server's socket. In this case I thought the server would not be able to send and the accepted socket's send buffer would fill.

I wanted to know how Socket.Send behaves when the send buffer is partially full in this case. I hoped it would accept as many bytes as it can and return that count. I wrote a snippet to test this, but something strange happened: it always sends all the bytes I give it!

snippet:

var LocalEndPoint = new IPEndPoint(IPAddress.Any, 22790);
var listener = CreateSocket();
listener.Bind(LocalEndPoint);
listener.Listen(100);
Console.WriteLine("begun listening...");

var receiver = CreateSocket();
receiver.Connect(new DnsEndPoint("localhost", 22790));
Console.WriteLine("connected.");

Thread.Sleep(100);
var remoteToReceiver = listener.Accept();
Console.WriteLine("connection accepted {0} receive size {1} send size.", remoteToReceiver.ReceiveBufferSize, remoteToReceiver.SendBufferSize);

var stopwatch = Stopwatch.StartNew();
var bytes = new byte[] {1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16};
var bytesSent = remoteToReceiver.Send(bytes, 0, 16, SocketFlags.None);
stopwatch.Stop();
Console.WriteLine("sent {0} bytes in {1}", bytesSent, stopwatch.ElapsedMilliseconds);

public Socket CreateSocket()
{
    var socket = new Socket(SocketType.Stream, ProtocolType.Tcp)
    {
        ReceiveBufferSize = 4,
        SendBufferSize = 8,
        NoDelay = true,
    };
    return socket;
}

I listen on an endpoint, accept the new connection, never drain the receiver's socket buffer, and send more data than the buffers alone can hold, yet my output is:

begun listening...
connected.
connection accepted, 4 receive size 8 send size.
sent 16 bytes in 0

So somehow Socket.Send is managing to send more bytes than it is set up for. Can anyone explain this behavior?

I additionally added a receive call and the socket does end up receiving all the bytes sent.
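The receive side looked roughly like this (a sketch; the loop guards against partial reads):

var buffer = new byte[16];
var received = 0;
while (received < buffer.Length)
{
    // Receive may return fewer bytes than requested, so loop until done
    received += receiver.Receive(buffer, received, buffer.Length - received, SocketFlags.None);
}
Console.WriteLine("received {0} bytes.", received);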

Upvotes: 1

Views: 818

Answers (1)

usr

Reputation: 171178

I believe the Windows TCP stack always accepts all the bytes, because all Windows apps assume it does. I know that on Linux it does not do that. In any case, the poll/select style is obsolete. Async sockets are best done with await; the next best option is the APM pattern.
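For example, a send loop in the await style might look like this (a sketch, assuming a connected Socket and the Task-based SendAsync overload):

static async Task SendAllAsync(Socket socket, byte[] data)
{
    var offset = 0;
    while (offset < data.Length)
    {
        // SendAsync returns how many bytes were accepted; no thread is blocked while waiting
        var segment = new ArraySegment<byte>(data, offset, data.Length - offset);
        offset += await socket.SendAsync(segment, SocketFlags.None);
    }
}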

No threads are in use while the IO is in progress, which is what you want. This goes for all the IO-related APM, EAP and TAP APIs in the .NET Framework. The callbacks are queued to the thread pool; you can't do better than that. .NET IO is efficient, so don't worry about it much. Spawning a new thread per IO would frankly be stupid.
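For comparison, a single send in the APM style looks like this (a sketch; the completion callback runs on a thread-pool thread):

socket.BeginSend(data, 0, data.Length, SocketFlags.None, ar =>
{
    // completes on a thread-pool thread; no thread was blocked in between
    var sent = ((Socket)ar.AsyncState).EndSend(ar);
}, socket);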

Async network IO is actually quite simple when you use await and follow best practices. Efficiency-wise, all async IO techniques in .NET are comparable.

Upvotes: 2
