Reputation: 4678
So I've been looking at socket programming and reading up on ways to create high performance servers.
Reading about the AcceptAsync method of Socket, I learned that the calls may execute synchronously (and return false) if a connection is immediately available, and that the socket has to be put back into an accepting state after each accepted connection. That made me wonder whether, although a socket accepting connections with AcceptAsync "waits" asynchronously, it ultimately only processes one new connection at a time, even if 1000 clients connect at once, and whether that could cap performance depending on how quickly clients can be handed off to another thread for processing.
This then led me to wonder whether I could write my class with this in mind, using a single instance of SocketAsyncEventArgs for all connections rather than the object pool I had originally intended.
Thinking about this some more, I wondered what would happen in that case if multiple AcceptAsync calls were made while no connections were available. So I made this very, very rough test application to find out:
using System;
using System.Net;
using System.Net.Sockets;
using System.Threading;

class Program
{
    private static Socket s;

    static void Main(string[] args)
    {
        s = new Socket(IPAddress.Any.AddressFamily, SocketType.Stream, ProtocolType.Tcp);
        s.Bind(new IPEndPoint(IPAddress.Any, 10001));
        s.Listen(10);

        // Post four outstanding accepts, each with its own SocketAsyncEventArgs.
        // (The bool return of AcceptAsync is ignored in this rough test, i.e. it
        // assumes the calls complete asynchronously.)
        var e = new SocketAsyncEventArgs();
        e.Completed += EOnCompleted;
        s.AcceptAsync(e);

        e = new SocketAsyncEventArgs();
        e.Completed += EOnCompleted;
        s.AcceptAsync(e);

        e = new SocketAsyncEventArgs();
        e.Completed += EOnCompleted;
        s.AcceptAsync(e);

        e = new SocketAsyncEventArgs();
        e.Completed += EOnCompleted;
        s.AcceptAsync(e);

        // Keep the process alive.
        while (true)
        {
        }
    }

    private static void EOnCompleted(object sender, SocketAsyncEventArgs socketAsyncEventArgs)
    {
        Console.WriteLine(socketAsyncEventArgs.AcceptSocket.RemoteEndPoint);

        // Simulate slow per-connection work on the completion thread.
        Thread.Sleep(10000);

        // Post a replacement accept with a fresh SocketAsyncEventArgs.
        var e = new SocketAsyncEventArgs();
        e.Completed += EOnCompleted;
        s.AcceptAsync(e);
    }
}
I used a TCP testing application to spam it with connections, and the results showed that the multiple AcceptAsync calls created multiple Accept/Complete cycles that seemed to execute in parallel.
My question is, does this actually have any benefit? Is it a legitimate way to allow faster processing of incoming connections? Are there any drawbacks?
EDIT 1: Looking into this further, it seems technically possible that, if data is always available for the async calls to process, they will all keep completing synchronously and never return to accept a new connection, or at least only do so after a significant delay.
How can you produce a high-performance server if this is indeed the case?
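To make concrete what I mean by the calls running synchronously, here is a rough sketch (not my real code) of the usual AcceptAsync loop; ProcessAccept is just a placeholder for whatever hands the accepted socket off. The worry is the while loop below spinning for a long time when connections (or data, in the ReceiveAsync case) keep being immediately available:

using System.Net.Sockets;

static class AcceptHelper
{
    public static void StartAccept(Socket listener, SocketAsyncEventArgs e)
    {
        e.AcceptSocket = null;   // must be cleared before the args can be reused

        // AcceptAsync returns false when it completes synchronously, and in that
        // case the Completed event is NOT raised, so it has to be handled here.
        while (!listener.AcceptAsync(e))
        {
            ProcessAccept(e);
            e.AcceptSocket = null;
        }
        // When AcceptAsync returns true, the Completed handler takes over; it
        // would normally call ProcessAccept and then StartAccept again.
    }

    static void ProcessAccept(SocketAsyncEventArgs e)
    {
        // placeholder: pass e.AcceptSocket to the rest of the server
    }
}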
Upvotes: 2
Views: 1748
Reputation: 171246
Normally, all that is needed is this code:
while (true) {
    var socket = listener.Accept();
    Task.Run(async () => await RunConnectionAsync(socket));
}
The user-mode portion of this code runs very quickly; almost all of the time is spent in the kernel. It accepts connections very fast, and no connections will be dropped thanks to the kernel-managed backlog.
There's rarely a need for multiple outstanding accept calls to reach your perf goals. If you do need that, simply run this same loop on multiple threads, as sketched below.
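For example, a minimal sketch of that; the port, thread count and RunConnectionAsync body are all placeholders, not a prescription:

using System;
using System.Net;
using System.Net.Sockets;
using System.Threading;
using System.Threading.Tasks;

class MultiAcceptServer
{
    static void Main()
    {
        var listener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
        listener.Bind(new IPEndPoint(IPAddress.Any, 10001));
        listener.Listen(512);

        // Run the same blocking accept loop on a couple of threads.
        for (int i = 0; i < 2; i++)
        {
            new Thread(() =>
            {
                while (true)
                {
                    // Blocking Accept can safely be called from several threads on
                    // the same listening socket; the kernel gives each new
                    // connection to exactly one of them.
                    var socket = listener.Accept();
                    Task.Run(() => RunConnectionAsync(socket));
                }
            })
            { IsBackground = true }.Start();
        }

        Thread.Sleep(Timeout.Infinite);   // keep the process alive
    }

    static async Task RunConnectionAsync(Socket socket)
    {
        // Placeholder for the real per-connection work.
        await Task.Yield();
        socket.Dispose();
    }
}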
The SocketAsyncEventArgs API borders on being obsolete. Task-based IO is far more convenient and just a little more CPU intensive.
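For comparison, a Task-based accept loop looks roughly like this; the Task-returning AcceptAsync overload comes from SocketTaskExtensions / modern .NET, and the delegate stands in for your per-connection handler:

using System;
using System.Net.Sockets;
using System.Threading.Tasks;

static class TaskBasedAccept
{
    public static async Task AcceptLoopAsync(Socket listener, Func<Socket, Task> handleConnectionAsync)
    {
        while (true)
        {
            // Awaitable accept instead of the SocketAsyncEventArgs callback dance.
            Socket socket = await listener.AcceptAsync();

            // Hand the connection off without awaiting it, so the loop goes
            // straight back to accepting the next one.
            _ = Task.Run(() => handleConnectionAsync(socket));
        }
    }
}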
In particular, if you create a new SocketAsyncEventArgs for each operation, you lose all the benefits of that pattern. That's pointless.
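To illustrate what those benefits are: the point of the pattern is that SocketAsyncEventArgs instances (and any buffers they pin) are allocated once and reused across operations, for example via a trivial pool like this sketch (illustrative only, not production code):

using System.Collections.Concurrent;
using System.Net.Sockets;

sealed class SocketAsyncEventArgsPool
{
    private readonly ConcurrentBag<SocketAsyncEventArgs> _pool = new ConcurrentBag<SocketAsyncEventArgs>();

    // Hand out a pooled instance, or create one if the pool is empty.
    public SocketAsyncEventArgs Rent()
        => _pool.TryTake(out var args) ? args : new SocketAsyncEventArgs();

    // Reset per-operation state and put the instance back for reuse.
    public void Return(SocketAsyncEventArgs args)
    {
        args.AcceptSocket = null;
        _pool.Add(args);
    }
}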
You don't need async IO for accepting from sockets in almost all cases. It costs more CPU time to execute and therefore adds slightly more latency.
Upvotes: 2