Reputation: 707
I'm looking at writing an application that needs to handle in the region of 200 connections/sec, and was wondering whether C# and .NET will handle this, or whether I really need to be looking at C++ to do it?
It looks like SocketAsyncEventArgs may be the way to go, but I thought I'd check before I plough into it.
Each transaction should last less than a second, but could take up to 15 seconds, if that makes any difference.
Upvotes: 1
Views: 1118
Reputation: 21616
I've written about this kind of thing here.
As long as you can design your system to avoid accumulating sockets in TIME_WAIT on the server, you should be OK from an OS point of view on anything Vista or later. You can avoid TIME_WAIT on the server either by closing the socket on the server with linger turned off, so that the TCP stack sends an RST rather than a FIN, or by having the client issue the active close and therefore go into TIME_WAIT rather than the server.
You need to think about the speed of your accepts, so use async accept calls and post a lot of them; also make the listen backlog quite large.
If you need some code to test your server, I have a 'multi client test harness' that's free to download from here. It uses ConnectEx() to issue a configurable number of async connects simultaneously, at a rate that is controllable from the command line. I talk about its usage a bit more here.
If you're thinking of using C++ rather than C# then I have some example servers available here, and a set of free I/O completion port based code available here.
THE most important thing, IMHO, is to make sure you are testing for your desired level of scalability from Day 0.
Good luck!
Upvotes: 2
Reputation: 456507
You'll run into operating system limits if you try to maintain that level of usage. You'll exhaust your ephemeral ports pretty quickly.
If you just need to handle bursts of large numbers of connections, then .NET should be as capable of handling it as C++; you may need to adjust the backlog parameter to Listen.
Upvotes: 3