Reputation: 801
I'm trying to squeeze as much out of my UDP data-collection socket as possible using the following code:
public void Start()
{
    if (socket == null) Open();
    this.stats.PositionServerStarted = DateTime.Now;
    this.state = SocketWorkerState.Running;
    AsyncBeginReceive();
}
private void AsyncBeginReceive()
{
    DataPacket dataPacket = new DataPacket(socket, this.settings.IPAddress,
        this.settings.Port, SocketWorker.MaxDataRead);
    Interlocked.Increment(ref stats.ProcessingThreads);
    socket.BeginReceiveFrom(dataPacket.BytePacket, 0,
        SocketWorker.MaxDataRead, SocketFlags.None,
        ref dataPacket.RemoteEndPoint, new AsyncCallback(AsyncEndReceive),
        dataPacket);
}
private void AsyncEndReceive(IAsyncResult ar)
{
    // Post the next receive before completing this one.
    AsyncBeginReceive();
    DataPacket dataPacket = (DataPacket)ar.AsyncState;
    dataPacket.EnteredQueue = DateTime.Now;
    dataPacket.DataLength = dataPacket.SourceSocket.EndReceiveFrom(ar,
        ref dataPacket.RemoteEndPoint);
    Interlocked.Decrement(ref stats.ProcessingThreads);
    Interlocked.Increment(ref this.stats.PacketsQueued);
    ThreadPool.QueueUserWorkItem(new WaitCallback(OnPacketArrived),
        dataPacket);
    Interlocked.Decrement(ref this.stats.PacketsQueued);
}
private void OnPacketArrived(object packet)
{
    // go ahead and process the packet
}
Does anyone have any ideas on how to improve this? Is calling AsyncBeginReceive() from AsyncEndReceive() considered best practice?
I've tweaked the thread pool with

ThreadPool.SetMinThreads(20, 20);
ThreadPool.SetMaxThreads(250, 250);

but, if I'm honest, this doesn't appear to have much effect on performance.
Upvotes: 4
Views: 3798
Reputation: 36494
Optimize only if what you have isn't working well enough, and if you must optimize, do so based on metrics -- try different approaches and use what works best.
That said, I'm not convinced that this approach is any more effective than simply having n + 1 threads that each loop, blocking in ReceiveFrom and then completing each packet's processing synchronously, avoiding both async I/O and inter-thread queuing. Set n to the count of bottleneck devices (typically processor cores, though spindles may be your limiting resource if you're heavily I/O bound, or perhaps both); the "+ 1" is just to avoid idling a device under load when a thread is doing something else.
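A minimal sketch of that loop, reusing the question's socket and state fields (StartBlockingReceivers and ProcessPacket are illustrative names, not from the original code; assumes System.Net, System.Net.Sockets and System.Threading):

private void StartBlockingReceivers(int n)
{
    for (int i = 0; i < n + 1; i++)
    {
        Thread worker = new Thread(delegate()
        {
            byte[] buffer = new byte[SocketWorker.MaxDataRead];
            EndPoint remote = new IPEndPoint(IPAddress.Any, 0);
            while (state == SocketWorkerState.Running)
            {
                // Blocks until a datagram arrives; multiple threads may
                // call ReceiveFrom on the same UDP socket concurrently.
                int length = socket.ReceiveFrom(buffer, ref remote);
                ProcessPacket(buffer, length, remote); // synchronous, no queuing
            }
        });
        worker.IsBackground = true;
        worker.Start();
    }
}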
I've had some very painful experiences with both ThreadPool and BeginXxx-based async I/O in .NET (both the full and Compact Frameworks), so I don't generally recommend their use. Then again, much of the pain (from async I/O at least) may have been due to others' crummy code; ThreadPool just isn't reliable on the Compact Framework. :-)
Upvotes: 0
Reputation: 21616
Personally, I'd post multiple recvs when you start up; that is, call AsyncBeginReceive() a tunable number of times. That way you don't have to wait for one recv to complete before the next can begin. In unmanaged terms, it's generally a good idea to always have an async read pending so that the I/O subsystem can read straight into your buffers where possible. This may or may not translate into higher performance, but it's quite likely to translate into fewer lost datagrams.
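For example, Start() could post several receives instead of one (the count of 5 here is illustrative; tune it for your traffic):

public void Start()
{
    if (socket == null) Open();
    this.stats.PositionServerStarted = DateTime.Now;
    this.state = SocketWorkerState.Running;
    // Keep several receives pending so the I/O subsystem always has a buffer.
    for (int i = 0; i < 5; i++)
        AsyncBeginReceive();
}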
You will also probably want to increase the socket's recv buffer size, as this too helps prevent dropped datagrams.
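Something like this, set before you start receiving (the 1 MB figure is just a starting point; the OS default is often only 8 KB):

// Give the kernel room to buffer bursts until the app catches up.
socket.ReceiveBufferSize = 1024 * 1024;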
I doubt you'll need to tweak the thread pool settings. In unmanaged code you can service a fully loaded network pipe with fewer than 5 or so threads using async I/O; your 250 is way off the mark.
Upvotes: 1
Reputation: 12314
I don't think async is a good way to achieve high performance. Read manually and ask for larger chunks of data in each pass. If you have a low number of concurrent connections, you can even dedicate a thread to each so the sockets have your full attention. Async operations have a lot of overhead compared to a direct approach.
If you need really high throughput, consider setting the thread priority. Also note that .NET thread priority is only relative to the process, so you may want to raise the process priority too.
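A sketch of both, assuming System.Diagnostics and System.Threading are available (raising priorities can starve other processes and threads, so measure carefully):

// Process-level priority: makes the whole process more competitive.
Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.High;
// Thread-level priority: relative to the other threads in this process.
Thread.CurrentThread.Priority = ThreadPriority.AboveNormal;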
Optimization settings in the project properties (e.g. compiler optimizations in a Release build) can also have a noticeable impact if you're chasing high throughput.
Upvotes: 0