Reputation: 24067
I've figured out that the default ReceiveBufferSize (8192) doesn't work for me - I lose packets: sometimes (rarely) packets are lost during receiving.
However, MSDN has a warning about increasing ReceiveBufferSize:
http://msdn.microsoft.com/ru-ru/library/system.net.sockets.socket.receivebuffersize.aspx
"A larger buffer size potentially reduces the number of empty acknowledgements (TCP packets with no data portion), but might also delay the recognition of connection difficulties. Consider increasing the buffer size if you are transferring large files, or you are using a high bandwidth, high latency connection (such as a satellite broadband provider)."
I know that I need to receive ~2000 packets per second, and each datagram contains about 50-100 bytes of data. I'm using a low-latency connection (everything is on the LAN). I'm receiving stock exchange data over UDP multicast, so traffic may sometimes jump significantly (Lehman Brothers bankruptcy, etc.), but let's assume I will not receive more than 20,000 packets per second.
How do I calculate a ReceiveBufferSize that fits my needs?
Losing packets is absolutely unacceptable to me, but I don't want to sacrifice performance, reliability, or anything else either.
Upvotes: 5
Views: 6793
Reputation: 13233
As you're using UDP, increasing the receive buffer size does not adversely affect detection of connection issues.
Size the buffer according to how long you need to be able to buffer incoming data without your software reading it from the socket - and according to how much memory is available.
You can't control the rate at which data arrives, and, among other things, the Garbage Collector might interrupt your processing, so you can't reliably control your processing rate either. You will therefore have to test and/or make assumptions, and adapt when necessary. If memory is readily available, you can buy a safety margin by increasing the receive buffer size.
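For a concrete starting point, here is a minimal sketch of requesting a larger buffer on a UDP multicast socket in .NET. The port and multicast group are made-up placeholders, and the OS may clamp the requested size, which is why the sketch reads the value back:

```csharp
using System;
using System.Net;
using System.Net.Sockets;

class MulticastReceiverSetup
{
    static void Main()
    {
        var socket = new Socket(AddressFamily.InterNetwork,
                                SocketType.Dgram, ProtocolType.Udp);

        // Request a 2 MB receive buffer instead of the 8 KB default.
        socket.ReceiveBufferSize = 2 * 1024 * 1024;

        // The OS may clamp the requested size, so read the effective value back.
        Console.WriteLine("Effective receive buffer: {0} bytes",
                          socket.ReceiveBufferSize);

        // Placeholder port and multicast group - substitute your feed's values.
        socket.Bind(new IPEndPoint(IPAddress.Any, 12345));
        socket.SetSocketOption(SocketOptionLevel.IP,
                               SocketOptionName.AddMembership,
                               new MulticastOption(IPAddress.Parse("239.0.0.1"),
                                                   IPAddress.Any));
    }
}
```

Setting ReceiveBufferSize before you start receiving matters: the buffer only protects you against bursts that arrive after it has been enlarged.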
Upvotes: 0
Reputation: 1672
There is no ideal figure one can give you for ReceiveBufferSize based on the info you have provided. Apart from your worst-case incoming data rate, it depends on the rate at which you process messages and whether you can keep that up while you are being bombarded with messages.
As mentioned in the comment to your question, you should profile your application under stressful conditions to discover the best value that works for you all the time.
Alternatively, you could start with the default value of ReceiveBufferSize and increase it when you cross a threshold where you expect to lose packets, bringing it down again when the situation normalizes. This approach is a lot of work.
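To illustrate that adaptive idea, a rough sketch follows. The three-quarters-full threshold and the doubling factor are arbitrary assumptions, and real loss detection would need sequence numbers from your feed, which this sketch does not show:

```csharp
using System;
using System.Net.Sockets;

static class BufferTuner
{
    // Call periodically from a monitoring loop alongside your receive loop.
    public static void GrowIfNearlyFull(Socket socket)
    {
        // Socket.Available reports how many received bytes are queued, unread.
        // A queue consistently near the buffer size means the OS is close to
        // dropping the next datagrams.
        if (socket.Available > socket.ReceiveBufferSize * 3 / 4)
        {
            socket.ReceiveBufferSize *= 2; // the OS may cap the new value
            Console.WriteLine("Receive buffer grown to {0} bytes",
                              socket.ReceiveBufferSize);
        }
    }
}
```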
EDITED:
On average, you will be processing data faster than you receive it. But there will be situations where you may not be able to process data at all (for example, you may have to write to a database synchronously or serially after every 100 messages you receive). In that case, you will have to ensure that your buffer can absorb the data arriving during that time.
Since, in the worst case, you can receive up to 20k packets per second, each of size 100 bytes, setting the buffer size to, say, 2 MB (20k * 100 bytes) should let you go for about a second without processing any data coming from the far end.
Likewise, if the longest you expect your thread to go without reading from the buffer is x seconds, then your buffer size should be x * 2 MB + (the expected size of the buffer at such a time).
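Plugging the question's numbers into that rule of thumb, a back-of-the-envelope check (the one-second stall is an assumed figure you would replace with your measured worst case; per-datagram bookkeeping overhead in the OS buffer means the result is a lower bound):

```csharp
using System;

class BufferSizeEstimate
{
    static void Main()
    {
        const int packetsPerSecond = 20000;  // worst-case burst from the question
        const int bytesPerPacket = 100;      // upper end of the 50-100 byte range
        const double stallSeconds = 1.0;     // assumed longest gap between reads

        int bufferSize = (int)(packetsPerSecond * bytesPerPacket * stallSeconds);
        Console.WriteLine("ReceiveBufferSize >= {0} bytes (~{1:F1} MB)",
                          bufferSize, bufferSize / (1024.0 * 1024.0));
        // Prints: ReceiveBufferSize >= 2000000 bytes (~1.9 MB)
    }
}
```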
Hope this helps.
Upvotes: 5