Eric

Reputation: 2283

socket buffer size: pros and cons of bigger vs smaller

I've never really worked with COM sockets before, and now have some code that is listening to a rather steady stream of data (500Kb/s - 2000Kb/s).

I've experimented with different sizes, but am not really sure what I'm conceptually doing.

byte[] m_SocketBuffer = new byte[4048];
//vs
byte[] m_SocketBuffer = new byte[8096]; 

The socket I'm using is System.Net.Sockets.Socket, and this is my constructor:

new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp)

My questions are:

  1. Is there a general trade-off for large vs. small buffers?
  2. How does one go about sizing the buffer? What should you use as a gauge?

I'm retrieving the data like this:

string socketData = Encoding.ASCII.GetString(m_SocketBuffer, 0, iReceivedBytes);
while (socketData.Length > 0)
{ //do stuff }
  3. Does my reading event happen when the buffer is full? Like whenever the socket buffer hits the threshold, that's when I can read from it?

Upvotes: 1

Views: 3391

Answers (1)

Peter Duniho

Reputation: 70671

Short version:

  • Optimal buffer size is dependent on a lot of factors, including the underlying network transport and how your own code handles the I/O.
  • Tens of KB is probably about right for a high-volume server moving a lot of data. But you can use much smaller buffers if you know that remote endpoints won't ever be sending you a lot of data all at once.
  • Buffers are pinned for the duration of the I/O operation, which can cause or exacerbate heap fragmentation. For a really high-volume server, it might make sense to allocate very large buffers (larger than 85,000 bytes) so that the buffer is allocated from the large-object heap (which either has no fragmentation issues, or is in a perpetual state of fragmentation, depending on how you look at it :) ), and then use just a portion of each large buffer for any given I/O operation.
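That last bullet can be sketched in code. This is only an illustration of the idea, not a prescription: the 1 MB buffer size and 16 KB slice size are assumptions you would tune for your own load, and the slicing scheme here is deliberately minimal (a real server would track which segments are in use).

```csharp
using System;
using System.Diagnostics;

class BufferPoolSketch
{
    const int LargeBufferSize = 1 << 20;  // 1 MB: well above the ~85,000-byte large-object-heap threshold
    const int SegmentSize = 16 * 1024;    // per-operation slice (assumption: tune for your traffic)

    static void Main()
    {
        // A single allocation this large lands on the large-object heap, so pinning it
        // for overlapped socket I/O cannot fragment the normal, compacting heaps.
        byte[] largeBuffer = new byte[LargeBufferSize];

        // Hand out fixed-size slices of the one big buffer, one slice per pending I/O.
        int segmentCount = LargeBufferSize / SegmentSize;
        var segments = new ArraySegment<byte>[segmentCount];
        for (int i = 0; i < segmentCount; i++)
            segments[i] = new ArraySegment<byte>(largeBuffer, i * SegmentSize, SegmentSize);

        // Each segment's (Array, Offset, Count) can then back an individual receive,
        // e.g. via SocketAsyncEventArgs.SetBuffer(buffer, offset, count).
        Console.WriteLine($"{segments.Length} segments of {SegmentSize} bytes");
    }
}
```

The point is simply that one big pinned-friendly allocation replaces many small heap allocations that would otherwise be pinned at unpredictable addresses.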

Re: your specific questions:

  1. Is there a general trade-off for large vs. small buffers?

Probably the most obvious is the usual: a buffer larger than you will ever actually need is just wasting space.

Making buffers too small forces more I/O operations, possibly forcing more thread context switches (depending on how you are doing I/O), and for sure increasing the number of program statements that have to be executed.

There are other trade-offs of course, but to go into each and every one of them would be far too broad a discussion for this forum.
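The "more I/O operations" point is easy to observe directly. The loopback sketch below (a stand-in for a real remote sender; the 64 KB payload and buffer sizes are arbitrary assumptions) counts how many `Receive` calls it takes to drain the same amount of data with a small buffer versus a larger one. Exact counts depend on timing and the OS, but the small buffer can never take fewer reads than `total / bufferSize`.

```csharp
using System;
using System.Diagnostics;
using System.Net;
using System.Net.Sockets;
using System.Threading.Tasks;

class ReadCountDemo
{
    // Count how many Receive() calls it takes to drain `total` bytes with a given buffer size.
    static int CountReads(int bufferSize, int total)
    {
        var listener = new TcpListener(IPAddress.Loopback, 0);
        listener.Start();
        int port = ((IPEndPoint)listener.LocalEndpoint).Port;

        var serverTask = Task.Run(() =>
        {
            using var server = listener.AcceptSocket();
            server.Send(new byte[total]);            // one burst of data
            server.Shutdown(SocketShutdown.Send);    // then a graceful FIN
        });

        using var client = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
        client.Connect(IPAddress.Loopback, port);

        byte[] buffer = new byte[bufferSize];
        int reads = 0, n;
        while ((n = client.Receive(buffer)) > 0)     // 0 means the peer shut down
            reads++;

        serverTask.Wait();
        listener.Stop();
        return reads;
    }

    static void Main()
    {
        int smallReads = CountReads(256, 64 * 1024);
        int largeReads = CountReads(16 * 1024, 64 * 1024);
        Console.WriteLine($"256-byte buffer: {smallReads} reads; 16 KB buffer: {largeReads} reads");
    }
}
```

Every one of those extra reads is a syscall, and depending on your I/O model, potentially a thread context switch as well.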

  2. How does one go about sizing the buffer? What should you use as a gauge?

I'd start with a size that seems "reasonable", and then experiment from there. Adjust the buffer size in various load testing scenarios, increasing and decreasing, to see what if any effect there is on performance.

  3. Does my reading event happen when the buffer is full? Like whenever the socket buffer hits the threshold, that's when I can read from it?

When you read from the socket, the network layer will put as much data into your buffer as it can. If there is more data available than will fit, it fills the buffer. If there is less data available than will fit, the operation completes without filling the buffer, but only once at least one byte has been placed into it; the only time a read operation completes with zero bytes is when the connection is being shut down.
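Those semantics can be seen in a minimal loopback sketch (the server side here is just a stand-in for your real data source, and the message text is arbitrary): `Receive` returns as soon as *any* data is available, regardless of buffer size, and returns 0 only after the peer shuts down.

```csharp
using System;
using System.Diagnostics;
using System.Net;
using System.Net.Sockets;
using System.Text;
using System.Threading.Tasks;

class ReceiveLoopDemo
{
    static void Main()
    {
        // Loopback server: sends a short message, then signals shutdown with a FIN.
        var listener = new TcpListener(IPAddress.Loopback, 0);
        listener.Start();
        int port = ((IPEndPoint)listener.LocalEndpoint).Port;

        var serverTask = Task.Run(() =>
        {
            using var server = listener.AcceptSocket();
            server.Send(Encoding.ASCII.GetBytes("hello from the stream"));
            server.Shutdown(SocketShutdown.Send);
        });

        using var client = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
        client.Connect(IPAddress.Loopback, port);

        byte[] m_SocketBuffer = new byte[4096];
        var received = new StringBuilder();

        // Receive completes as soon as at least one byte is available; it does NOT
        // wait for the buffer to fill. A return value of 0 means the peer shut down.
        int iReceivedBytes;
        while ((iReceivedBytes = client.Receive(m_SocketBuffer)) > 0)
        {
            received.Append(Encoding.ASCII.GetString(m_SocketBuffer, 0, iReceivedBytes));
        }

        serverTask.Wait();
        listener.Stop();
        Console.WriteLine(received.ToString());
    }
}
```

Note the accumulation into a `StringBuilder`: because a single logical message may arrive split across several reads, the receive loop must reassemble it rather than assume one `Receive` equals one message.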

Upvotes: 4
