Reputation: 41
I am writing a simple TCP communication program using TcpListener and TcpClient on .NET 4.7.1.
I designed my own protocol as follows: for each "data unit", the first 4 bytes are an int giving the length of the data body. Once a complete "data unit" has been received, the data body (as a byte[]) is passed to the upper level.
My read and write functions are:
public static byte[] ReadBytes(Stream SocketStream)
{
    int numBytesToRead = 4, numBytesRead = 0, n;
    byte[] Length = new byte[4];
    do
    {
        n = SocketStream.Read(Length, numBytesRead, numBytesToRead);
        numBytesRead += n;
        numBytesToRead -= n;
    } while (numBytesToRead > 0 && n != 0);
    if (n == 0) return null; //network error

    if (!BitConverter.IsLittleEndian) Array.Reverse(Length);
    numBytesToRead = BitConverter.ToInt32(Length, 0); //get the data body length
    numBytesRead = 0;
    byte[] Data = new byte[numBytesToRead];
    do
    {
        n = SocketStream.Read(Data, numBytesRead, numBytesToRead);
        numBytesRead += n;
        numBytesToRead -= n;
    } while (numBytesToRead > 0 && n != 0);
    if (n == 0) return null; //network error

    return Data;
}
public static void SendBytes(Stream SocketStream, byte[] Data)
{
    byte[] Length = BitConverter.GetBytes(Data.Length);
    if (!BitConverter.IsLittleEndian) Array.Reverse(Length);
    SocketStream.Write(Length, 0, Length.Length);
    SocketStream.Write(Data, 0, Data.Length);
    SocketStream.Flush();
}
And I made a simple echo program to test the RTT:
private void EchoServer()
{
    var Listener = new TcpListener(System.Net.IPAddress.Any, 23456);
    Listener.Start();
    var ClientSocket = Listener.AcceptTcpClient();
    var SW = new System.Diagnostics.Stopwatch();
    var S = ClientSocket.GetStream();
    var Data = new byte[1];
    Data[0] = 0x01;
    Thread.Sleep(2000);
    SW.Restart();
    SendBytes(S, Data); //send the PING signal
    ReadBytes(S); //this method blocks until signal received from client
    //System.Diagnostics.Debug.WriteLine("Ping: " + SW.ElapsedMilliseconds);
    Text = "Ping: " + SW.ElapsedMilliseconds;
    SW.Stop();
}
private void EchoClient()
{
    var ClientSocket = new TcpClient();
    ClientSocket.Connect("serverIP.com", 23456);
    var S = ClientSocket.GetStream();
    var R = ReadBytes(S); //wait for PING signal from server
    SendBytes(S, R); //respond immediately
}
In "ReadBytes", I have to read 4 bytes from the NetworkStream first in order to know how many bytes I have to read next. So in total I have to call NetworkStream.Read twice, as shown in above codes.
The problem is: I discovered that calling it twice resulted in around 110ms RTT. While calling it once(regardless of data completeness) is only around 2~10ms(put a "return Length;" immediately after the first do-while loop, or comment out the first do-while loop and hard-code the data length, or read as much as it can in one call to "Read").
If I go for the "read as much as it can in one call" method, it may result in "over-read" of data and I have to write more lines to handle the over-read data to assemble next "data unit" correctly.
Does anyone know the cause of the almost 50x overhead?
As I read from Microsoft:
Microsoft improved the performance of all streams in the .NET Framework by including a built-in buffer.
So even if I call .Read twice, it's only reading from memory, am I correct?
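(If it matters: my understanding is that NetworkStream itself does not add such a buffer, so explicit buffering would have to be added by wrapping the stream, for example like below. This is just a sketch I have not benchmarked, and the 8192 buffer size is arbitrary.)

// Hypothetical wrapper: BufferedStream reads ahead from the underlying NetworkStream,
// so the two small Reads in ReadBytes can both be served from its in-memory buffer
// (assuming the bytes have already arrived at the socket).
var buffered = new BufferedStream(ClientSocket.GetStream(), 8192);
var data = ReadBytes(buffered); // same helper as above, now reading via the buffer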
(If you want to test the code, please do it on a real server and connect from, say, your home PC; running it on localhost always returns 0 ms.)
Upvotes: 0
Views: 656
Reputation: 41
Thanks to Jeroen Mostert's comment reminder. I added:
ClientSocket.NoDelay = true;
ClientSocket.Client.NoDelay = true;
to both the client and server side, and the annoying delay is gone; the RTT is back to the expected value.
Further tests showed that each side (client and server) contributed around 50 ms of delay without the NoDelay option, so in total around 100 ms of RTT "overhead".
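For anyone reproducing this, the option is set on the sockets used by the echo test in the question; this is just a sketch against that code ("serverIP.com" is the same placeholder host):

// Server side: disable Nagle's algorithm on the accepted connection
var Listener = new TcpListener(System.Net.IPAddress.Any, 23456);
Listener.Start();
var ServerSide = Listener.AcceptTcpClient();
ServerSide.NoDelay = true;          // TcpClient property
ServerSide.Client.NoDelay = true;   // same flag on the underlying Socket

// Client side: set it before connecting (or right after, before sending)
var ClientSide = new TcpClient();
ClientSide.NoDelay = true;
ClientSide.Client.NoDelay = true;
ClientSide.Connect("serverIP.com", 23456);

As far as I can tell, TcpClient.NoDelay just sets the same socket option as Client.NoDelay, so setting both is redundant but does no harm.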
Upvotes: 2