Reputation: 497
I have a TcpListener server that listens on a dedicated port, and there is one and only one client that connects to it. The client establishes the connection and keeps it alive by sending heartbeat messages every 60 seconds. It also sends different types of request messages (the first 4 bytes of each message are reserved for the message size), which the server receives and hands off, together with the socket, to a Task (asynchronously) that performs a lengthy process including DB I/O and sends the response back over the same socket. This has been working fine, but recently the client started sending 1000 requests asynchronously. The server responds to the initial requests at a good pace, but after serving around 100 requests the responses become slow. I would appreciate your expert opinion on how to make the response handling robust.
class ServerA
{
    .......
    .......
    private void StartServer()
    {
        ......
        socket = tcpListener.AcceptSocket();
        Process(socket);
        .......
    }

    private void Process(Socket clientSocket)
    {
        while (true)
        {
            byte[] msg = new byte[1024 * 1024];
            int totalMsgLength = clientSocket.Receive(msg);
            ClientRequest req = new ClientRequest();
            byte[] request = new byte[totalMsgLength];
            Buffer.BlockCopy(msg, 0, request, 0, totalMsgLength);
            req.HandleRequest(clientSocket, request);
        }
    }
}
class ClientRequest
{
    byte[] request;
    Socket clientSocket;

    public void HandleRequest(Socket clientSocket, byte[] request)
    {
        try
        {
            this.request = request;
            this.clientSocket = clientSocket;
            Task.Run(() =>
            {
                Entertain(null);
            });
        }
        catch (Exception ex)
        {
            .....
        }
    }

    private void Entertain(object callback)
    {
        request = admin.PerformAction(ref request); // Long process
        clientSocket.Send(request);
    }
}
Upvotes: 0
Views: 784
Reputation: 1062502
I would imagine, although we can't be sure, that a lot of this is due to the 1MiB buffer, followed by a right-sized copy, that you allocate on every read; scale that up to 1000 requests per second, and yes: that would hurt a lot. Effective buffer management is hard, and the simple approaches are only suitable for low-volume work.
I'm honestly more concerned about the blind receives, though. You mentioned TCP, and TCP is a stream protocol, not a packet protocol; there is no correlation whatsoever between how you receive the bytes and how they were sent, in terms of where each message ends, especially at high volume / frequency. You need a "framing" protocol. Without one, what you pass to HandleRequest could be an incomplete message, multiple messages, etc.
You should also break; out of the loop if totalMsgLength <= 0, since a return value of zero means the connection has been closed.
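For illustration only, here is a rough sketch of what a length-prefixed read loop might look like with plain sockets, assuming the 4-byte prefix is a little-endian Int32 giving the payload length (adjust to whatever your client actually sends). It reads exactly one complete message per iteration and stops when the connection is closed:

// Sketch only; assumes a 4-byte little-endian length prefix followed by
// exactly that many payload bytes.
private void Process(Socket clientSocket)
{
    byte[] lengthPrefix = new byte[4];
    while (true)
    {
        // Read the 4-byte length prefix in full.
        if (!ReceiveExact(clientSocket, lengthPrefix, 4)) break;
        int messageLength = BitConverter.ToInt32(lengthPrefix, 0);

        // Read exactly that many payload bytes.
        byte[] payload = new byte[messageLength];
        if (!ReceiveExact(clientSocket, payload, messageLength)) break;

        new ClientRequest().HandleRequest(clientSocket, payload);
    }
}

// Loops until 'count' bytes have been received; returns false if the
// connection is closed (Receive returns 0) before the buffer is full.
private static bool ReceiveExact(Socket socket, byte[] buffer, int count)
{
    int offset = 0;
    while (offset < count)
    {
        int read = socket.Receive(buffer, offset, count - offset, SocketFlags.None);
        if (read <= 0) return false; // connection closed
        offset += read;
    }
    return true;
}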
The "pipelines" API would be a very effective solution to both problems; unfortunately, the "pipelines" API is only fully implemented for servers, but: clients can use Pipelines.Sockets.Unofficial (nuget) to get the same features at clients (as the name suggests: it is not an official package). Since you're actually implementing a server here: Kestrel allows you to implement a very efficient arbitrary TCP server with the full pipelines API, simply by using UseConnectionHandler<T>
for your TCP handler T
.
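To give a flavour only (this is a sketch, not a drop-in replacement; MyTcpHandler, TryReadMessage and ProcessMessage are placeholder names, and the little-endian prefix is again an assumption), a Kestrel connection handler looks roughly like this:

// Sketch of a raw TCP handler hosted by Kestrel (ASP.NET Core 3.x+).
using System;
using System.Buffers;
using System.Buffers.Binary;
using System.IO.Pipelines;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Connections;

public class MyTcpHandler : ConnectionHandler
{
    public override async Task OnConnectedAsync(ConnectionContext connection)
    {
        PipeReader input = connection.Transport.Input;
        while (true)
        {
            ReadResult result = await input.ReadAsync();
            ReadOnlySequence<byte> buffer = result.Buffer;

            // Slice complete, length-prefixed messages off the front of the
            // buffer; any incomplete tail is retained for the next read.
            while (TryReadMessage(ref buffer, out ReadOnlySequence<byte> message))
            {
                await ProcessMessage(connection, message);
            }

            input.AdvanceTo(buffer.Start, buffer.End);
            if (result.IsCompleted) break; // client disconnected
        }
    }

    private static bool TryReadMessage(ref ReadOnlySequence<byte> buffer,
        out ReadOnlySequence<byte> message)
    {
        if (buffer.Length >= 4)
        {
            Span<byte> prefix = stackalloc byte[4];
            buffer.Slice(0, 4).CopyTo(prefix);
            int length = BinaryPrimitives.ReadInt32LittleEndian(prefix); // assumed prefix format
            if (buffer.Length >= 4 + length)
            {
                message = buffer.Slice(4, length);
                buffer = buffer.Slice(4 + length);
                return true;
            }
        }
        message = default;
        return false;
    }

    private static Task ProcessMessage(ConnectionContext connection,
        ReadOnlySequence<byte> message)
    {
        // Placeholder: the lengthy DB work goes here, and the response is
        // written back via connection.Transport.Output.
        return Task.CompletedTask;
    }
}

// Registration sketch (Program/Startup):
// webBuilder.ConfigureKestrel(options => options.ListenAnyIP(5000,
//     listen => listen.UseConnectionHandler<MyTcpHandler>()));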
Either way, the pipelines API gives you efficient buffer management and the tools to handle framing correctly.
You can read more about the "pipelines" API in a 3.5-part blog series of mine here (it is way too big a topic to cover in a post here).
If rewriting it to use pipelines is too much, you can probably achieve a lot simply by moving the 1MiB buffer outside the loop, so it only gets allocated once. That still leaves the inner allocations; those could come from the array pool (ArrayPool<byte>.Shared), which gives oversized arrays, but you could pass a ReadOnlyMemory<byte> or ArraySegment<byte> to the HandleRequest method to fix that.
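As a rough sketch of that intermediate option (assuming HandleRequest is changed to accept an ArraySegment<byte>, and that it returns the rented array to the pool when it has finished with it):

// Sketch only: single reusable receive buffer plus pooled per-request copies.
// Requires: using System.Buffers;
private void Process(Socket clientSocket)
{
    byte[] msg = new byte[1024 * 1024]; // allocated once, outside the loop
    while (true)
    {
        int totalMsgLength = clientSocket.Receive(msg);
        if (totalMsgLength <= 0) break; // connection closed

        // Rent an (oversized) array from the shared pool instead of allocating.
        byte[] rented = ArrayPool<byte>.Shared.Rent(totalMsgLength);
        Buffer.BlockCopy(msg, 0, rented, 0, totalMsgLength);

        // Pass the segment so the handler knows the real length; the handler
        // must call ArrayPool<byte>.Shared.Return(rented) when it is done.
        new ClientRequest().HandleRequest(clientSocket,
            new ArraySegment<byte>(rented, 0, totalMsgLength));
    }
}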
That still leaves the framing problem, however. This is a major problem, and needs attention.
Upvotes: 1