Chris

Reputation: 627

C#: Deadlock while synchronizing network communication

I want to develop a network library in C# which fulfills the following requirements: it has to handle incoming and outgoing requests over the same connection (server and client side run the same code), and it should offer a simple synchronous call of the form Response = Connection.Send(Message) that blocks until the matching response has arrived.

Basically I have two threads: a listening thread that reads incoming messages from the network stream, and a processing thread that takes them from a queue and dispatches them.

So basically the code is like the following (shortened very much):

BlockingCollection<Message> _receivingQueue;
ConcurrentDictionary<MessageID, ResponseHandlerDelegate> _responseHandlers;
Dictionary<MessageID, Message> _responses;

void ListeningThread()
{
    while (true)
    {
        Message message = ReadNetworkStream();
        _receivingQueue.Add(message);
    }
}

void ProcessingThread()
{
    while (true)
    {
        Message message = _receivingQueue.Take();

        if (message is Request)
            Dispatch(message);

        if (message is ResponseForARequest)
        {
            _responses.Add(message.ID, message);
            _responseHandlers[message.ID](message);
        }
    }
}

This seems to work well.

However, as this is intended to be an "easy-to-use" library, I wanted to implement the above mentioned function: Response = Connection.Send(Message)

Basically I thought I could do this using a ManualResetEvent that signals when a new response has arrived:

Dictionary<MessageID, ManualResetEvent> _resetters;

Message Send(Message message)
{
    ManualResetEvent mre = new ManualResetEvent(false);
    _resetters.Add(message.ID, mre);

    Connection.Send(message);

    mre.WaitOne();   // block until the response handler sets the MRE

    return _responses[message.ID];
}

The MRE is set by the registered ResponseHandlerDelegate:

void ResponseArrived(Message message)
{
    // The processing thread has already stored the response in _responses,
    // so here we only wake up the Send call that is waiting for it.
    _resetters[message.ID].Set();
}

This works as long as there is no concurrent bidirectional communication. As both the server side and the client side use the same code, they can both end up waiting for the MRE to be set...
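For illustration, here is roughly what goes wrong (just a sketch; HandleRequest and BuildFollowUpRequest are placeholders, and I assume Dispatch invokes the handler directly on the processing thread, as in the pseudo-code above):

void HandleRequest(Message request)   // called via Dispatch on the processing thread
{
    Message followUp = BuildFollowUpRequest(request);   // placeholder: "I need to ask the peer something"

    // Send blocks this thread on the MRE. But this is the processing thread,
    // i.e. the only thread that takes responses out of _receivingQueue, so
    // the response is never processed and Send never returns. When both
    // peers do this at the same time, both are stuck.
    Message response = Send(followUp);
}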

Please note that everything above is some kind of pseudo-code as the real classes are too long. Please let me know if you need specific code parts.

So what is my question?

I think that what I produced is a mess. Is there an easy way to fix this? Or could someone please provide me with an idea of how I could implement this differently?

I don't want to use WCF, in order to have better control over the lower layers and - mainly - because I want to develop it myself :-)

Thanks

Upvotes: 1

Views: 117

Answers (1)

Haney

Reputation: 34832

In order to prevent blocking or deadlocks while waiting for a response on the synchronous send, you'd need a send thread PER request. Otherwise B's Send can wait for A's response while A's Send waits for B's response, and neither will ever do anything (deadlock), because the synchronous wait on the MRE prevents the thread which called your Send method from actually receiving anything.

Think of it like a state machine, with machines A and B running your library:

A --> send to B
B --> send to A
A --> wait for response
B --> wait for response
A --> receive B's request, but still waiting for B's response
B --> receive A's request, but still waiting for A's response

The result:

A --> holds B's request, but cannot stop waiting in order to process it
B --> holds A's request, but cannot stop waiting in order to process it

In short: you need more than 2 threads. You need N threads where N is the number of requests you want to do concurrently, even if all of these threads share one connection. Using a built-in or self-made thread pool would help you re-use threads and lower garbage collection costs.
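For example (a sketch only, not the asker's actual API; connection, requestA and requestB are placeholders), the caller can let the built-in thread pool carry each outstanding synchronous Send via Task.Run:

// using System.Threading.Tasks;
// Each blocking Send runs on its own pool thread, so no single thread is
// both waiting for a response and responsible for receiving one.
Task<Message> responseA = Task.Run(() => connection.Send(requestA));
Task<Message> responseB = Task.Run(() => connection.Send(requestB));

Task.WaitAll(responseA, responseB);
Message a = responseA.Result;
Message b = responseB.Result;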

It's okay to spin up a thread per request... In fact, it's encouraged. It's impossible to handle multiple requests adequately with a single thread when that single thread also waits for responses. If you weren't waiting for responses, however, this would be a different story.
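Applied to the receiving side of the library, the same rule means the processing thread should only dequeue and hand work off, never block. Here is a sketch of the question's processing loop adjusted that way (same placeholder types as in the question; in real code the shared dictionaries would need to be thread-safe, e.g. ConcurrentDictionary):

void ProcessingThread()
{
    while (true)
    {
        Message message = _receivingQueue.Take();

        if (message is Request)
        {
            // Handle the request on a pool thread; even if the handler
            // calls the blocking Send, this loop keeps delivering messages.
            Task.Run(() => Dispatch(message));
        }
        else if (message is ResponseForARequest)
        {
            _responses.Add(message.ID, message);
            // The registered handler (ResponseArrived) sets the MRE and
            // releases the Send call that is waiting for this response.
            _responseHandlers[message.ID](message);
        }
    }
}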

Upvotes: 1
