Daniel Gabriel

Reputation: 21

Trying to resolve scalability issues by gating simultaneous requests

I have a legacy application that needs to deal with scalability issues. It is a WCF service that listens for requests from a back-end system and does some computation/processing of data based on those requests. The computations are not CPU intensive, although in some cases they call third-party library APIs (which in turn call SOAP-based web services). There are no database calls involved. Here is how the service is set up.

[ServiceContract]
public interface IMyService
{
    [OperationContract]
    ProcessingResult ProcessData(int dataId, string dataDescription);
}


[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Multiple, IncludeExceptionDetailInFaults = true)]
public class MyService : IMyService
{

    public ProcessingResult ProcessData(int dataId, string dataDescription)
    {
         // Some data processing using third party library APIs and return ProcessingResult instance.

    }
}

public class ProcessingResult
{
    public int Code;
    public string Message;
}

The issue is that under load there are so many calls to the ProcessData method that they overload the system. In those scenarios I have seen hundreds of threads running at a time. They do eventually complete the task, but the system slows down drastically. To deal with this, I am considering adding some sort of gating mechanism: something like delegating the work to another class that queues the processing calls to a ThreadPool, with a threshold on the maximum number of requests processed simultaneously. The problem I see with this approach is that ProcessData still needs to wait for the work queued in the thread pool to finish before returning a ProcessingResult instance.

Googling seems to suggest that async-await could be a nice pattern; however, I am a little restricted, as this application isn't using the latest and greatest .NET versions, and migrating to a newer .NET version could be a big ask at this point. Any suggestions on how I can use the ThreadPool.QueueUserWorkItem mechanism in this class, with the ability to wait for a particular item to finish?
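For reference, the mechanism I have in mind would look roughly like this (a sketch only; GatedProcessor, the concurrency limit, and the work delegate are all illustrative, and it targets .NET 2.0+ primitives):

```csharp
// A Semaphore caps how many work items run at once; the caller blocks on a
// ManualResetEvent until its own item has completed.
public class GatedProcessor
{
    private readonly Semaphore _gate;

    public GatedProcessor(int maxConcurrency)
    {
        _gate = new Semaphore(maxConcurrency, maxConcurrency);
    }

    public ProcessingResult Process(Func<ProcessingResult> work)
    {
        ProcessingResult result = null;
        using (var done = new ManualResetEvent(false))
        {
            ThreadPool.QueueUserWorkItem(_ =>
            {
                _gate.WaitOne();            // wait for a free slot
                try { result = work(); }
                finally
                {
                    _gate.Release();
                    done.Set();             // unblock the waiting caller
                }
            });
            done.WaitOne();                 // block until this item finishes
        }
        return result;
    }
}
```

Note the obvious limitation: the calling (WCF) thread still blocks until the item completes, and queued items tie up pool threads while waiting at the gate, so this caps concurrent work but not the number of blocked threads.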

Upvotes: 2

Views: 64

Answers (2)

hatcyl

Reputation: 2352

If it really is just too many threads waiting on IO, I can think of two solutions:

async / await WILL help. It will keep the number of threads down. On .NET 4.0 you should be able to get compiler support for it via the Microsoft.Bcl.Async NuGet package.

or

You can simply add another server behind a load balancer and split the requests between the two servers.

Upvotes: 0

usr

Reputation: 171226

If you need to handle hundreds of simultaneous requests then non-blocking IO is important. It removes the need for so many threads. In .NET 3.5 there are no convenient options to do this. If you can make it to 4.0 the story becomes 10x better.

What I would do in 4.0+ is:

static SemaphoreSlim sem = new SemaphoreSlim(100); // max 100 concurrent requests

async Task<ProcessingResult> ProcessData(...) {
    await sem.WaitAsync(); // SemaphoreSlim has no WaitOne; WaitAsync avoids blocking a thread
    try {
        return await ProcessRequestAsync(); // or, do the work synchronously here if convenient
    } finally {
        sem.Release(); // release in finally so a faulted request does not leak a slot
    }
}

On pre-4.0 you need to find a non-blocking way to do the throttling that the semaphore provides. That will likely involve using IAsyncResult.

Or, you find a way in WCF to throttle the max number of concurrent requests. That would solve the problem, too.
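WCF's built-in throttling can do this declaratively. A sketch of the relevant app.config fragment (the values are illustrative, not recommendations):

```xml
<system.serviceModel>
  <behaviors>
    <serviceBehaviors>
      <behavior>
        <!-- Caps how many calls WCF dispatches concurrently;
             excess requests queue instead of spawning more work. -->
        <serviceThrottling maxConcurrentCalls="100"
                           maxConcurrentSessions="100"
                           maxConcurrentInstances="100" />
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>
```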

Even if you offload to background threads, this helps you little: you need to expose an IAsyncResult to WCF so that the WCF thread can be unblocked.

there are so many calls of ProcessData method that overloads the system. In these scenarios, I have seen hundreds of threads running at a time.

What is causing a problem here? Too many threads or too many concurrent operations overloading the backend? If it is the former, then a quick and totally valid fix is to increase the thread-pool limits. 1000 threads do not cause issues in my testing. If the backend is overloaded, unlimit the thread-pool as well and use a synchronous semaphore to limit the number of concurrent backend calls.
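The synchronous-semaphore option could look like this (a sketch; the limit of 100 and the CallBackend helper are illustrative, and Semaphore works on older .NET versions too):

```csharp
// Caps concurrent backend calls without limiting thread count.
// Blocks the calling thread while waiting, which is acceptable once
// the thread-pool limits have been raised.
static readonly Semaphore BackendGate = new Semaphore(100, 100);

ProcessingResult ProcessData(int dataId, string dataDescription)
{
    BackendGate.WaitOne();       // synchronous wait for a free slot
    try
    {
        // the actual third-party/backend call goes here
        return CallBackend(dataId, dataDescription);
    }
    finally
    {
        BackendGate.Release();
    }
}
```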

So here are some alternatives to pick from. The best one would be async IO and waiting but that is harder to pull off.

Upvotes: 1
