Nick

Reputation: 1424

.NET Core service stalling under load due to HttpWebRequest

I had an ASP.NET (.NET Framework 4.8) web service running on Windows Server that made lots of outgoing HTTP requests using HttpWebRequest (synchronous). It handled thousands of concurrent requests without any trouble.

Recently, I migrated the service/middleware to ASP.NET Core (runtime 3.1) running on Ubuntu Server, using the updated HttpWebRequest (synchronous).

Now this service stalls under a load test with just a few hundred concurrent requests. The system journal/logs indicate that the health check (heartbeat) cannot reach the service after a few minutes. It starts out fine, but after a few minutes it slows down and eventually halts (no response, though the dotnet process doesn't crash), then starts working again after 5-10 minutes without any intervention, and repeats this same behavior every few minutes.

I'm not sure if this is due to port exhaustion or a deadlock. If I load test the service with all HttpWebRequest calls skipped, it works fine, so I suspect HttpWebRequest is causing an issue under the stress of traffic.

Looking at the .NET Core codebase, it seems that HttpWebRequest (synchronous) creates a new HttpClient for each request (the client is not cached, due to the parameters in my case) and executes HttpClient synchronously, like:

public override WebResponse GetResponse()
{
    // ...
    return SendRequest(async: false).GetAwaiter().GetResult();
    // ...
}

private async Task<WebResponse> SendRequest(bool async)
{
    // ...
    _sendRequestTask = async ?
        client.SendAsync(...) :
        Task.FromResult(client.Send(...));

    HttpResponseMessage responseMessage = await _sendRequestTask.ConfigureAwait(false);
    // ...
}

The official suggestion from Microsoft is to use IHttpClientFactory or SocketsHttpHandler for better performance. I can make our service use a singleton SocketsHttpHandler and a new HttpClient per outgoing request (sharing the handler) so that sockets are reused and closed properly, but my main concern is this:
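For reference, the shared-handler setup I have in mind looks roughly like this (a sketch; the class name, timeouts, and limits are placeholder values, not from my actual service):

```csharp
using System;
using System.Net.Http;

public static class SharedHttp
{
    // One handler for the whole process; it owns the connection pool.
    private static readonly SocketsHttpHandler Handler = new SocketsHttpHandler
    {
        // Recycle pooled connections periodically so DNS changes are
        // picked up and idle sockets don't linger forever.
        PooledConnectionLifetime = TimeSpan.FromMinutes(2),
        PooledConnectionIdleTimeout = TimeSpan.FromMinutes(1),
        MaxConnectionsPerServer = int.MaxValue
    };

    public static HttpClient CreateClient()
    {
        // disposeHandler: false -- the handler (and its connection pool)
        // outlives each short-lived HttpClient wrapper.
        return new HttpClient(Handler, disposeHandler: false);
    }
}
```

Disposing the per-request HttpClient is then safe, because the underlying handler and its pooled connections are not disposed with it.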

The service is based on synchronous code, so I'll have to call the asynchronous HttpClient synchronously, probably using the same `.GetAwaiter().GetResult()` technique as the official .NET Core code above. While a singleton SocketsHttpHandler may help avoid port exhaustion, could concurrent synchronous execution still cause the stalling problem through deadlocks, as with the built-in HttpWebRequest?

Also, is there an approach (another synchronous HTTP client for .NET Core, setting a 'Connection: close' header, etc.) to smoothly make lots of concurrent HTTP requests synchronously, without port exhaustion or deadlocks, just as it worked with HttpWebRequest on .NET Framework 4.8?

Just to clarify: all WebRequest-related objects are closed/disposed properly in the code, ServicePointManager.DefaultConnectionLimit is set to int.MaxValue, and both nginx (the proxy in front of dotnet) and sysctl have been tuned.

Upvotes: 0

Views: 1738

Answers (1)

Stephen Cleary

Reputation: 456657

I'm not sure if this is due to port exhaustion or a deadlock.

Sounds more like thread pool exhaustion to me.

The service is based on synchronous code, so I'll have to use asynchronous HttpClient synchronously

Why?

The best solution to thread pool exhaustion is to rewrite blocking code to be asynchronous. There were places in ASP.NET pre-Core that required synchronous code (e.g., MVC action filters and child actions), but ASP.NET Core is fully asynchronous, including the middleware pipeline.
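As a sketch of what that rewrite looks like (the class, method, and URL here are made up for illustration), the blocking call becomes an awaitable one, so the thread returns to the pool while the HTTP call is in flight:

```csharp
using System.Net.Http;
using System.Threading.Tasks;

public class DownstreamCaller
{
    private readonly HttpClient _client;

    // In ASP.NET Core this would typically be injected via IHttpClientFactory.
    public DownstreamCaller(HttpClient client) => _client = client;

    // Synchronous version that blocks a thread-pool thread for the
    // whole duration of the request:
    // public string Get(string url) =>
    //     _client.GetStringAsync(url).GetAwaiter().GetResult();

    // Asynchronous version: no thread is blocked while waiting for
    // the response, so hundreds of concurrent calls don't exhaust
    // the thread pool.
    public Task<string> GetAsync(string url) => _client.GetStringAsync(url);
}
```

Because the ASP.NET Core pipeline is asynchronous end to end, the `await` can flow all the way up from this call to the controller action or middleware.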

If you absolutely cannot make the code properly asynchronous for some reason, the only other workaround is to increase the minimum number of threads in the thread pool on startup.
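For example (the numbers here are arbitrary and should be tuned to your actual load):

```csharp
using System;
using System.Threading;

public static class Program
{
    public static void Main()
    {
        // Raise the thread pool's floor so a burst of blocking calls
        // doesn't have to wait on the pool's slow thread-injection rate
        // once the minimum is exceeded.
        ThreadPool.GetMinThreads(out int worker, out int io);
        ThreadPool.SetMinThreads(Math.Max(worker, 500), Math.Max(io, 500));
    }
}
```

This doesn't fix the underlying blocking; it just buys headroom, at the cost of more memory and context switching.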

Upvotes: 1
