Reputation: 1728
We have an issue where, during load testing, if we fire calls rapidly at one of our services we get the error:
"System.ServiceModel.ServerTooBusyException: The request to create a reliable session has been refused by the RM Destination. Server 'net.tcp://localhost:10511/ParameterMonitorService' is too busy to process this request. Try again later. The channel could not be opened."
We increased maxPendingChannels from its default of 4 to 128 and then beyond, and the error has disappeared. However, rather than throwing the exception, the service now just stops processing messages under load and then begins again several minutes later.
It does not seem to drop anything; it just hangs for a while. The more we pound the service, the longer this recovery seems to take.
The service is configured as Per-Call with ConcurrencyMode Multiple. Other behavior settings are:
<serviceThrottling maxConcurrentCalls="100" maxConcurrentSessions="100" maxConcurrentInstances="100"/>
<customBinding>
  <binding name="Services_Custom_Binding" openTimeout="00:00:20" sendTimeout="00:01:00">
    <reliableSession ordered="true" inactivityTimeout="00:10:00"
                     maxPendingChannels="128" flowControlEnabled="true" />
    <binaryMessageEncoding>
      <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384"
                    maxBytesPerRead="4096" maxNameTableCharCount="16384" />
    </binaryMessageEncoding>
    <tcpTransport maxPendingConnections="100" listenBacklog="100" />
  </binding>
</customBinding>
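For completeness, the service class is decorated roughly like this (the contract name below is illustrative):
Imports System.ServiceModel

<ServiceBehavior(InstanceContextMode:=InstanceContextMode.PerCall, ConcurrencyMode:=ConcurrencyMode.Multiple)>
Public Class ParameterMonitorService
    Implements IParameterMonitorService ' contract name illustrative
    ' ... operation implementations ...
End Class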
We are kind of stuck. Any help appreciated!
Upvotes: 3
Views: 4677
Reputation: 6368
By default the thread pool keeps only a small minimum of threads ready (8 in our case) and after those are busy it injects new threads at a rate of only about two per second. When you fire up a raft of workers at the same time, WCF balks because the threads don't start quickly enough.
This is the solution that works nicely for me: call AdjustThreads whenever you're going to fire up lots of threads:
Imports NLog

Public Module AdjustThreads_
    Private _Logger As Logger = LogManager.GetCurrentClassLogger()
    Private _MaxWorkers As Integer = 16
    Private _MaxCompletions As Integer = 16

    Public Sub AdjustThreads()
        Dim minworkerthreads As Integer = 0
        Dim maxworkerthreads As Integer = 0
        Dim mincompletionthreads As Integer = 0
        Dim maxcompletionthreads As Integer = 0
        Dim availableworkerthreads As Integer = 0
        Dim availablecompletionthreads As Integer = 0

        Threading.ThreadPool.GetMinThreads(minworkerthreads, mincompletionthreads)
        Threading.ThreadPool.GetMaxThreads(maxworkerthreads, maxcompletionthreads)
        Threading.ThreadPool.GetAvailableThreads(availableworkerthreads, availablecompletionthreads)

        ' Threads currently in use = maximum minus currently available.
        Dim workers As Integer = maxworkerthreads - availableworkerthreads
        Dim completions As Integer = maxcompletionthreads - availablecompletionthreads

        ' Track the peak number of busy threads seen so far.
        If workers > _MaxWorkers Then
            _MaxWorkers = workers
        End If
        If completions > _MaxCompletions Then
            _MaxCompletions = completions
        End If

        ' If the minimum is left at its default, new threads only start about twice a second.
        ' So keep the minimum at 16 or more, and always 50% above the peak usage seen so far.
        Dim needworkers As Integer = _MaxWorkers * 3 \ 2
        Dim needcompletions As Integer = _MaxCompletions * 3 \ 2

        If needworkers > minworkerthreads OrElse
           needcompletions > mincompletionthreads Then
            _Logger.Info("Threadpool increasing workers to {0}, completions to {1}",
                         needworkers, needcompletions)
            Threading.ThreadPool.SetMinThreads(needworkers, needcompletions)
        End If
    End Sub
End Module
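For example (a sketch only, assuming a self-hosted service; the host startup shown here is illustrative), call it just before opening the host, and do the same on the client side before kicking off the load test:
Imports System.ServiceModel

Module HostStartup
    Sub Main()
        ' Sketch: raise the thread pool minimums before the burst of
        ' session creation hits the service.
        AdjustThreads()

        ' ParameterMonitorService is the service type from the question (illustrative here).
        Dim host As New ServiceHost(GetType(ParameterMonitorService))
        host.Open()
        Console.WriteLine("Service listening. Press Enter to stop.")
        Console.ReadLine()
        host.Close()
    End Sub
End Module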
Upvotes: 0
Reputation: 12135
This is a classic performance tuning story. By reconfiguring the throttle on reliable sessions you have removed what used to be the bottleneck in the system, and have moved the bottleneck to somewhere else in your system.
You really can't expect people to pluck a diagnosis of where the bottleneck now lies out of thin air, without any details of how your service is hosted, on what hardware, what it is doing, or how it goes about doing it. You need to instrument your system as comprehensively as you can, using Windows Performance Monitor counters, and interpret these to get an idea of where resource contention is now happening in the system.
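For the WCF side of that, the built-in ServiceModel performance counters can be switched on in the service's configuration, for example (a sketch; "ServiceOnly" is cheaper if the full set is too heavy):
<configuration>
  <system.serviceModel>
    <!-- Publishes the ServiceModelService / ServiceModelEndpoint /
         ServiceModelOperation counter categories for Performance Monitor. -->
    <diagnostics performanceCounters="All" />
  </system.serviceModel>
</configuration>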
My first guess would be that the increased concurrency after removing the session throttle is causing contention for managed thread pool threads, but this is only a guess - really you want to base diagnosis on evidence, not guesswork.
Upvotes: 2