Reputation: 133
I am having issues across several environments (i.e. different Azure Redis instances), similar to this post: ServiceStack.Redis: Unable to Connect: sPort:
But of course we cannot move or change redis servers since we are dependent on Azure Redis. If it is a latency issue we might be screwed...
We were using an older version of SS (4.0.42.0) and have since updated to the latest (4.0.56.0), but we see the same intermittent problems.
Here is some background:
- The issue only shows up after at least 2K requests (sometimes more, sometimes fewer). Yes, we are using the latest SS license.
- It is very intermittent: most requests succeed, but the ones that fail usually fail in small bunches (1-5 or so), then the issue disappears for a while.
- I have tried RedisManagerPool and PooledRedisClientManager with the same results (a rough sketch of our registration is below).
- I have pulled a client stats report for every request and made sure the pool contains ample clients, none are in error, etc. Rarely do I see more than 2-3 clients in use at a time out of 40.
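For reference, here is roughly how we register the client manager against the Azure Redis SSL endpoint. This is a minimal sketch: the cache name and password are placeholders, and the host/assembly names are illustrative, not our exact code.

using Funq;
using ServiceStack;
using ServiceStack.Caching;
using ServiceStack.Redis;

public class AppHost : AppHostBase
{
    public AppHost() : base("CCC API", typeof(AppHost).Assembly) {}

    public override void Configure(Container container)
    {
        // Azure Redis exposes SSL on port 6380; password and cache name below are placeholders.
        var connectionString = "{password}@{cache-name}.redis.cache.windows.net:6380?ssl=true";

        // We have tried both RedisManagerPool and PooledRedisClientManager with the same results.
        container.Register<IRedisClientsManager>(c => new RedisManagerPool(connectionString));

        // Sessions and caching resolve through the same pool.
        container.Register<ICacheClient>(c => c.Resolve<IRedisClientsManager>().GetCacheClient());
    }
}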
Different exceptions we see:
- An IOException with message "Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host" and a stack trace that goes through RedisClient. Here is the full error dump:
"exception": {
"message": "Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host.",
"source": "System",
"targetSite": "Int32 Read(Byte[], Int32, Int32)",
"stackTrace": " at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)\r\n at System.Net.FixedSizeReader.ReadPacket(Byte[] buffer, Int32 offset, Int32 count)\r\n at System.Net.Security._SslStream.StartFrameHeader(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)\r\n at System.Net.Security._SslStream.StartReading(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)\r\n at System.Net.Security._SslStream.ProcessRead(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)\r\n at System.Net.Security.SslStream.Read(Byte[] buffer, Int32 offset, Int32 count)\r\n at System.IO.BufferedStream.ReadByte()\r\n at ServiceStack.Redis.RedisNativeClient.ReadLine()\r\n at ServiceStack.Redis.RedisNativeClient.ReadData()\r\n at ServiceStack.Redis.RedisClient.<>c__DisplayClass1c
1.b__1b(RedisClient r)\r\n at ServiceStack.Redis.RedisClient.Exec[T](Func2 action)\r\n at ServiceStack.Redis.RedisClientManagerCacheClient.Get[T](String key)\r\n at API.ServiceInterface.RequestExtensions.GetUserSession(IRequest req, Boolean createIfNotExists) in F:\\src\\CCCAPI CD (DevLab)\\ServiceInterface\\Extensions\\RequestExtensions.cs:line 26\r\n at API.WebHost.AuthImpl.HandleBlacklistedUserSessions(IRequest req, IResponse httpResponse) in F:\\src\\CCCAPI CD (DevLab)\\WebHost\\Authentication\\AuthImpl.cs:line 30\r\n at ServiceStack.ServiceStackHost.ApplyPreRequestFilters(IRequest httpReq, IResponse httpRes)\r\n at ServiceStack.Host.RestHandler.ProcessRequestAsync(IRequest httpReq, IResponse httpRes, String operationName)",
"type": "IOException",
"innerException": {
"message": "An existing connection was forcibly closed by the remote host",
"source": "System",
"targetSite": "Int32 Read(Byte[], Int32, Int32)",
"stackTrace": " at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)",
"type": "SocketException"
}
}
- The other exception we see is a RedisException from ServiceStack.Redis with message "Unable to Connect: sPort: 50447" (the interesting thing here is that the port changes and is never the real Azure Redis SSL port that should be used; it seems like the pool manager might not be passing the correct one to this client?). Here is the full dump:
"exception": {
"message": "Unable to Connect: sPort: 50447",
"source": "ServiceStack.Redis",
"targetSite": "ServiceStack.Redis.RedisException CreateConnectionError()",
"stackTrace": " at ServiceStack.Redis.RedisNativeClient.CreateConnectionError()\r\n at ServiceStack.Redis.RedisNativeClient.SendExpectData(Byte[][] cmdWithBinaryArgs)\r\n at ServiceStack.Redis.RedisClient.<>c__DisplayClass1c
1.b__1b(RedisClient r)\r\n at ServiceStack.Redis.RedisClient.Exec[T](Func2 action)\r\n at ServiceStack.Redis.RedisClientManagerCacheClient.Get[T](String key)\r\n at API.ServiceInterface.RequestExtensions.GetUserSession(IRequest req, Boolean createIfNotExists) in F:\\src\\CCCAPI CD (DevLab)\\ServiceInterface\\Extensions\\RequestExtensions.cs:line 26\r\n at API.WebHost.AuthImpl.HandleBlacklistedUserSessions(IRequest req, IResponse httpResponse) in F:\\src\\CCCAPI CD (DevLab)\\WebHost\\Authentication\\AuthImpl.cs:line 30\r\n at ServiceStack.ServiceStackHost.ApplyPreRequestFilters(IRequest httpReq, IResponse httpRes)\r\n at ServiceStack.Host.RestHandler.ProcessRequestAsync(IRequest httpReq, IResponse httpRes, String operationName)",
"type": "RedisException",
"innerException": {
"message": "An existing connection was forcibly closed by the remote host",
"source": "System",
"targetSite": "Void Write(Byte[], Int32, Int32)",
"stackTrace": " at System.Net.Sockets.NetworkStream.Write(Byte[] buffer, Int32 offset, Int32 size)",
"type": "SocketException"
}
}
I'm struggling with this one... any help would be appreciated.
Upvotes: 1
Views: 1860
Reputation: 143399
An existing connection was forcibly closed by the remote host
This is a general TCP network error indicating your connection was killed by the remote redis instance or by faulty network hardware. There's nothing the client can do to prevent it, but its effects should be mitigated by the ServiceStack.Redis Automatic Retries feature.
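If the defaults aren't covering the window in which these drops occur, you can also widen the retry window before the client manager is created. A minimal sketch, assuming the RedisConfig settings described in the Automatic Retries docs (the value below is illustrative):

using ServiceStack.Redis;

// Allow transient failures to be retried for up to 30 seconds (value in milliseconds).
// This must be set before the Redis Client Manager is created.
RedisConfig.DefaultRetryTimeout = 30 * 1000;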
Unable to Connect: sPort: 50447
The sPort (source) refers to the clientPort, i.e. the TCP port chosen randomly on the client for establishing the TCP connection; it doesn't refer to the server's (destination) port, which is specified in the connection string.
The error is an indication that the Redis Client is trying to establish a new TCP connection but has been refused. There's nothing the client can do but keep retrying.
Given the issue appears more frequently under load, it may be a result of the server being oversaturated, in which case you can try increasing the size of the Azure Redis Cache you're using.
I've noticed these intermittent issues seem to happen a lot more on Azure than anywhere else (it's not clear whether that's due to popularity or unreliability); redis is normally rock solid in its natural environment, i.e. running on Linux and accessed from the same subnet. Another solution you can try is running a redis server on a Linux VM in the same datacenter it's accessed from - this may bypass any throttling or other limits the managed Azure Redis Service may be adding.
Upvotes: 2