Reputation: 364
I have a Service Fabric setup with a microservice that listens on port 443 and uses HTTPS. My cluster also includes the reverse proxy, which is secured with a certificate.
I also use the same certificate when starting up my microservice:
return new ServiceInstanceListener[]
{
    new ServiceInstanceListener(serviceContext =>
        new KestrelCommunicationListener(serviceContext, "EndpointHttps", (url, listener) =>
        {
            ServiceEventSource.Current.ServiceMessage(serviceContext, $"Starting Kestrel on {url}");
            return new WebHostBuilder()
                .UseKestrel(x =>
                {
                    // Bind to the port declared for EndpointHttps in ServiceManifest.xml
                    int port = serviceContext.CodePackageActivationContext.GetEndpoint("EndpointHttps").Port;
                    x.Listen(IPAddress.IPv6Any, port, listenOptions =>
                    {
                        // Same certificate that secures the reverse proxy
                        listenOptions.UseHttps(transportCertificate);
                        listenOptions.NoDelay = true;
                    });
                })
                .ConfigureServices(
                    services => services
                        .AddSingleton<StatelessServiceContext>(serviceContext))
                .UseContentRoot(Directory.GetCurrentDirectory())
                .UseEnvironment(environment)
                .UseStartup<Startup>()
                .UseServiceFabricIntegration(listener, ServiceFabricIntegrationOptions.None)
                .UseUrls(url)
                .UseSerilog(Logger.Serilog)
                .Build();
        }))
};
Everything seems to work fine: the site shows as secure in the browser and the API works. However, my log is filling up with the following statements (they are only debug messages, but there are a lot of them):
133649 Failed to authenticate HTTPS connection. Debug System.IO.IOException: Authentication failed because the remote party has closed the transport stream.
133643 Connection id "0HLPCETMP8DKM" started. SourceContext: Microsoft.AspNetCore.Server.Kestrel
133642 Connection id "0HLPCETMP8DKL" received FIN. SourceContext: Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets
133641 Connection id "0HLPCETLEPLQ7" stopped. SourceContext: Microsoft.AspNetCore.Server.Kestrel
133640 Connection id "0HLPCETLEPLQ7" sending FIN. SourceContext: Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets
The exception is:
System.IO.IOException: Authentication failed because the remote party has closed the transport stream.
at System.Net.Security.SslState.StartReadFrame(Byte[] buffer, Int32 readBytes, AsyncProtocolRequest asyncRequest)
at System.Net.Security.SslState.PartialFrameCallback(AsyncProtocolRequest asyncRequest)
--- End of stack trace from previous location where exception was thrown ---
at System.Net.Security.SslState.ThrowIfExceptional()
at System.Net.Security.SslState.InternalEndProcessAuthentication(LazyAsyncResult lazyResult)
at System.Net.Security.SslState.EndProcessAuthentication(IAsyncResult result)
at System.Net.Security.SslStream.EndAuthenticateAsServer(IAsyncResult asyncResult)
at System.Net.Security.SslStream.<>c.<AuthenticateAsServerAsync>b__51_1(IAsyncResult iar)
at System.Threading.Tasks.TaskFactory`1.FromAsyncCoreLogic(IAsyncResult iar, Func`2 endFunction, Action`1 endAction, Task`1 promise, Boolean requiresSynchronization)
--- End of stack trace from previous location where exception was thrown ---
at Microsoft.AspNetCore.Server.Kestrel.Https.Internal.HttpsConnectionAdapter.InnerOnConnectionAsync(ConnectionAdapterContext context)
Does anyone have any idea why this happens? Are there any other logs I can look at, or suggestions on how to debug this?
Thanks.
Upvotes: 2
Views: 1204
Reputation: 1652
This is very likely because you're running Service Fabric with the standard template, which sets up an Azure Load Balancer with a health probe for each of the ports you've configured. The standard probe is plain TCP and hits the endpoint every 5 seconds: it opens a connection and closes it again without ever completing a TLS handshake, which is exactly what Kestrel logs as a failed HTTPS authentication. You can remove the probe if that's a viable option; I personally use a less aggressive probe interval.
Note that changes to the load balancer rules generally take a while to apply, anywhere from a few minutes to nearly half an hour.
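If changing the probe isn't an option, you can also keep these particular messages out of your log with a Serilog minimum-level override for Kestrel's source contexts, since you're already calling UseSerilog. A minimal sketch, assuming you build the logger in code (the way your Logger.Serilog wrapper presumably does; the Console sink is just a stand-in for whatever sinks you actually use):

// Raise the minimum level for Kestrel's namespaces so the probe's half-open
// connections no longer flood the log. The single override below also covers
// child contexts such as Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets,
// because Serilog matches overrides by source-context prefix.
using Serilog;
using Serilog.Events;

Log.Logger = new LoggerConfiguration()
    .MinimumLevel.Debug()
    .MinimumLevel.Override("Microsoft.AspNetCore.Server.Kestrel", LogEventLevel.Warning)
    .WriteTo.Console() // replace with your existing sinks
    .CreateLogger();

Bear in mind this only hides the messages; the probe still opens and drops a connection every few seconds, so adjusting the probe itself is the cleaner fix.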
Upvotes: 1