Reputation: 1968
I have recently started implementing some load testing. I am using .NET Core, EF Core 2.2 and Postgres.
I am calling my test server with HttpClient (client.PostAsync("/api/resource")).
I found that a few of my requests fail because of too many connections ("The connection pool has been exhausted, either raise MaxPoolSize (currently 100) or Timeout (currently 15 seconds)"). So I implemented the following approach:
services.AddDbContext<MyDbContext>(optionsAction: optionsBuilder =>
    optionsBuilder.UseNpgsql(Config.ConnectionString,
        npgsqlOptions =>
        {
            // retry up to 5 times, waiting at most 10 seconds between attempts
            npgsqlOptions.EnableRetryOnFailure(5, TimeSpan.FromSeconds(10), null);
        }));
It helped a bit, but when calling with e.g. 500 requests I still got errors.
I would expect it to work in such a way that if the system can handle e.g. 100 requests at the same time (e.g. 100 connections to the DB), the remaining requests (which use MyDbContext) wait rather than fail.
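(For reference, the limits mentioned in the error correspond to keywords in the Npgsql connection string; the sketch below just shows their defaults, with placeholder host/database/user values rather than my real ones:)

// Placeholder connection string – only the pool-related keywords matter here.
// "Maximum Pool Size" (default 100) and "Timeout" (default 15 seconds) are the
// two values the exception message refers to.
var connectionString =
    "Host=localhost;Database=mydb;Username=myuser;Password=secret;" +
    "Maximum Pool Size=100;Timeout=15";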
So my question is: is EnableRetryOnFailure the right way to achieve this?
Upvotes: 1
Views: 1950
Reputation: 246473
There is a bug in your application that prevents it from returning connections to the pool once the transaction is done (a so-called “connection leak”).
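As a sketch (the methods below are hypothetical, not taken from your code, and use plain Npgsql rather than EF Core), a leak typically looks like a connection that is opened but never disposed, whereas a using block returns it to the pool as soon as the work is done:

using Npgsql;

// Leaky: the connection is never disposed, so it stays checked out of the pool.
// Under load these checked-out connections pile up until the pool is exhausted.
void LeakyQuery(string connectionString)
{
    var conn = new NpgsqlConnection(connectionString);
    conn.Open();
    // ... run queries ...
}

// Fixed: Dispose() at the end of the using block returns the connection to the pool.
void PooledQuery(string connectionString)
{
    using (var conn = new NpgsqlConnection(connectionString))
    {
        conn.Open();
        // ... run queries ...
    }
}

With AddDbContext the context (and its connection) is normally disposed at the end of the request scope, so a leak usually comes from contexts or connections created by hand, or from requests that keep the context busy for a long time.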
100 connections is way too big for a connection pool (unless the machine has 100 cores and local SSDs).
The idea is that when you want to perform a database transaction, you get a connection from the pool, do your work, commit the transaction and return the connection to the pool. Normally the time an OLTP transaction takes inside the database is so short that you can easily handle thousands of concurrent user sessions that way.
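A minimal sketch of that cycle (connection string, table and values are placeholders):

using (var conn = new NpgsqlConnection(connectionString))
{
    conn.Open();                      // borrow a connection from the pool
    using (var tx = conn.BeginTransaction())
    using (var cmd = new NpgsqlCommand(
        "INSERT INTO resource (name) VALUES (@name)", conn, tx))
    {
        cmd.Parameters.AddWithValue("name", "example");
        cmd.ExecuteNonQuery();        // keep the transactional work short
        tx.Commit();                  // commit promptly so locks are released
    }
}                                     // Dispose() hands the connection back to the pool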
This all hinges on the assumption that your transactions are short. If they are not, you may get a problem with concurrency, because locks are held until a transaction ends. Moreover, long transactions can keep autovacuum from doing its life-saving work.
Upvotes: 5