Maciej Pszczolinski

Reputation: 1968

How DbContext and connections to DB should be implemented to handle load testing?

I have recently started doing some load testing. I am using .NET Core, EF Core 2.2 and Postgres.

I am calling my test server with HttpClient (client.PostAsync("/api/resource");).

I found that a few of my requests fail because of too many connections (The connection pool has been exhausted, either raise MaxPoolSize (currently 100) or Timeout (currently 15 seconds)). So I implemented the following:

services.AddDbContext<MyDbContext>(optionsBuilder =>
    optionsBuilder.UseNpgsql(Config.ConnectionString,
        npgsqlOptions =>
        {
            npgsqlOptions.EnableRetryOnFailure(5, TimeSpan.FromSeconds(10), null);
        }));

It helped a bit, but with e.g. 500 concurrent requests I still got errors.

I would expect it to work so that if the system can handle e.g. 100 requests at the same time (e.g. 100 connections to the DB), the remaining requests (which use MyDbContext) wait rather than fail.

So my questions are:

  1. Is it normal that under high load, requests fail because the maximum number of concurrent connections is reached?
  2. If so, is the normal approach to use EnableRetryOnFailure?
  3. Is it possible to make other requests wait in a non-blocking way (e.g. up to 20 seconds) until a connection becomes available?
  4. Or is there another configuration option (in EF Core or in MVC) to make requests wait for an available connection (from the connection pool) instead of failing with an exception?

Upvotes: 1

Views: 1950

Answers (1)

Laurenz Albe

Reputation: 246473

There is a bug in your application that keeps it from returning connections to the pool after the transaction is done (a so-called “connection leak”).
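A typical leak of this kind, sketched as a hypothetical EF Core example (the entity name, field name and options field are made up, not taken from the question's code), is creating a context outside the DI scope and never disposing it:

```csharp
// Hypothetical leak: a context created manually and never disposed
// keeps its connection checked out of the pool until the GC finalizes it.
public async Task<List<Order>> GetOrdersLeaky()
{
    var db = new MyDbContext(_options);          // never disposed!
    return await db.Orders.ToListAsync();
}

// Fixed: disposing the context releases the connection back to the pool
// as soon as the query is done.
public async Task<List<Order>> GetOrders()
{
    await using var db = new MyDbContext(_options);
    return await db.Orders.ToListAsync();
}
```

When the context is injected via services.AddDbContext (as in the question), the DI container disposes it at the end of the request, so a leak usually comes from contexts or raw connections created by hand.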

100 connections is way too big for a connection pool (unless the machine has 100 cores and local SSDs).
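By way of illustration (not part of the answer): with Npgsql, the pool size and the wait timeout are controlled through the connection string, and when the pool is exhausted a caller already waits up to "Timeout" seconds for a free connection before the exception is thrown. A smaller pool that lets callers queue longer might look like this (the host, credentials and values are assumptions, not recommendations):

```csharp
// Hypothetical connection string. "Maximum Pool Size" caps concurrent
// connections; "Timeout" is how long a caller waits for a pooled
// connection before Npgsql throws -- so requests queue, then fail.
var connectionString =
    "Host=localhost;Database=mydb;Username=me;Password=secret;" +
    "Maximum Pool Size=20;" +   // far fewer than the default 100
    "Timeout=30";               // wait up to 30 s for a free connection

services.AddDbContext<MyDbContext>(options =>
    options.UseNpgsql(connectionString));
```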

The idea is that when you want to perform a database transaction, you get a connection from the pool, do your work, commit the transaction and return the connection to the pool. Normally the time an OLTP transaction takes inside the database is so short that you can easily handle thousands of concurrent user sessions that way.
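The pattern described above, sketched in EF Core terms (the entities and fields are illustrative): keep the unit of work short, so the connection is held only for the duration of the transaction, not the whole request.

```csharp
// Illustrative sketch of a short OLTP transaction. The connection is
// rented from the pool when the transaction begins and returned when
// the transaction and context are disposed -- milliseconds, not seconds.
public async Task TransferAsync(MyDbContext db, int fromId, int toId, decimal amount)
{
    await using var tx = await db.Database.BeginTransactionAsync();

    var from = await db.Accounts.FindAsync(fromId);
    var to   = await db.Accounts.FindAsync(toId);
    from.Balance -= amount;
    to.Balance   += amount;

    await db.SaveChangesAsync();
    await tx.CommitAsync();
    // Disposal releases the connection back to the pool here.
}
```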

This all hinges on the assumption that your transactions are short. If they are not, you may get a problem with concurrency, because locks are held until a transaction ends. Moreover, long transactions can keep autovacuum from doing its life-saving work.

Upvotes: 5
