Reputation: 3491
My setup:
postgresql.conf: max_connections = 100
pgbouncer config:
[databases]
markets_parser = host=localhost port=5432 dbname=markets_parser user=argon datestyle=ISO connect_query='SELECT 1'
[pgbouncer]
logfile = /var/log/postgresql/pgbouncer.log
listen_addr = 127.0.0.1
listen_port = 6432
unix_socket_dir = /var/run/postgresql
auth_type = trust
auth_file = /etc/pgbouncer/userlist.txt
admin_users = postgres
stats_users = postgres
pool_mode = session
server_reset_query = DISCARD ALL
max_client_conn = 90
default_pool_size = 70
min_pool_size = 20
reserve_pool_size = 10
I use it from a Django application (not just the site itself, but also a high-load multithreaded parsing mechanism) via a number of Django management commands, each running as a separate process. It works fine when I run only one command at a time (though almost always not during a sudden burst of activity), but it raises a lot of 'not enough connections' errors when I run two commands at once.
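As a rough sanity check of the limits in the config above (assuming that in session pool mode each connected client pins one server connection for its whole session, and that the per-command connection counts below are purely illustrative), the arithmetic looks like this:

```python
# Connection-limit arithmetic for the pgbouncer config in the question.
# The limit values come from the question; conns_per_command is an
# illustrative assumption, not a measured number.

max_connections = 100      # postgresql.conf: backend connection limit
max_client_conn = 90       # pgbouncer: total client connections allowed
default_pool_size = 70     # pgbouncer: server connections per user/db pair
reserve_pool_size = 10     # pgbouncer: extra server connections under load

# Server side: pgbouncer can open at most pool + reserve backend
# connections for this pool, which fits under max_connections.
max_server_conns = default_pool_size + reserve_pool_size
assert max_server_conns <= max_connections  # 80 <= 100, fine

# Client side: if each multithreaded management command held, say,
# 50 client connections, two commands running at once would exceed
# the max_client_conn limit of 90 and new clients would be refused.
conns_per_command = 50
print(2 * conns_per_command > max_client_conn)  # → True
```

This suggests the bottleneck is the client-connection limit on pgbouncer itself, not the PostgreSQL backend limit.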
What to do?
Upvotes: 0
Views: 1437
Reputation: 8125
You can safely increase max_client_conn to a higher number. That is the total number of clients that can connect to pgbouncer, not to the PostgreSQL server. It can be as high as you want, for example 2000 or even higher.
That is indeed one of the main purposes of a connection pooler: to hold a small number of connections open to the database while allowing a high number of end-client connections to the pooler itself.
The important thing is to set default_pool_size no higher than the max_connections setting in the postgresql.conf file.
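Applied to the config in the question, the [pgbouncer] section might be adjusted along these lines (the max_client_conn value of 2000 is illustrative, as suggested above, not a tuned recommendation):

```ini
[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
pool_mode = session
; Client connections to pgbouncer: may far exceed max_connections.
max_client_conn = 2000
; Server connections to PostgreSQL: keep pool + reserve at or below
; max_connections (100 in this setup); 70 + 10 = 80 fits.
default_pool_size = 70
min_pool_size = 20
reserve_pool_size = 10
```

The other settings (logfile, auth_type, and so on) stay as they were; only the client limit changes.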
Upvotes: 2