Reputation: 2694
What is the harm in increasing max connections? Even with the current setup, with replication in place, our master server complains "too many clients already".
Is it safe to increase max connections to 400 in postgresql.conf, or is there something wrong with my servers in that they are consuming too many connections? If we get more customers, how will we scale?
Upvotes: 3
Views: 3204
Reputation: 324771
Increasing max_connections has costs: more shared memory is used for connection tracking, and each connection that runs a query can allocate up to work_mem (potentially several times over, once per sort or hash operation), so more connections mean higher peak work_mem use and thus memory pressure. Additionally, if those connections are actively running queries, not idle, the greater contention for system resources like CPU and memory can lead to lower overall throughput.
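As a rough illustration (the figures below are hypothetical, not taken from the question), peak query memory scales with the connection count, because work_mem is a per-operation limit, not a per-server one:

```python
# Back-of-envelope estimate of peak query memory (hypothetical numbers).
# work_mem caps each sort or hash operation individually, so every
# actively-running connection multiplies the worst case.

def peak_query_memory_mb(active_connections, work_mem_mb, ops_per_query=2):
    """Worst-case work_mem demand across all active connections,
    assuming ops_per_query sorts/hashes per running query."""
    return active_connections * work_mem_mb * ops_per_query

# 100 vs 400 active connections at work_mem = 4MB (PostgreSQL's default):
print(peak_query_memory_mb(100, 4))  # 800 (MB)
print(peak_query_memory_mb(400, 4))  # 3200 (MB)
```

The 4x jump in connections translates directly into a 4x jump in worst-case memory demand, which is why raising max_connections alone is not free.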
In general you should try to have fewer connections and queue up work. A proxying connection pool like PgBouncer is good for this. Unfortunately PostgreSQL itself lacks built-in connection pooling, so a third-party pooler is required if the application framework lacks its own internal connection pooling.
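A minimal PgBouncer configuration along these lines might look like the following sketch (the database name, addresses, and pool sizes are placeholders, not values from the question):

```ini
; pgbouncer.ini (sketch): many client connections share a small server pool
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction   ; release the server connection after each transaction
max_client_conn = 400     ; clients PgBouncer will accept
default_pool_size = 20    ; actual PostgreSQL backends per database/user pair
```

With this shape, the application can open hundreds of client connections to port 6432 while PostgreSQL itself only ever sees a small, bounded number of backends.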
Unless you're on a really big machine, I'd aim for fewer than 100 connections actively running queries. Usually quite a bit fewer.
Upvotes: 3
Reputation: 2483
Increasing max connections directly impacts the RAM consumed by PostgreSQL. Remember that PostgreSQL is a process-per-connection server: each connection spawns a new postgres backend process. The more connections, the more processes, and the more RAM consumed (and this is not for serving queries; it is just the overhead of having a backend ready to serve a query).
The solution is twofold:
In practice, since point 2 (above) isn't always possible, point 1 is almost always done regardless, and it helps a great deal in such scenarios.
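To make the per-backend overhead concrete (the per-process figure below is an illustrative assumption, not a measurement from these servers; actual overhead varies with platform and workload):

```python
# Rough RAM comparison for process-per-connection overhead.
# ~10 MB per idle backend is a hypothetical round number for illustration.

BACKEND_OVERHEAD_MB = 10  # assumed cost of one postgres backend process

def backend_ram_mb(connections, overhead_mb=BACKEND_OVERHEAD_MB):
    """Approximate RAM spent just keeping backends alive."""
    return connections * overhead_mb

direct = backend_ram_mb(400)  # every client holds its own backend
pooled = backend_ram_mb(20)   # 400 clients multiplexed over a 20-conn pool
print(direct, pooled)  # 4000 200
```

Even before any query runs, a pooled setup spends a small fraction of the RAM that 400 direct backends would, which is the "big way" a pooler helps here.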
Upvotes: 4
Reputation: 5190
Well, if you get the "too many clients" error, it obviously means that something is using up all the available connections. Based on the description above, it's hard to say what exactly is wrong. Run
ps aux | grep postgres
on the master & slave nodes to see the actual IP addresses of your clients.
Upvotes: 2