Pratik Bothra

Reputation: 2694

Increasing Max Connections in Postgresql - Consequences?

What is the harm in increasing max connections? Even with our current setup, which includes replication, the master server already complains of "too many clients".

Is it safe to increase max_connections to 400 in postgresql.conf, or is something wrong with my servers that is causing them to consume so many connections? If we get more customers, how will we scale?
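For reference, this is how we check how close we are to the limit (using pg_stat_activity; assumes a role that is allowed to read it):

    -- current connection count vs. the configured ceiling
    SELECT count(*) FROM pg_stat_activity;
    SHOW max_connections;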

Our setup

Upvotes: 3

Views: 3204

Answers (3)

Craig Ringer

Reputation: 324771

Increasing max_connections:

  • Increases per-connection memory use, most of which is wasted, leaving you less room for disk cache
  • Increases the total work_mem potentially in use, and thus memory pressure (a rough sizing sketch follows this list)
  • Makes some inter-process synchronization operations within the PostgreSQL server take longer, particularly aspects of lock management
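As a rough illustration (the numbers here are assumptions, not recommendations): each backend may allocate up to work_mem per sort or hash operation, so the worst-case memory footprint scales with max_connections.

    # hypothetical postgresql.conf sizing sketch
    max_connections = 400
    work_mem = 4MB
    # worst case for sort/hash areas alone:
    #   400 connections x 4MB = ~1.6GB,
    # and a single complex query can hold several work_mem
    # allocations at once, all of it competing with
    # shared_buffers and the OS disk cache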

Additionally, if those connections are actively running queries, not idle, the greater contention for system resources like CPU and memory can lead to less overall throughput.

In general you should try to have fewer connections and queue up work. A proxying connection pooler like PgBouncer is good for this. Unfortunately, PostgreSQL itself lacks built-in connection pooling, so a third-party pooler is required if the application framework lacks its own internal connection pooling.
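A minimal pgbouncer.ini sketch, assuming a database named mydb on the same host (the names, paths, and pool sizes are illustrative only):

    [databases]
    mydb = host=127.0.0.1 port=5432 dbname=mydb

    [pgbouncer]
    listen_addr = 0.0.0.0
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    ; transaction pooling lets many clients share a few server
    ; connections, but it breaks session-level features such as
    ; prepared statements and advisory locks
    pool_mode = transaction
    max_client_conn = 400
    default_pool_size = 20

Clients then connect to port 6432 instead of 5432, and PgBouncer hands each transaction to one of the pooled server connections.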

Unless you're on a really big machine, I'd aim for fewer than 100 connections actively running queries; usually quite a bit fewer.
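On PostgreSQL 9.2 and later you can check how many of your connections are actually doing work at any instant:

    -- connections actively running a query right now
    SELECT count(*) FROM pg_stat_activity WHERE state = 'active';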

Upvotes: 3

Robins Tharakan

Reputation: 2483

Increasing max_connections directly impacts the RAM consumed by PostgreSQL. Remember that PostgreSQL uses a process-per-connection model, which means that 'each' connection spawns a new 'postgres' backend process. The more connections, the more processes, and the more RAM consumed (and this is not for serving queries; it is just the overhead of keeping a backend ready to serve a query).
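You can observe this directly on a Linux server: each client connection appears as its own postgres process with its own resident memory (a sketch; ps options vary between platforms):

    $ ps -C postgres -o pid,rss,cmd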

The solution is two-fold:

  1. Use something like PgBouncer / PgPool to pool connections before they hit Postgres.
  2. Tighten the client applications and see that they correctly release connections when work is done (a query to spot leaked connections is sketched below).

In practice, since point 2 (above) isn't always possible, point 1 is almost always done regardless, and it helps in a big way in such scenarios.
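For point 2, a query along these lines (PostgreSQL 9.2+ column names) helps spot clients that take a connection and never release it:

    -- backends holding a transaction open without doing work
    SELECT pid, client_addr, state,
           now() - state_change AS idle_for
    FROM   pg_stat_activity
    WHERE  state = 'idle in transaction'
    ORDER  BY idle_for DESC;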

Upvotes: 4

Dzmitry Savinkou

Reputation: 5190

Well, if you get a "too many clients" error, it obviously means that something is grabbing all the available connections. Based on the description above, it's hard to say what exactly is wrong.

  • make sure your app is using a connection pool and does NOT create a new one for each DML operation
  • make sure that you have a sufficient amount of resources (you have not indicated server CPU/RAM) for all current connections. Consider this and this when creating the pool.
  • try running ps aux | grep postgres on the master & slave nodes to see the actual IP addresses of your clients (a server-side query is sketched below)
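The same information is also available from inside the database (PostgreSQL 9.2+):

    -- connections grouped by client address
    SELECT client_addr, count(*)
    FROM   pg_stat_activity
    GROUP  BY client_addr
    ORDER  BY count(*) DESC;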

Upvotes: 2
