Reputation: 18379
I recently migrated my Postgres database from Windows to CentOS 6.7. On Windows the database never used much CPU, but on Linux I see it using a constant ~30% CPU (as reported by top) on a 4-core machine.
Does anyone know if this is normal, or why it would be doing this? The application seems to run fine, as fast as or faster than on Windows.
Note: it is a big installation, 100 GB+ of data and 1000+ databases.
I tried using pgAdmin to monitor the server status, but the server status page hangs and fails to run with the error "the log_filename parameter must be equal".
Upvotes: 9
Views: 3374
Reputation: 3368
With 1000 databases, I expect the autovacuum workers and the stats collector to spend a lot of time checking what needs maintenance.
I suggest you do two things (example settings below):
- raise the autovacuum_naptime parameter to reduce the frequency of checks
- put stats_temp_directory on a ramdisk
You probably also set a high max_connections limit to allow your clients to use that high number of databases, and this is another probable source of CPU load, due to the high number of 'slots' to be checked every time a backend has to synchronize with the others.
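As a rough sketch, the relevant postgresql.conf entries could look like the following; the values and the tmpfs mount point are assumptions for illustration, so tune them to your workload:

# Hypothetical values for illustration; adjust to your workload.
autovacuum_naptime = '5min'                 # default is 1min; the launcher wakes far less often
stats_temp_directory = '/mnt/pg_stats_tmp'  # assumes a tmpfs mounted at this path
max_connections = 100                       # lower this if clients go through a connection pooler

After editing postgresql.conf, reload the server for the changes to take effect (max_connections requires a restart).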
Upvotes: 6
Reputation: 57766
There could be multiple reasons for the increased server load. First, check which queries are actually running:
SELECT pid, datname, usename, query FROM pg_stat_activity;
Once you know what queries are running you can investigate further (EXPLAIN/EXPLAIN ANALYZE; check locks, etc.)
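For example, a suspicious query spotted in pg_stat_activity could be examined like this (a minimal sketch; the table and column names are hypothetical):

-- "orders" and "customer_id" are hypothetical names, for illustration only
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM orders WHERE customer_id = 42;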
You may have lock contention issues, probably due to a very high max_connections. Consider lowering max_connections and using a connection pooler if this is the case, though that can increase turnaround time for client connections.
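To check for lock contention, one quick starting point is to list locks that have not yet been granted (a minimal sketch; join against pg_stat_activity on pid to see the waiting query text):

-- list ungranted locks; each row is a backend waiting on a lock
SELECT locktype, relation::regclass AS relation, pid, mode, granted
FROM pg_locks
WHERE NOT granted;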
It might be that the Windows system was blocking connections and not letting them use the machine, and now Linux is allowing its connections to use the CPU and perform faster. :P
Upvotes: 0