Henno

Reputation: 1498

Determining the problematic queries in PostgreSQL

Sometimes a PostgreSQL database I support is overloaded. I need to pinpoint the exact queries that cause the overload. I have generated a pgFouine report for the relevant time frame: http://bit.ly/f2aikx

In that report there is a row:
Query peak: 4 queries/s at 2011-03-15 10:41:06, 2011-03-15 10:44:18, 2011-03-15 10:48:25

It is not clear whether the mentioned timestamps are for the start or end of the peak.

Upvotes: 2

Views: 803

Answers (1)

anon

Reputation:

I'm not sure what the question is, but I'll take a stab at a few things:

The "Query peak" metric is referring to three separate seconds where you saw a peak throughput of 4 queries per second.

Here's how I would approach pinpointing your problematic queries:

  1. Define "overloaded" for this instance. That will help you determine what is actually causing the problem. Let's assume that overloaded is defined as "slow queries"
  2. Examine slow queries in the pgFouine output. It helpfully groups them in the "Queries that took the most time (N)" section. Looking in there you can also click on "Show Examples" to see a few queries that are giving you grief.
  3. Take a sample of a few of those queries and run EXPLAIN ANALYZE on them to get actual execution plans (see the sketch after this list).
  4. Look at the other plans running at the same time; these may be causing I/O contention.
  5. Analyze the plans yourself, or use http://explain.depesz.com/ to get an analysis of your execution plans. Watch for things like table spools.
  6. Tune queries or adjust PostgreSQL settings accordingly.
  7. Rinse and repeat
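
For step 3, here is a minimal sketch of running EXPLAIN ANALYZE in psql. The query itself is hypothetical; substitute one of the offending statements from the pgFouine "Show Examples" link.

    -- EXPLAIN ANALYZE executes the statement and reports the real plan,
    -- per-node timings, and row counts.
    EXPLAIN ANALYZE
    SELECT o.id, o.created_at
    FROM orders o
    WHERE o.customer_id = 42
    ORDER BY o.created_at DESC;

Keep in mind that EXPLAIN ANALYZE actually runs the statement, so wrap any data-modifying query in a transaction and roll it back. The plan text it prints is also what http://explain.depesz.com/ accepts as input.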

In the long run, I would change PostgreSQL's logging so that only queries running for more than 100 ms are logged for pgFouine to analyze. You can do that with the log_min_duration_statement setting in the postgresql.conf file.
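
A sketch of the relevant postgresql.conf line, assuming the 100 ms threshold suggested above:

    # Log only statements that run for at least 100 ms.
    # -1 disables duration-based logging, 0 logs every statement.
    log_min_duration_statement = 100

This setting can be applied with a configuration reload (for example pg_ctl reload or SELECT pg_reload_conf()); no restart is needed. It keeps the log, and therefore the pgFouine report, focused on queries that are actually worth tuning.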

Upvotes: 1
