Reputation: 31
MySQL has a performance_schema database that lets one capture SQL statement execution time data in a table (e.g. performance_schema.events_statements_history_long; see the MySQL documentation for details).
I was wondering if a similar set of tools existed in PostgreSQL? (DB admin is not being very helpful)
Upvotes: 1
Views: 1247
Reputation: 22893
@Mingyu's answer is accurate, but perhaps not quite what you're after, depending on your use case. You can of course log statement execution times, and there are various log analysers that produce detailed figures from those logs.
You can also log the execution plan (explain) for long-running queries using the auto-explain module.
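As a sketch of the auto_explain approach: these postgresql.conf settings load the module and log the plan of any query slower than a threshold (the 250ms value here is just an example, pick what suits your workload):

```
# postgresql.conf -- load auto_explain and log plans for slow queries
shared_preload_libraries = 'auto_explain'
auto_explain.log_min_duration = '250ms'   # log plans for queries over 250 ms
auto_explain.log_analyze = on             # include actual row counts/timings (adds overhead)
```

You can also try it out in a single session without a restart, if you're a superuser: `LOAD 'auto_explain';` followed by `SET auto_explain.log_min_duration = '250ms';`.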
However, for a quick overview of which queries are taking how much time and why, check out the pg_stat_statements module. It aggregates statistics per normalized query (constants removed): how many times each one was executed, how much time it took, how much disk I/O it required, and so on.
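For illustration, here is a minimal pg_stat_statements setup and query. Note the timing column is named total_exec_time on PostgreSQL 13+ but total_time on older versions, so adjust to your server:

```sql
-- Requires shared_preload_libraries = 'pg_stat_statements' in postgresql.conf
-- and a server restart before the extension can collect data.
CREATE EXTENSION pg_stat_statements;

-- Top 10 normalized queries by total execution time
SELECT query,
       calls,
       total_exec_time,   -- total_time on PostgreSQL < 13
       rows,
       shared_blks_read   -- rough indicator of disk I/O
FROM   pg_stat_statements
ORDER  BY total_exec_time DESC
LIMIT  10;
```

`SELECT pg_stat_statements_reset();` clears the counters if you want to measure a fresh workload.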
Upvotes: 1
Reputation: 33359
By default, Postgres does not log SQL statements. You can change this setting if you are a superuser.
log_statement (enum): Controls which SQL statements are logged. Valid values are none (off), ddl, mod, and all (all statements). ddl logs all data definition statements, such as CREATE, ALTER, and DROP statements. mod logs all ddl statements, plus data-modifying statements such as INSERT, UPDATE, DELETE, TRUNCATE, and COPY FROM. PREPARE, EXECUTE, and EXPLAIN ANALYZE statements are also logged if their contained command is of an appropriate type. For clients using extended query protocol, logging occurs when an Execute message is received, and values of the Bind parameters are included (with any embedded single-quote marks doubled).
http://www.postgresql.org/docs/current/static/runtime-config-logging.html#GUC-LOG-STATEMENT
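As a quick sketch of changing that setting (superuser required; log_statement can also be set per session):

```sql
-- Server-wide: persist the setting and reload the configuration
ALTER SYSTEM SET log_statement = 'all';
SELECT pg_reload_conf();

-- Or just for the current session (superuser only):
SET log_statement = 'mod';
```

Logged statements then appear in the server log; combine with log_min_duration_statement if you only want statements slower than some threshold.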
Upvotes: 0