Reputation: 64789
Is there a tool or method to analyze Postgres and determine which missing indexes should be created, and which unused indexes should be removed? I have a little experience doing this with the "profiler" tool for SQL Server, but I'm not aware of a similar tool included with Postgres.
Upvotes: 127
Views: 88087
Reputation: 27969
I like this to find missing indexes:
SELECT
    relname AS TableName,
    to_char(seq_scan, '999,999,999,999') AS TotalSeqScan,
    to_char(idx_scan, '999,999,999,999') AS TotalIndexScan,
    to_char(n_live_tup, '999,999,999,999') AS TableRows,
    pg_size_pretty(pg_relation_size(relname::regclass)) AS TableSize
FROM pg_stat_all_tables
WHERE schemaname = 'public'
  AND 50 * seq_scan > idx_scan  -- seq scans are more than 2% of index scans
  AND n_live_tup > 10000
  AND pg_relation_size(relname::regclass) > 5000000
ORDER BY relname ASC;
This checks whether a table gets noticeably more sequential scans than index scans. Small tables are ignored, since Postgres seems to prefer sequential scans for them anyway. The query above reveals tables that are likely missing an index.
The next step would be to detect missing combined indexes. I guess this is not easy, but doable. Maybe analyzing the slow queries ... I heard pg_stat_statements could help...
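As a sketch of that idea: assuming pg_stat_statements is loaded (it must be listed in shared_preload_libraries, which requires a server restart), the slowest queries by cumulative time can be pulled out like this. Note the timing columns are named total_exec_time/mean_exec_time in PostgreSQL 13+; older versions call them total_time/mean_time.

```sql
-- Enable the extension once per database (requires
-- shared_preload_libraries = 'pg_stat_statements' and a restart):
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Top 10 statements by cumulative execution time (PostgreSQL 13+ names):
SELECT round(total_exec_time::numeric, 1) AS total_ms,
       calls,
       round(mean_exec_time::numeric, 2) AS mean_ms,
       left(query, 80) AS query
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
```

Running EXPLAIN ANALYZE on the statements this surfaces is then a reasonable way to spot candidates for combined indexes.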
Upvotes: 212
Reputation: 2314
Index usage statistics can be found with the following queries in the psql console:

\c db_name
select * from pg_stat_user_indexes;
select * from pg_statio_user_indexes;
For more details, see https://www.postgresql.org/docs/current/monitoring-stats.html
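To narrow those views down to likely dead weight, a sketch like the following lists indexes that have never been scanned since the statistics were last reset (unique and primary-key indexes are excluded because they enforce constraints even when never scanned):

```sql
SELECT s.schemaname,
       s.relname AS table_name,
       s.indexrelname AS index_name,
       pg_size_pretty(pg_relation_size(s.indexrelid)) AS index_size
FROM pg_stat_user_indexes AS s
JOIN pg_index AS i ON i.indexrelid = s.indexrelid
WHERE s.idx_scan = 0
  AND NOT i.indisunique
ORDER BY pg_relation_size(s.indexrelid) DESC;
```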
Upvotes: 18
Reputation: 5949
CREATE EXTENSION pgstattuple;
CREATE TABLE test(t INT);
INSERT INTO test VALUES(generate_series(1, 100000));
CREATE INDEX test_idx ON test(t);
SELECT * FROM pgstatindex('test_idx');
version | 2
tree_level | 2
index_size | 105332736
root_block_no | 412
internal_pages | 40
leaf_pages | 12804
empty_pages | 0
deleted_pages | 13
avg_leaf_density | 9.84
leaf_fragmentation | 21.42
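For interpretation: an avg_leaf_density under 10% (against the B-tree default fillfactor of 90) together with noticeable leaf_fragmentation suggests a heavily bloated index, which rebuilding will shrink. A minimal sketch, assuming PostgreSQL 12+ for the CONCURRENTLY option:

```sql
-- Rebuild the bloated index without blocking writes (PostgreSQL 12+).
-- On older versions, drop CONCURRENTLY; REINDEX then takes an
-- exclusive lock on the table for the duration.
REINDEX INDEX CONCURRENTLY test_idx;
```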
Upvotes: 0
Reputation: 7713
You can use the query below to find index usage and index size.
The reference is taken from this blog.
SELECT
    pt.tablename AS TableName,
    t.indexname AS IndexName,
    to_char(pc.reltuples, '999,999,999,999') AS TotalRows,
    pg_size_pretty(pg_relation_size(quote_ident(pt.tablename)::text)) AS TableSize,
    pg_size_pretty(pg_relation_size(quote_ident(t.indexrelname)::text)) AS IndexSize,
    to_char(t.idx_scan, '999,999,999,999') AS TotalNumberOfScan,
    to_char(t.idx_tup_read, '999,999,999,999') AS TotalTupleRead,
    to_char(t.idx_tup_fetch, '999,999,999,999') AS TotalTupleFetched
FROM pg_tables AS pt
LEFT OUTER JOIN pg_class AS pc
    ON pt.tablename = pc.relname
LEFT OUTER JOIN (
    SELECT
        pc.relname AS TableName,
        pc2.relname AS IndexName,
        psai.idx_scan,
        psai.idx_tup_read,
        psai.idx_tup_fetch,
        psai.indexrelname
    FROM pg_index AS pi
    JOIN pg_class AS pc ON pc.oid = pi.indrelid
    JOIN pg_class AS pc2 ON pc2.oid = pi.indexrelid
    JOIN pg_stat_all_indexes AS psai ON pi.indexrelid = psai.indexrelid
) AS t
    ON pt.tablename = t.TableName
WHERE pt.schemaname = 'public'
ORDER BY 1;
Upvotes: 16
Reputation: 5314
Another interesting tool for analyzing PostgreSQL is PgHero. It is focused on tuning the database and offers numerous analyses and suggestions.
Upvotes: 19
Reputation: 5314
PoWA seems like an interesting tool for PostgreSQL 9.4+. It collects statistics, visualizes them, and suggests indexes, building on the pg_stat_statements extension.
PoWA is PostgreSQL Workload Analyzer that gathers performance stats and provides real-time charts and graphs to help monitor and tune your PostgreSQL servers. It is similar to Oracle AWR or SQL Server MDW.
Upvotes: 2
Reputation: 127297
Check the statistics: pg_stat_user_tables and pg_stat_user_indexes are the ones to start with.
See "The Statistics Collector".
Upvotes: 27
Reputation: 7705
On determining missing indexes: no, not directly. But there are plans to make this easier in a future release, such as pseudo-indexes and machine-readable EXPLAIN.
Currently, you'll need to EXPLAIN ANALYZE the poorly performing queries and then manually determine the best route. Log analyzers like pgFouine can help identify which queries to look at.
As for unused indexes, you can use something like the following to help identify them:
select * from pg_stat_all_indexes where schemaname <> 'pg_catalog';
The idx_scan, idx_tup_read, and idx_tup_fetch columns show how often each index is scanned and how many tuples it returns.
Upvotes: 22
Reputation: 18176
There are multiple links to scripts that will help you find unused indexes at the PostgreSQL wiki. The basic technique is to look at pg_stat_user_indexes for indexes where idx_scan, the count of how many times that index has been used to answer queries, is zero, or at least very low. If the application has changed and a formerly used index probably isn't used anymore, you sometimes have to run pg_stat_reset() to get all the statistics back to 0 and then collect new data; alternatively, you might save the current values for everything and compute a delta instead.
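The delta approach can be sketched like this, using a hypothetical snapshot table rather than resetting the server-wide counters:

```sql
-- Save the current counters instead of calling pg_stat_reset():
CREATE TABLE idx_scan_snapshot AS
SELECT now() AS taken_at, indexrelid, indexrelname, idx_scan
FROM pg_stat_user_indexes;

-- Later, after a representative period of application traffic,
-- compute the delta; indexes with 0 scans since the snapshot are
-- candidates for removal:
SELECT cur.indexrelname,
       cur.idx_scan - snap.idx_scan AS scans_since_snapshot
FROM pg_stat_user_indexes AS cur
JOIN idx_scan_snapshot AS snap USING (indexrelid)
ORDER BY scans_since_snapshot;
```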
There aren't any good tools available yet to suggest missing indexes. One approach is to log the queries you're running and analyze which ones are taking a long time using a query log analysis tool like pgFouine or pqa. See "Logging Difficult Queries" for more info.
The other approach is to look at pg_stat_user_tables for tables that have large numbers of sequential scans against them, where seq_tup_read is large. When an index is used, the idx_tup_fetch count is increased instead. That can clue you into when a table is not indexed well enough to answer queries against it.
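A minimal sketch of that check, ranking tables by rows returned via sequential scans:

```sql
-- Tables read mostly by sequential scans; seq_tup_read counts the
-- rows those scans returned, idx_tup_fetch the rows fetched via
-- index scans.
SELECT relname,
       seq_scan, seq_tup_read,
       idx_scan, idx_tup_fetch
FROM pg_stat_user_tables
WHERE seq_scan > 0
ORDER BY seq_tup_read DESC
LIMIT 20;
```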
Actually figuring out which columns you should then index on? That usually leads back to the query log analysis stuff again.
Upvotes: 5