Reputation: 5002
I have read that having a lot of indexes on a database can seriously hurt performance, but I can't find anything about it in the PostgreSQL documentation.
I have a very big table with something like 100 columns and a billion rows, and I often have to search on many different fields.
Will the performance of the PostgreSQL table drop if I add a lot of indexes (maybe 10 single-column indexes and 5 to 7 three-column indexes)?
EDIT: By performance drop I mean the performance of fetching rows (SELECT). The database is updated only once a month, so update and insert times are not an issue.
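For illustration, a hypothetical sketch of such an indexing scheme might look like this (the table and column names are invented, not my actual schema):

    -- hypothetical table and index names, for illustration only
    CREATE INDEX idx_events_customer_id ON events (customer_id);   -- one of ~10 single-column indexes
    CREATE INDEX idx_events_status      ON events (status);
    CREATE INDEX idx_events_country_city_date
        ON events (country, city, event_date);                     -- one of the three-column indexes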
Upvotes: 14
Views: 6790
Reputation: 31467
The indexes are maintained when the content of the table is modified (i.e. INSERT, UPDATE, DELETE).
PostgreSQL's query planner decides when to use an index and when a sequential scan is more efficient instead.
So having too many indexes will hurt write (modification) performance, not read performance.
It is important to note, though, that a newly created index might not be used until ANALYZE (or VACUUM ANALYZE) has been run on the table to gather the statistics the query planner needs.
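As a quick sketch (using a hypothetical events table), you can gather the statistics with ANALYZE and then check with EXPLAIN whether the planner actually picks the index:

    -- hypothetical table/column names, for illustration only
    ANALYZE events;                          -- collect statistics for the query planner
    EXPLAIN SELECT *
    FROM    events
    WHERE   customer_id = 42;                -- look for "Index Scan" vs "Seq Scan" in the plan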
Upvotes: 15
Reputation: 8746
I have some charts about that on my site: http://use-the-index-luke.com/sql/dml
An index is pure redundancy. It contains only data that is also stored in the table. During write operations, the database must keep those redundancies consistent. Specifically, it means that insert, delete and update not only affect the table but also the indexes that hold a copy of the affected data.
The chapter titles suggest the impact that indexes can have:
Insert — cannot take direct benefit from indexes
Delete — uses indexes for the where clause
Update — does not affect all indexes of the table
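A rough way to see that maintenance cost (on a hypothetical events table, not a benchmark) is to time a bulk insert with the secondary indexes in place, then drop them and repeat:

    -- hypothetical illustration of index maintenance overhead on writes
    \timing on                                         -- psql: report execution time
    INSERT INTO events (customer_id, status, country, city, event_date)
    SELECT g, 'new', 'US', 'NYC', now()
    FROM   generate_series(1, 1000000) AS g;           -- slower when many indexes must be updated
    -- drop the secondary indexes, rerun the insert, and compare the timings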
Upvotes: 3
Reputation: 125304
The indexes will have to be updated on each INSERT and UPDATE involving those columns.
Upvotes: 2