Johnny Metz

Reputation: 5965

Performance optimizations for write-heavy PostgreSQL table with indices

I have a write-heavy table in my PostgreSQL database. For every read, there are about 100k writes. The table has multiple (5-6) indices, which are killing performance because every index has to be updated on each write.

Is there any way to optimize performance of a write-heavy table with indices? I've come up with a few options:
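One way to decide which of those indices actually earn their keep is PostgreSQL's own statistics views. A minimal diagnostic sketch (no assumptions about your schema; `pg_stat_user_indexes` is a standard view):

```sql
-- List each index with how often it has actually been scanned
-- since statistics were last reset; indexes with an idx_scan
-- near zero are candidates for dropping on a write-heavy table.
SELECT indexrelname, idx_scan
FROM pg_stat_user_indexes
ORDER BY idx_scan ASC;
```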

Upvotes: 2

Views: 4044

Answers (2)

Laurenz Albe

Reputation: 246578

Your second and third point sound like good options. If you have few reads, you shouldn't need that many indexes. Consider partitioning: it can eliminate the need for one index, and it makes getting rid of old data easy.
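A rough sketch of what declarative partitioning (PostgreSQL 10+) could look like here; the table and column names are invented for illustration:

```sql
-- Range-partition the table by month so each partition stays small
-- and old data can be removed without touching the indexes on
-- the remaining partitions.
CREATE TABLE events (
    id         bigint GENERATED ALWAYS AS IDENTITY,
    created_at timestamptz NOT NULL,
    payload    jsonb
) PARTITION BY RANGE (created_at);

CREATE TABLE events_2024_01 PARTITION OF events
    FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');

-- Getting rid of old data becomes a cheap metadata operation
-- instead of a bulk DELETE that has to maintain every index:
DROP TABLE events_2024_01;
```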

Upvotes: 3

Moshe Katz

Reputation: 16883

If your writes are all (or mostly) INSERT statements (not UPDATEs), and you can describe your data as "time-series data" - i.e. data about events that happen over time - you would probably benefit from the TimescaleDB PostgreSQL extension.

Timescale uses a concept they call "hypertables", which transparently break up your data into smaller chunks based on time (and other factors if you set them up). Performance can be significantly better than plain PostgreSQL while still supporting all PostgreSQL features.
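Setting this up is a couple of statements; a minimal sketch, assuming the TimescaleDB extension is installed and using an invented `metrics` table:

```sql
CREATE EXTENSION IF NOT EXISTS timescaledb;

-- A hypothetical time-series table; names are illustrative only.
CREATE TABLE metrics (
    time   timestamptz NOT NULL,
    device int         NOT NULL,
    value  double precision
);

-- Convert it into a hypertable chunked by the time column;
-- inserts and queries then go through the parent table as usual.
SELECT create_hypertable('metrics', 'time');
```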


I am not associated with TimescaleDB; I'm just a satisfied user.

Upvotes: 1
