Alex R

Reputation: 11891

Can PostgreSQL be configured so that occasional mass updates can run super-fast?

I have set up a PostgreSQL test environment that is required to contain the same amount of data (number of rows) as the production database and to be configured mostly production-like, so that normal transactions show the same performance as in production.

However, being a test environment, it occasionally has to have some unique, experimental, temporary, or ad-hoc changes applied: for instance, adding or removing indexes before a performance test, recalculating the values of a column to replicate test conditions, dumping and reimporting whole tables, etc.

Is there a way to temporarily suspend data integrity guarantees in order to run such mass updates as fast as possible?

For example, in MySQL you can set over-sized write buffers, disable transaction logging, and suspend disk flushes on transaction commit. Is there something similar in pgsql?

The deployment environment is AWS EC2.

Upvotes: 1

Views: 621

Answers (2)

user330315

Reputation:

The manual has a chapter ("Populating a Database") dedicated to the initial loading of a database.

There are some safe options you can change to make things faster:
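For instance, the kind of safe knobs that chapter covers look like this (the parameter names are genuine PostgreSQL settings; the example values are illustrative only):

    # postgresql.conf - safe: only affects speed, not durability
    maintenance_work_mem = 2GB    # faster CREATE INDEX and ADD FOREIGN KEY
    max_wal_size = 16GB           # fewer checkpoints during the bulk load
    checkpoint_timeout = 30min    # likewise spreads checkpoints out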

Then there are some rather unsafe options to make things faster. Unsafe meaning: you can lose all your data if the server crashes - so use at your own risk!
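The settings whose failure mode really is "lose all your data" are fsync and full_page_writes (naming this pair is an assumption here, but both are genuine settings from that chapter, and a config reload is enough to apply them):

    # postgresql.conf - UNSAFE: a crash while these are off can corrupt the cluster
    fsync = off                   # stop forcing WAL writes to disk
    full_page_writes = off        # stop writing full pages after checkpoints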

Again: by changing the two settings above, you risk losing all your data.

To disable WAL, you could also set all tables to unlogged.
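A minimal sketch (my_big_table is a placeholder name):

    -- changes to an unlogged table bypass WAL entirely
    ALTER TABLE my_big_table SET UNLOGGED;
    -- caveat: unlogged tables are truncated after a crash and are not replicated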

Upvotes: 7

Laurenz Albe

Reputation: 247215

You can disable WAL logging with ALTER TABLE ... SET UNLOGGED, but be aware that the reverse operation will dump the whole table to WAL.
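For example, wrapped around a mass update (my_big_table is again a placeholder):

    ALTER TABLE my_big_table SET UNLOGGED;   -- subsequent writes skip WAL
    -- ... run the mass UPDATE / reload here ...
    ALTER TABLE my_big_table SET LOGGED;     -- rewrites the entire table into WAL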

If that is not feasible, you can boost performance by setting max_wal_size huge, so that you get fewer checkpoints.
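max_wal_size can be raised without a restart; for example (the value is illustrative):

    ALTER SYSTEM SET max_wal_size = '100GB';
    SELECT pg_reload_conf();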

WAL flushing is disabled by setting fsync = off.
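fsync can likewise be toggled with a mere reload; a sketch for the duration of the mass update:

    ALTER SYSTEM SET fsync = off;   -- UNSAFE: see the warning below
    SELECT pg_reload_conf();
    -- ... run the mass update ...
    ALTER SYSTEM SET fsync = on;
    SELECT pg_reload_conf();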

Be aware that the first and third measure will wreck your database in the event of a crash.

Upvotes: 2
