JaktensTid

Reputation: 330

Best way to organize fast reads of an entire table for 10 or more connections in Postgres

I have a table with about 200k-400k records in it, 6 columns. Its size, when dumped to a Python JSON array, does not exceed 16 MB. There are 20 connections that, at certain times, are going to read the ENTIRE table. How can I optimize this process? Is there a way to 'prepare' this data (maybe cache it in RAM directly)? Can you propose the best solution for such tables?

Upvotes: 0

Views: 255

Answers (1)

Laurenz Albe

Reputation: 246718

PostgreSQL will cache the table in RAM the first time it is read.

An exception to this is if the table is bigger than a quarter of shared_buffers: then PostgreSQL will use a small ring buffer for the sequential scan to avoid blowing out the whole cache. But that is unlikely with a table as small as yours.
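If you want to check whether that applies in your case, you can compare the table's on-disk size with shared_buffers (table_name is a placeholder for your table):

SHOW shared_buffers;

-- total size of the table including indexes and TOAST data
SELECT pg_size_pretty(pg_total_relation_size('table_name'));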

If you want to speed up the first scan of the table too, you can load it into RAM explicitly:

CREATE EXTENSION pg_prewarm;

SELECT pg_prewarm('table_name');
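If you want to verify that the table really ends up cached, the pg_buffercache extension can count the shared_buffers pages that currently hold its data. A sketch, again with table_name as a placeholder:

CREATE EXTENSION pg_buffercache;

-- number of 8 kB buffer pages holding this table's data
SELECT count(*) AS cached_pages
FROM pg_buffercache b
JOIN pg_class c ON b.relfilenode = pg_relation_filenode(c.oid)
WHERE c.relname = 'table_name'
  AND b.reldatabase = (SELECT oid FROM pg_database
                       WHERE datname = current_database());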

Upvotes: 1
