Reputation: 1235
I have a CTE query returning 750 million records that need to be inserted into a target table. I am using a "select * into <> from" clause to parallelize the query part, but is there a way to parallelize the insert part as well? The PostgreSQL version is 11.6.
Would making the target table partitioned help parallelize the insert? If so, how do I run a CTAS for a partitioned table, let's say a hash-partitioned one?
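For reference, a minimal sketch of the current setup, using hypothetical table and column names (big_source, target, id, payload) and a 4-way hash-partitioned target purely as an example:

```sql
-- Hypothetical names for illustration only.
-- Current approach: the CTE query written out with SELECT INTO.
WITH cte AS (
    SELECT id, payload
    FROM   big_source
)
SELECT *
INTO   target
FROM   cte;

-- Example definition of the hash-partitioned target mentioned above
-- (PostgreSQL 11 syntax; 4 partitions chosen arbitrarily).
CREATE TABLE target_part (
    id      bigint,
    payload text
) PARTITION BY HASH (id);

CREATE TABLE target_part_0 PARTITION OF target_part FOR VALUES WITH (MODULUS 4, REMAINDER 0);
CREATE TABLE target_part_1 PARTITION OF target_part FOR VALUES WITH (MODULUS 4, REMAINDER 1);
CREATE TABLE target_part_2 PARTITION OF target_part FOR VALUES WITH (MODULUS 4, REMAINDER 2);
CREATE TABLE target_part_3 PARTITION OF target_part FOR VALUES WITH (MODULUS 4, REMAINDER 3);
```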
Upvotes: 2
Views: 5751
Reputation: 7045
I tried copying 23 million rows with a single "insert into/select from" statement in one shot, and it took 25m 3s.
When I split those records into 10 different tables of 2.3 million rows each and started 10 different "insert into/select from" queries in 10 different sessions (different DataGrip tabs), it took 11m 33s.
I found this answer, which claims something different from what I experienced:
https://dba.stackexchange.com/questions/301426/parallelize-select-insert-for-postgresql
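For concreteness, a hypothetical sketch of the split described above; the chunk and target table names are placeholders, and how the 23 million rows were divided into the 10 tables is not stated here:

```sql
-- Each statement is run in its own session (a separate DataGrip tab),
-- so the ten inserts execute concurrently.

-- Session 1
INSERT INTO target SELECT * FROM source_chunk_01;
-- Session 2
INSERT INTO target SELECT * FROM source_chunk_02;
-- ...
-- Session 10
INSERT INTO target SELECT * FROM source_chunk_10;
```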
Upvotes: 0
Reputation: 5
I totally agree that MSSQL and Postgres features are far behind Oracle's. My advice is to start using them in some real-world tasks and the issues will become obvious. My list is endless: inner SQL parallelism, partitioning, autonomous transactions, locking...
In MSSQL Server, readers have blocked writers (and vice versa) since its inception. Don't believe that their snapshot isolation level will help, as it works at the transaction level...
In short: be professionals and do your homework well :)
Upvotes: -3
Reputation: 5131
There is no way to parallelize INSERTs in PostgreSQL, except by opening several database connections and using them in parallel.
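As an illustration only, here is a sketch of that approach using three connections, each loading a disjoint slice of the source; the table, column, and range values are assumptions, and a suitable slicing key would have to exist in the real data:

```sql
-- Run each statement on its own connection; the slices must not overlap.
-- Connection 1
INSERT INTO target SELECT * FROM big_source WHERE id BETWEEN 1         AND 250000000;
-- Connection 2
INSERT INTO target SELECT * FROM big_source WHERE id BETWEEN 250000001 AND 500000000;
-- Connection 3
INSERT INTO target SELECT * FROM big_source WHERE id BETWEEN 500000001 AND 750000000;
```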
Upvotes: 2