Reputation: 2983
All,
I am running in a development environment and have dozens of concurrent inserts going into a table whose primary key prevents duplicate records from being inserted. That is exactly what it is intended to do. My problem is that I was getting HUGE log files, because each duplicate record was being reported to the log file as an ERROR or WARNING; either way, it was being logged. I turned that off in the config file, and now everything is being logged to pgstartup.log instead. That file sometimes grows to over 20 GB. How can I prevent this file from growing so large too?
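In case it matters, the change I made in postgresql.conf was roughly this (I'm going from memory, so treat the exact parameter as a guess):

# postgresql.conf
# With the collector turned off, the server's stderr (including every
# duplicate-key ERROR) ends up wherever the startup script redirects it,
# which on my machine is pgstartup.log.
logging_collector = off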
Thanks, Adam
Upvotes: 0
Views: 1554
Reputation: 4905
Postgres doesn't feature MERGE, INSERT IGNORE, or UPSERT :(
Instead of a direct insert, though, you can emulate MERGE / INSERT IGNORE with a writable CTE to prevent duplicate inserts:
WITH upsert AS (
    UPDATE "your table" yt
    SET    ....                        -- the new column values
    WHERE  yt.key = <KEY VALUE>
    RETURNING yt.*
)
INSERT INTO "your table" (key, ....)
SELECT <KEY VALUE>, ....               -- same values; inserted only if the UPDATE matched nothing
WHERE NOT EXISTS (SELECT 1 FROM upsert);
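For instance, with a made-up counters(key, value) table (the table and column names are purely illustrative, not from the question), a single-row upsert looks like:

-- hypothetical table: counters(key text PRIMARY KEY, value integer)
WITH upsert AS (
    UPDATE counters SET value = 42 WHERE key = 'foo'
    RETURNING *
)
INSERT INTO counters (key, value)
SELECT 'foo', 42
WHERE NOT EXISTS (SELECT 1 FROM upsert);

Be aware that this pattern is not fully safe under heavy concurrency: two sessions can both find no existing row and both attempt the INSERT, so one of them will still hit the unique violation and needs to retry.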
Upvotes: 2
Reputation: 156188
You can probably use a log rotation tool, e.g. logrotate(8), to regularly truncate or compress the log file when it reaches a certain size or age.
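A minimal logrotate stanza might look like the sketch below. The path assumes a RHEL-style install where pgstartup.log sits under /var/lib/pgsql; adjust it to wherever your file actually lives.

/var/lib/pgsql/pgstartup.log {
    size 500M       # rotate once the file reaches 500 MB
    rotate 4        # keep four rotated copies
    compress        # gzip old copies
    copytruncate    # truncate in place, since the postmaster keeps the file open
    missingok
    notifempty
}

copytruncate matters here: the server writes to an open file descriptor, so simply moving the file aside would just leave it writing to the renamed file until the next restart.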
Upvotes: 1