ryan

Reputation: 85

Using PostgreSQL Rules/Triggers for debugging purposes

An application I am trying to support is currently running into unique constraint violations. I haven't been able to reproduce this problem in non-production environments. Is it reasonable, for debugging purposes, to create a rule (or trigger?) that simply copies every insert to a different table? The new table would then hopefully end up the same as the old one, just without the unique constraint.
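
Concretely, something like this is what I have in mind (the table and column names below are made-up placeholders for the real ones):

    -- placeholder for the real table that has the unique constraint
    CREATE TABLE orders (id integer PRIMARY KEY, customer_ref text UNIQUE);

    -- shadow table: LIKE copies the columns but not the unique constraint,
    -- so duplicates can land here
    CREATE TABLE orders_debug (LIKE orders);

    CREATE OR REPLACE FUNCTION copy_insert_to_debug() RETURNS trigger AS $$
    BEGIN
        -- copy the incoming row into the shadow table
        INSERT INTO orders_debug SELECT NEW.*;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER trg_copy_insert
    BEFORE INSERT ON orders
    FOR EACH ROW EXECUTE PROCEDURE copy_insert_to_debug();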

The application uses Spring to manage transactions, and I haven't been able to find any documentation on how rules interact with transactions. After the violation, whatever is written so far in the transaction is rolled back - will this affect the rule in any way? This is Postgres 8.3.

Upvotes: 0

Views: 405

Answers (3)

Erwin Brandstetter

Reputation: 656706

You can do almost anything you can imagine with rules and triggers. And then some more. Your exact intent remains somewhat unclear, though.

If the transaction is rolled back anyway, as you hint at the end, then everything will be undone, including all side-effects of any rules or triggers involved. Your plan would be futile.

There is a workaround for that, in case that is in fact what you want to achieve: use dblink to open a second connection to the same database and INSERT into a table over that connection. That write is not rolled back with the original transaction.
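
A minimal sketch of that workaround, reusing the hypothetical orders / orders_debug tables from the question (names, columns and the connection string are all placeholders; on 8.3, dblink is installed from contrib):

    CREATE OR REPLACE FUNCTION copy_insert_via_dblink() RETURNS trigger AS $$
    BEGIN
        -- dblink_exec runs on its own connection, so this INSERT commits
        -- independently of the calling transaction and survives a rollback
        PERFORM dblink_exec(
            'dbname=' || current_database(),
            'INSERT INTO orders_debug (id, customer_ref) VALUES ('
                || quote_literal(NEW.id::text) || ', '
                || quote_literal(NEW.customer_ref::text) || ')'
        );
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER trg_copy_insert_dblink
    BEFORE INSERT ON orders
    FOR EACH ROW EXECUTE PROCEDURE copy_insert_via_dblink();

A BEFORE ROW trigger fires before the unique index is touched, so the row reaches orders_debug even when the original INSERT then fails. The dblink connection needs its own authentication (e.g. a .pgpass entry) and adds overhead to every insert.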

However, if it's just for debugging purposes, the database log is a much simpler way to see which duplicates were rejected. Errors are logged by default; if yours are not, you can configure logging as needed. See the manual for your options.
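
If logging needs adjusting, something along these lines in postgresql.conf should do (the values are suggestions, not defaults I am vouching for):

    # make sure the statement that raised an error is written to the log
    log_min_error_statement = error
    # prefix log lines with timestamp, user and database for easier correlation
    log_line_prefix = '%t %u %d '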

Upvotes: 1

wildplasser

Reputation: 44240

As has been said, rules cannot be used for this purpose: they only serve to rewrite the query, and the rewritten query, just like the original one, is still part of the transaction.

Rules can be used to enforce constraints that are impossible to implement with regular constraints, such as a key being unique across several tables, or other multi-table conditions. (These have the advantage that the "canary" table name shows up in the logs and error messages.) But the OP already has too many constraints, it appears...
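
A hedged sketch of that canary trick, with made-up table names: both tables funnel their key into one shared table, and its unique constraint (plus its tell-tale name in the error message) does the work.

    CREATE TABLE table_a (key text, payload text);
    CREATE TABLE table_b (key text, payload text);

    -- the "canary": a single table holding every key from both tables
    CREATE TABLE canary_keys (key text PRIMARY KEY);

    CREATE RULE table_a_canary AS ON INSERT TO table_a
        DO ALSO INSERT INTO canary_keys (key) VALUES (NEW.key);

    CREATE RULE table_b_canary AS ON INSERT TO table_b
        DO ALSO INSERT INTO canary_keys (key) VALUES (NEW.key);

A duplicate key inserted into either table now fails with an error that names canary_keys, which is easy to spot in the logs.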

Tweaking the transaction isolation level also seems indicated (are there multiple sessions involved? Does the framework use a connection pool?).
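
For completeness, raising the isolation level is a one-liner per transaction (or it can be set as a session or pool default); this is just an illustrative snippet:

    BEGIN;
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    -- ... the statements that race against other sessions ...
    COMMIT;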

Upvotes: 0

Frank Heikens

Reputation: 127086

"After the violation, whatever is written so far in the transaction is rolled back - will this affect the rule in any way?"

That will roll back everything the rule did as well. You could create a trigger that uses dblink to get some work done outside your current transaction. Another option could be a savepoint, but then you would have to change your current code and transaction handling.
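
A minimal savepoint sketch (table and values are placeholders); the application would issue the ROLLBACK TO only when the INSERT fails:

    BEGIN;
    SAVEPOINT before_insert;
    INSERT INTO orders (id, customer_ref) VALUES (1, 'example');
    -- on a unique violation, undo just that statement and carry on:
    --   ROLLBACK TO SAVEPOINT before_insert;
    COMMIT;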

Unique violations are logged in the logfiles as well; check them to see what is going wrong. Version 9.0 has a change that also tells you what the offending values are:

Improve uniqueness-constraint violation error messages to report the values causing the failure (Itagaki Takahiro). For example, a uniqueness constraint violation might now report Key (x)=(2) already exists.

Upvotes: 1
