Reputation: 6741
I have hundreds of tables which are configured with update triggers.
I was wondering what will be a better approach:
1. Create a per-table trigger function, where the trigger code (whose logic is the same for all tables) is written specifically for each table.
2. Create one global function that handles all tables by building dynamic SQL statements, and configure it as the trigger function on every table.
I was wondering whether the per-table functions will run faster, since PL/pgSQL can pre-plan and reuse the statements inside them, while the global function has to build the SQL statement dynamically from the table name on every call.
To make it clearer: in the per-table function for TableA I can write:
insert into log_table values ('TableA', x, y, z);
while in the global one I would have to build the statement dynamically:
EXECUTE format('insert into log_table values (%L, x, y, z)', current_table);
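As a side note on the illustration above, there is a third option worth weighing: a single shared trigger function with no dynamic SQL at all. PL/pgSQL exposes the firing table's name as the `TG_TABLE_NAME` variable, and using it as an ordinary value keeps the INSERT a static statement whose plan can be cached. A minimal sketch, assuming a hypothetical `log_table (table_name, old_row, changed_at)` schema that is my own invention, not from the post:

```sql
-- Hypothetical schema: adjust the columns to the real log_table.
CREATE TABLE IF NOT EXISTS log_table (
    table_name text,
    old_row    jsonb,
    changed_at timestamptz DEFAULT now()
);

-- One shared trigger function; TG_TABLE_NAME is a plain value here,
-- so the INSERT is static SQL and PL/pgSQL can cache its plan.
CREATE OR REPLACE FUNCTION log_update() RETURNS trigger AS $$
BEGIN
    INSERT INTO log_table (table_name, old_row)
    VALUES (TG_TABLE_NAME, to_jsonb(OLD));
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- Attached to each table the same way:
CREATE TRIGGER table_a_log
    BEFORE UPDATE ON table_a
    FOR EACH ROW EXECUTE PROCEDURE log_update();
```

This gets the maintenance benefit of option 2 without paying the re-planning cost of EXECUTE on every call.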
Upvotes: 1
Views: 157
Reputation: 28571
It may be better to avoid dynamic SQL because of execution plan caching: PL/pgSQL caches plans for static statements, while a string passed to EXECUTE is re-planned on every call.
See the section "39.10.2. Plan Caching" here.
But only real testing will show the performance difference, if any.
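If testing favors the per-table route, the hundreds of functions need not be written by hand; a DO block can generate a function and trigger per table. A sketch under assumptions of my own: the tables live in the `public` schema, each one logs into the same `log_table`, and the names `log_<table>` / `trg_log_<table>` are illustrative, not from the post:

```sql
-- Generate one trigger function and one trigger per table in schema 'public'.
-- The table name is baked in as a literal (%L), so each generated function
-- contains only static SQL.
DO $$
DECLARE
    t text;
BEGIN
    FOR t IN
        SELECT tablename FROM pg_tables
        WHERE schemaname = 'public' AND tablename <> 'log_table'
    LOOP
        EXECUTE format($f$
            CREATE OR REPLACE FUNCTION %I() RETURNS trigger AS $body$
            BEGIN
                INSERT INTO log_table VALUES (%L, now());
                RETURN NEW;
            END;
            $body$ LANGUAGE plpgsql;
        $f$, 'log_' || t, t);

        EXECUTE format(
            'CREATE TRIGGER %I BEFORE UPDATE ON %I FOR EACH ROW EXECUTE PROCEDURE %I()',
            'trg_log_' || t, t, 'log_' || t);
    END LOOP;
END;
$$;
```

The generation itself uses dynamic SQL, but only once at deployment time; the triggers it produces do not.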
Upvotes: 1