Reputation: 35473
Suppose there's a trigger in the database with a function, like this:
-- Insert a new entry into another table
-- every time a NEW row is inserted
CREATE FUNCTION trgfunc_write_log() RETURNS TRIGGER AS $$
BEGIN
    INSERT INTO some_other_table (
        -- some columns
        meter_id,
        date_taken,
        temperature
    ) VALUES (
        NEW.meter_id,
        NEW.time_taken,
        NEW.temperature
    );
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
-- The trigger itself: AFTER INSERT
CREATE TRIGGER trg_temperature_readings
AFTER INSERT ON temperature_readings
FOR EACH ROW
EXECUTE FUNCTION trgfunc_write_log();
Typically, this trigger will live next to my SQLAlchemy models and be auto-created with something like this:
from sqlalchemy import DDL, event
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Reading(Base):
    ...

create_trigger = DDL(""" ...SQL... """)
event.listen(Reading.__table__, 'after_create', create_trigger)
What's your best practice for version-controlling such a trigger and its function with Alembic migrations?
Upvotes: 11
Views: 7692
Reputation: 1659
This is straightforward using Alembic Utils (pip install alembic_utils).
Create your function and trigger in your normal codebase, e.g.:
from alembic_utils.pg_function import PGFunction
from alembic_utils.pg_trigger import PGTrigger

trgfunc_write_log = PGFunction(
    schema="public",
    signature="trgfunc_write_log()",
    definition="""
    RETURNS TRIGGER AS $$
    BEGIN
        INSERT INTO some_other_table (
            -- some columns
            meter_id,
            date_taken,
            temperature
        ) VALUES (
            NEW.meter_id,
            NEW.time_taken,
            NEW.temperature
        );
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql
    """)
trg_temperature_readings = PGTrigger(
    schema="public",
    signature="trg_temperature_readings",
    on_entity="public.temperature_readings",
    is_constraint=False,
    definition="""AFTER INSERT ON temperature_readings
    FOR EACH ROW
    EXECUTE FUNCTION trgfunc_write_log()""",
)
The docs show how to modify your alembic.ini and env.py files. One gotcha: you have to register the entities, e.g. in env.py:
from alembic_utils.replaceable_entity import register_entities
from app.db.function import trg_temperature_readings, trgfunc_write_log
register_entities([trg_temperature_readings, trgfunc_write_log])
Then Alembic's autogenerated migrations should work as normal:

alembic revision --autogenerate -m 'add temperature log trigger'
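With the entities registered, the generated revision is typically just a pair of calls to alembic_utils' entity operations (an illustrative fragment only, assuming alembic_utils' op.create_entity/op.drop_entity operation names and that autogenerate inlines the entity definition into the revision file):

```python
# Illustrative body of the autogenerated revision (fragment; assumes
# alembic_utils is installed and the PGFunction/PGTrigger entity
# definitions are inlined above by autogenerate).
def upgrade():
    op.create_entity(public_trgfunc_write_log)
    op.create_entity(public_trg_temperature_readings)

def downgrade():
    op.drop_entity(public_trg_temperature_readings)
    op.drop_entity(public_trgfunc_write_log)
```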
Upvotes: 6
Reputation: 4750
I couldn't make alembic_utils work, probably because my functions and views were normalized by the DBMS once created, so autogenerate falsely detected them as "changed" every time. Some of my functions also wouldn't work with alembic_utils at all, because it tries to process the SQL somehow before executing it, and fails to do that.
So I made an extension for Alembic (Alembic Dumb DDL) that keeps a local state of each of my custom DDL scripts and compares the main scripts to those states (instead of to the DB state).
This way each DDL script is defined in a single place (which also gives proper diffs in the git history), and the autogenerate command now supports any kind of script, regardless of complexity or the DBMS used.
The revisions also look much cleaner, because they just reference the local states of the scripts instead of embedding the full source code of the views/functions/triggers:
def upgrade() -> None:
    # ### commands auto generated by Alembic - please adjust! ###
    op.run_ddl_script("2024_01_08_1045_order_details_060d60b5c278.sql")
    op.run_ddl_script("2024_01_08_1045_customer_details_060d60b5c278.sql")
    # ### end Alembic commands ###

def downgrade() -> None:
    # ### commands auto generated by Alembic - please adjust! ###
    op.execute("DROP VIEW IF EXISTS customer_details;")
    op.run_ddl_script("2024_01_08_1016_order_details_af80846764cd.sql")
Upvotes: 0
Reputation: 385
I recently had the same question come up in an application and found this article in the Alembic Cookbook.
It outlines a somewhat involved strategy: you create an object that encapsulates the name and the SQL used to create a view, stored procedure, or trigger, along with custom Alembic operations to upgrade and downgrade that schema object. In a revision it looks something like this:
from alembic import op
from my_module import ReplaceableObject

my_trigger = ReplaceableObject(
    "trigger_name",
    """...SQL..."""
)

def upgrade():
    op.create_trigger(my_trigger)

def downgrade():
    op.drop_trigger(my_trigger)
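For context, the ReplaceableObject itself is tiny; per the Cookbook recipe it is essentially just a name/SQL pair, while op.create_trigger/op.drop_trigger come from registering custom operation plugins (a minimal sketch of the object below, with the plugin machinery omitted):

```python
# Minimal ReplaceableObject, as in the Alembic Cookbook recipe:
# it simply pairs an object's name with the SQL that creates it.
class ReplaceableObject:
    def __init__(self, name, sqltext):
        self.name = name
        self.sqltext = sqltext

# Example: the trigger from the question as a ReplaceableObject
# (names taken from the question; the SQL body is abbreviated).
my_trigger = ReplaceableObject(
    "trg_temperature_readings",
    "AFTER INSERT ON temperature_readings "
    "FOR EACH ROW EXECUTE FUNCTION trgfunc_write_log()",
)
```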
My team is currently discussing whether this strategy is too complex for a simple trigger, compared to a view or stored procedure. You tend to update those schema objects more frequently, which makes much of the behavior outlined in the Cookbook abstraction more valuable than it is for a simple trigger.
Another proposed option was something like this:
from alembic import op

create_trigger = """...SQL..."""
drop_trigger = """...SQL..."""

def upgrade():
    op.execute(create_trigger)

def downgrade():
    op.execute(drop_trigger)
The two implementations look almost identical, which is the argument that the Cookbook abstraction is unnecessarily complex for a simple trigger.
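For a concrete sense of the second option, here is what the two SQL strings might contain for the trigger from the question (a sketch; the DROP statements and the IF EXISTS guards are assumptions, not from the original post):

```python
# Upgrade SQL: create the function and the trigger from the question.
create_trigger = """
CREATE FUNCTION trgfunc_write_log() RETURNS TRIGGER AS $$
BEGIN
    INSERT INTO some_other_table (meter_id, date_taken, temperature)
    VALUES (NEW.meter_id, NEW.time_taken, NEW.temperature);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_temperature_readings
AFTER INSERT ON temperature_readings
FOR EACH ROW
EXECUTE FUNCTION trgfunc_write_log();
"""

# Downgrade SQL: drop the trigger first, then the function it calls.
drop_trigger = """
DROP TRIGGER IF EXISTS trg_temperature_readings ON temperature_readings;
DROP FUNCTION IF EXISTS trgfunc_write_log();
"""

# In the revision, these strings are passed verbatim to op.execute()
# inside upgrade() and downgrade().
```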
Upvotes: 11