Konrad Garus

Reputation: 54005

How to prevent table from being dropped?

In PostgreSQL, how can I prevent anyone (including superusers) from dropping some specific table?

EDIT: Whoa, did we have some misunderstanding here. Let's say there is a big, shared QA database. Sometimes people run destructive things like hibernate-generated schema on it by mistake, and I'm looking for ways to prevent such mistakes.

Upvotes: 5

Views: 4766

Answers (5)

Alexi Theodore

Reputation: 1667

I don't know what the real original intention of this question is... but a lot of the answers here are hypothetical: trust your peers, use a least-privilege permissions model. To me, this misses the point altogether and instead answers with something everyone probably knew already, which isn't particularly helpful.

So let me attempt a question + answer of my own: How do you put in safety locks to prevent yourself or others from accidentally doing something you shouldn't? If you think that this "should never happen" then I think your imagination is too narrow. Or perhaps you are more perfect than me (and possibly a lot of other people).

For the rest of us, here is a solution that works for me. It is just a little lock that you put wherever you want it, using event triggers. Obviously this has to be implemented by a superuser of some sort, but the point is that it has no bearing on permissions: it is error-based, not permission-based.

Obviously, you shouldn't implement this sort of thing if production behavior depends on it. It is a safety catch, not an access control: don't use it to replace what should be solved with permissions, and don't use it to undermine your team. Use common sense; individual results may vary.


CREATE SCHEMA testst;

-- Event trigger function: inspect every object being dropped and
-- abort the whole statement if the protected schema is among them.
CREATE OR REPLACE FUNCTION manual_override_required() RETURNS event_trigger AS
$$
DECLARE
    obj record;
BEGIN
    FOR obj IN SELECT * FROM pg_event_trigger_dropped_objects()
    LOOP
        RAISE INFO 'object_oid: %, object_type: %', obj.objid, obj.object_type;
        RAISE INFO '%', obj.object_name;
        IF obj.object_type = 'schema' AND obj.object_name = 'testst' THEN
            RAISE EXCEPTION 'You have attempted to DROP something which has been admin-locked and requires manual override to proceed.';
        END IF;
    END LOOP;
END
$$
LANGUAGE plpgsql
;


DROP EVENT TRIGGER IF EXISTS lock_schema;
CREATE EVENT TRIGGER lock_schema
   ON sql_drop
EXECUTE FUNCTION manual_override_required();

DROP SCHEMA testst;

-- produces error: "ERROR: You have attempted to DROP something which has been admin-locked and requires manual override to proceed."


-- To override the admin-lock (you have the permission to do this, it just requires two turns of a key and positive confirmation):

ALTER EVENT TRIGGER lock_schema DISABLE;


DROP SCHEMA testst;
-- now it works!


An example of how I use this is in automation workflows. I have to switch between dev and prod environments a dozen times a day, and I can easily lose track of which is which (and have), despite the giant flags and banners I've put up to remind myself. Dropping certain schemas should be a rare and special event in Prod (hence the benefit of an active-confirmation approach), whereas in dev I rebuild them all the time. If I maintain the same permission structures in Dev as in Prod (which I do), then permissions alone can't solve this.
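For completeness, the same pattern can guard a single table rather than a whole schema, by matching on the object_identity column that pg_event_trigger_dropped_objects() also returns. A sketch, where the table name public.important_table is made up for illustration:

```sql
-- Sketch: guard one specific table instead of a schema.
-- 'public.important_table' is a hypothetical name; adjust to taste.
CREATE OR REPLACE FUNCTION table_drop_guard() RETURNS event_trigger AS
$$
DECLARE
    obj record;
BEGIN
    FOR obj IN SELECT * FROM pg_event_trigger_dropped_objects()
    LOOP
        IF obj.object_type = 'table'
           AND obj.object_identity = 'public.important_table' THEN
            RAISE EXCEPTION 'Table % is admin-locked; disable event trigger lock_table to drop it.',
                obj.object_identity;
        END IF;
    END LOOP;
END
$$
LANGUAGE plpgsql;

CREATE EVENT TRIGGER lock_table
   ON sql_drop
EXECUTE FUNCTION table_drop_guard();
```

Note that this also fires when the table would be dropped indirectly, e.g. by DROP SCHEMA ... CASCADE, since the function sees every object in the drop.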

Upvotes: 2

Craig Ringer

Reputation: 324375

A superuser is precisely that. If you don't want them to be able to drop things, don't make them a superuser.

There's almost never a need to let users run as superusers, and certainly not automated tools like schema migrations.

Your applications should connect as users with the minimum required user rights. They should not own the tables that they operate on, so they can't make schema changes to them or drop them.

When you want to make schema changes, run the application with a user that does have ownership of the tables of interest, but is not a superuser. The table owner can drop and modify tables, but only the tables it owns.
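A minimal sketch of that separation, with made-up role and table names:

```sql
-- Owner role: holds the tables, used only for schema migrations.
CREATE ROLE app_owner NOLOGIN;

-- Runtime role: the application connects as this; it can touch
-- data but cannot alter or drop the tables it uses.
CREATE ROLE app_user LOGIN PASSWORD 'change-me';

CREATE TABLE important_table (id int PRIMARY KEY, payload text);
ALTER TABLE important_table OWNER TO app_owner;

GRANT SELECT, INSERT, UPDATE, DELETE ON important_table TO app_user;

-- Connected as app_user:
--   DROP TABLE important_table;
--   ERROR:  must be owner of table important_table
```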

If you really, truly need to do something beyond the standard permissions model you will need to write a ProcessUtility_hook. See this related answer for a few details on that. Even then a superuser might be able to get around it by loading an extension that skips your hook, you'll just slow them down a bit.

Don't run an application as a superuser in production. Ever.

See the PostgreSQL documentation on permissions for more guidance on using the permissions model.

Upvotes: 2

Peter Eisentraut

Reputation: 36729

You can do that by writing some C code that attaches to ProcessUtility_hook. If you have never done that sort of thing, it won't be exactly trivial, but it's possible.

Another option might be looking into sepgsql, but I don't have any experience with that.

Upvotes: 3

LTKD

Reputation: 214

I don't think you can do that. You could perhaps have a higher tier of superusers who manage all the dropping first, or keep constant backups, so that a higher member of the hierarchy always has the possibility of restoring the table.

Upvotes: 1

ta.speot.is

Reputation: 27214

"...anyone (including superusers) from dropping some specific table?"

Trust your peers.

Upvotes: 3
