Henrik N

Reputation: 16294

Make a column NOT NULL in a large table without locking issues?

I want to change a column to NOT NULL:

ALTER TABLE "foos" ALTER "bar_id" SET NOT NULL

The "foos" table has almost 1 000 000 records. It does fairly low volumes of writes, but quite constantly. There are a lot of reads.

In my experience, changing a column in a big table to NOT NULL like this can cause downtime in the app, presumably because it leads to (b)locks.

I've yet to find a good explanation corroborating this, though.

And if it is true, what can I do to avoid it?

EDIT: The docs (via this comment) say:

Adding a column with a DEFAULT clause or changing the type of an existing column will require the entire table and its indexes to be rewritten.

I'm not sure if changing NULL counts as "changing the type of an existing column", but I believe I did have an index on the column the last time I saw this issue.

Perhaps removing the index, making the column NOT NULL, and then adding the index back would improve things?
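One mitigation worth knowing about (not from the original question, so treat it as a sketch): even when the `ALTER` itself is fast, it needs an ACCESS EXCLUSIVE lock, and if it queues behind a long-running read, every later query on the table queues behind *it*. Setting `lock_timeout` first makes the attempt fail fast instead of causing that pile-up:

```sql
-- Sketch: bound how long the ALTER may wait for its ACCESS EXCLUSIVE lock.
-- If the lock isn't acquired within 2 seconds, the statement errors out
-- instead of blocking all subsequent queries on "foos"; just retry later.
SET lock_timeout = '2s';
ALTER TABLE "foos" ALTER "bar_id" SET NOT NULL;
RESET lock_timeout;
```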

Upvotes: 6

Views: 5792

Answers (1)

user330315

I think you can do that using a check constraint rather than SET NOT NULL.

ALTER TABLE foos
    ADD CONSTRAINT id_not_null CHECK (bar_id IS NOT NULL) NOT VALID;

This still requires an ACCESS EXCLUSIVE lock on the table, but it is very quick because Postgres doesn't validate the constraint (so it doesn't have to scan the entire table). It already ensures that new or updated rows cannot put a NULL value into that column.
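A quick way to see the effect (a sketch, assuming the constraint above has been added): existing rows are skipped by NOT VALID, but new writes are checked immediately:

```sql
-- Existing rows with a NULL bar_id are left alone for now,
-- but any new or updated row is checked right away:
INSERT INTO foos (bar_id) VALUES (NULL);
-- ERROR:  new row for relation "foos" violates check constraint "id_not_null"
```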

Then (after committing the alter table!) you can do:

alter table foos validate constraint id_not_null;

This takes only a SHARE UPDATE EXCLUSIVE lock, so it does not block reads or writes on the table while it scans the existing rows.
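Putting the steps together (a sketch; the `id_not_null` name is from the answer above, and the last two statements assume PostgreSQL 12 or newer, where SET NOT NULL can skip the table scan when a validated CHECK constraint already proves the column is non-null):

```sql
-- Step 1: instant; takes a brief ACCESS EXCLUSIVE lock but scans nothing.
ALTER TABLE foos
    ADD CONSTRAINT id_not_null CHECK (bar_id IS NOT NULL) NOT VALID;

-- Step 2 (in a separate transaction): full scan, but only a
-- SHARE UPDATE EXCLUSIVE lock, so reads and writes continue.
ALTER TABLE foos VALIDATE CONSTRAINT id_not_null;

-- Optional, PostgreSQL 12+: the validated constraint lets SET NOT NULL
-- succeed without rescanning the table; the CHECK constraint is then
-- redundant and can be dropped.
ALTER TABLE foos ALTER COLUMN bar_id SET NOT NULL;
ALTER TABLE foos DROP CONSTRAINT id_not_null;
```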

Upvotes: 6
