John Drouhard

Reputation: 1259

PostgreSQL multiple nullable columns in unique constraint

We have a legacy database schema that has some interesting design decisions. Until recently, we have only supported Oracle and SQL Server, but we are trying to add support for PostgreSQL, which has brought up an interesting problem. I have searched Stack Overflow and the rest of the internet and I don't believe this particular situation is a duplicate.

Oracle and SQL Server both behave the same when it comes to nullable columns in a unique constraint, which is to essentially ignore the columns that are NULL when performing the unique check.

Let's say I have the following table and constraint:

CREATE TABLE EXAMPLE
(
    ID TEXT NOT NULL PRIMARY KEY,
    FIELD1 TEXT NULL,
    FIELD2 TEXT NULL,
    FIELD3 TEXT NULL,
    FIELD4 TEXT NULL,
    FIELD5 TEXT NULL,
    ...
);

CREATE UNIQUE INDEX EXAMPLE_INDEX ON EXAMPLE
(
    FIELD1 ASC,
    FIELD2 ASC,
    FIELD3 ASC,
    FIELD4 ASC,
    FIELD5 ASC
);

On both Oracle and SQL Server, leaving any of the nullable columns NULL will result in only performing a uniqueness check on the non-null columns. So the following inserts can only be done once:

INSERT INTO EXAMPLE VALUES ('1','FIELD1_DATA', NULL, NULL, NULL, NULL );
INSERT INTO EXAMPLE VALUES ('2','FIELD1_DATA','FIELD2_DATA', NULL, NULL,'FIELD5_DATA');
-- On PostgreSQL, these succeed even though they should violate the unique constraint:
INSERT INTO EXAMPLE VALUES ('3','FIELD1_DATA', NULL, NULL, NULL, NULL );
INSERT INTO EXAMPLE VALUES ('4','FIELD1_DATA','FIELD2_DATA', NULL, NULL,'FIELD5_DATA');

However, because PostgreSQL (correctly) adheres to the SQL standard, those insertions (and any other combination of values, as long as at least one column is NULL) will not raise an error and will be inserted without complaint. Unfortunately, because of our legacy schema and the supporting code, we need PostgreSQL to behave the same as SQL Server and Oracle.

I am aware of the following Stack Overflow question and its answers: Create unique constraint with null columns. From my understanding, there are two strategies to solve this problem:

  1. Create partial indexes covering every combination of the nullable columns being NULL or NOT NULL (which results in exponential growth of the number of partial indexes)
  2. Use COALESCE with a sentinel value on the nullable columns in the index.

The problem with (1) is that the number of partial indexes we'd need grows exponentially with each additional nullable column in the constraint (2^N, if I am not mistaken). The problems with (2) are that a sentinel value reduces the set of legal values for the column, and that there are potential performance costs.
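To make the blow-up in (1) concrete, here is what the partial-index approach looks like for just two nullable columns (a sketch; the index names are made up). Each NULL/NOT NULL combination needs its own index:

```sql
-- Both columns present
CREATE UNIQUE INDEX ex_f1_f2 ON example (field1, field2)
    WHERE field1 IS NOT NULL AND field2 IS NOT NULL;
-- Only field1 present
CREATE UNIQUE INDEX ex_f1 ON example (field1)
    WHERE field1 IS NOT NULL AND field2 IS NULL;
-- Only field2 present
CREATE UNIQUE INDEX ex_f2 ON example (field2)
    WHERE field1 IS NULL AND field2 IS NOT NULL;
-- Neither present: allow at most one such row
CREATE UNIQUE INDEX ex_none ON example ((TRUE))
    WHERE field1 IS NULL AND field2 IS NULL;
```

With 5 nullable columns that is already 32 indexes, and with 10 it is 1024.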

My question: are these the only two solutions to this problem? If so, what are the tradeoffs between them for this particular use case? A good answer would discuss the performance of each solution, the maintainability, how PostgreSQL would utilize these indexes in simple SELECT statements, and any other "gotchas" or things to be aware of. Keep in mind that 5 nullable columns was only for an example; we have some tables in our schema with up to 10 (yes, I cry every time I see it, but it is what it is).

Upvotes: 22

Views: 13963

Answers (4)

Erwin Brandstetter

Reputation: 656221

Postgres 15 adds the clause NULLS NOT DISTINCT

The solution is very simple now:

ALTER TABLE example ADD CONSTRAINT foo
UNIQUE NULLS NOT DISTINCT (field1, field2, field3, field4, field5);
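With that constraint in place, the duplicate inserts from the question are rejected (a sketch; requires Postgres 15+):

```sql
INSERT INTO example VALUES ('1', 'FIELD1_DATA', NULL, NULL, NULL, NULL);  -- OK
INSERT INTO example VALUES ('3', 'FIELD1_DATA', NULL, NULL, NULL, NULL);
-- ERROR: duplicate key value violates unique constraint "foo"
```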

For Postgres 14 or older

You are striving for compatibility with your existing Oracle and SQL Server implementations.

Since Oracle treats the empty string as NULL in character columns, it can't tell the difference between '' and NULL anyway. So wouldn't it be prudent to use empty strings ('') instead of NULL values in Postgres as well - for this particular use case?

Define columns included in the unique constraint as NOT NULL DEFAULT '', problem solved:

CREATE TABLE example (
   example_id serial PRIMARY KEY
 , field1 text NOT NULL DEFAULT ''
 , field2 text NOT NULL DEFAULT ''
 , field3 text NOT NULL DEFAULT ''
 , field4 text NOT NULL DEFAULT ''
 , field5 text NOT NULL DEFAULT ''
 , CONSTRAINT foo UNIQUE (field1, field2, field3, field4, field5)
);

Notes

What you demonstrate in the question is a unique index:

CREATE UNIQUE INDEX ...

Not the unique constraint you keep talking about. There are subtle, important differences!

I changed that to an actual constraint like in the title of the question.

The keyword ASC is just noise, since that is the default sort order. I dropped it.

I use a serial PK column for simplicity. That's totally optional, but typically preferable to numbers stored as text.

Working with it

Just omit empty / null fields from the INSERT:

INSERT INTO example(field1) VALUES ('F1_DATA');
INSERT INTO example(field1, field2, field5) VALUES ('F1_DATA', 'F2_DATA', 'F5_DATA');

Repeating any of these inserts would violate the unique constraint.

Or, if you insist on not omitting target columns (which is a bit of an anti-pattern in persisted INSERT statements), or for bulk inserts where all columns need to be listed:

INSERT INTO example VALUES
  ('1', 'F1_DATA', DEFAULT, DEFAULT, DEFAULT, DEFAULT)
, ('2', 'F1_DATA','F2_DATA', DEFAULT, DEFAULT,'F5_DATA')
;

Or simply:

INSERT INTO example VALUES
  ('1', 'F1_DATA', '', '', '', '')
, ('2', 'F1_DATA','F2_DATA', '', '','F5_DATA')
;

Or you can write a trigger BEFORE INSERT OR UPDATE that converts NULL to ''.
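A minimal sketch of such a trigger (the function and trigger names are my own):

```sql
CREATE FUNCTION example_blank_nulls() RETURNS trigger AS $$
BEGIN
   -- Replace NULL with the empty string in all key columns
   NEW.field1 := coalesce(NEW.field1, '');
   NEW.field2 := coalesce(NEW.field2, '');
   NEW.field3 := coalesce(NEW.field3, '');
   NEW.field4 := coalesce(NEW.field4, '');
   NEW.field5 := coalesce(NEW.field5, '');
   RETURN NEW;
END
$$ LANGUAGE plpgsql;

CREATE TRIGGER example_blank_nulls
BEFORE INSERT OR UPDATE ON example
FOR EACH ROW EXECUTE PROCEDURE example_blank_nulls();
```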

Alternative solutions

If you need to use actual NULL values I would suggest the unique index with COALESCE like you mentioned as option (2) and @wildplasser provided as his last example.

The index on an array like @Rodolfo presented is simple, but considerably more expensive. Array handling isn't cheap in Postgres, and there is an array overhead similar to that of a row (24 bytes).

Arrays are limited to columns of the same data type. You could cast all columns to text if some are not, but it will typically further increase storage requirements. Or you could use a well-known row type for heterogeneous data types ...

A corner case: array (or row) types with all NULL elements are considered equal (!), so there can only be one row with all involved columns NULL. That may or may not be desired; if not, disallow the all-NULL combination.
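If you want to disallow the all-NULL combination, a CHECK constraint can enforce it (a sketch; num_nonnulls() requires Postgres 9.6+, and the constraint name is made up):

```sql
ALTER TABLE example ADD CONSTRAINT example_not_all_null
CHECK (num_nonnulls(field1, field2, field3, field4, field5) > 0);
```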

Upvotes: 15

Rodolfo Hansen

Reputation: 176

This actually worked well for me:

CREATE UNIQUE INDEX index_name ON table_name ((
   ARRAY[field1, field2, field3, field4]
));

I don't know how performance is affected, but it should be close to ideal (depending on how well arrays are optimized in Postgres).

Upvotes: 5

wildplasser

Reputation: 44220

Third method: use IS NOT DISTINCT FROM instead of = for comparing the key columns. (This could make use of the existing index on the candidate natural key.) Example (look at the last column):

SELECT *
    , EXISTS (SELECT * FROM example x
     WHERE x.FIELD1 IS NOT DISTINCT FROM e.FIELD1
     AND x.FIELD2 IS NOT DISTINCT FROM e.FIELD2
     AND x.FIELD3 IS NOT DISTINCT FROM e.FIELD3
     AND x.FIELD4 IS NOT DISTINCT FROM e.FIELD4
     AND x.FIELD5 IS NOT DISTINCT FROM e.FIELD5
     AND x.ID <> e.ID
    ) other_exists
FROM example e
    ;

Next step would be to put that into a trigger function, and put a trigger on it. (don't have the time now, maybe later)


And here is the trigger-function (which is not perfect yet, but appears to work):


CREATE FUNCTION example_check() RETURNS trigger AS $func$
BEGIN
    -- Check whether another row already has the same key columns
    IF EXISTS (
     SELECT 666 FROM example x
     WHERE x.FIELD1 IS NOT DISTINCT FROM NEW.FIELD1
     AND x.FIELD2 IS NOT DISTINCT FROM NEW.FIELD2
     AND x.FIELD3 IS NOT DISTINCT FROM NEW.FIELD3
     AND x.FIELD4 IS NOT DISTINCT FROM NEW.FIELD4
     AND x.FIELD5 IS NOT DISTINCT FROM NEW.FIELD5
     AND x.ID <> NEW.ID
            ) THEN
        RAISE EXCEPTION 'MultiLul BV';
    END IF;


    RETURN NEW;
END;
$func$ LANGUAGE plpgsql;

CREATE TRIGGER example_check BEFORE INSERT OR UPDATE ON example
  FOR EACH ROW EXECUTE PROCEDURE example_check();

UPDATE: a unique index can sometimes be wrapped into a constraint (see the Postgres 9.4 docs, final example). You do need to invent a sentinel value; I used the empty string '' here.


CREATE UNIQUE INDEX ex_12345 ON example
        (coalesce(FIELD1, '')
        , coalesce(FIELD2, '')
        , coalesce(FIELD3, '')
        , coalesce(FIELD4, '')
        , coalesce(FIELD5, '')
        )
        ;

ALTER TABLE example
        ADD CONSTRAINT con_ex_12345
        UNIQUE USING INDEX ex_12345;

But the "functional" index on coalesce() is not allowed in this construct. The unique index (OP's option 2) still works, though:


ERROR:  index "ex_12345" contains expressions
LINE 2:  ADD CONSTRAINT con_ex_12345
             ^
DETAIL:  Cannot create a primary key or unique constraint using such an index.
INSERT 0 1
INSERT 0 1
ERROR:  duplicate key value violates unique constraint "ex_12345"

Upvotes: 8

Vao Tsun

Reputation: 51406

You can create a rule that redirects inserts containing NULL values away from the original table and into partitions like partition_field1_nullable, partition_field2_nullable, etc. That way the unique index lives on the original table only, which then holds no NULLs: fully non-null rows go into the original table (with uniqueness enforced), while rows with NULLs (not unique, accordingly) land in the "nullable partitions". You can then apply the COALESCE or trigger method against the nullable partitions only, avoiding many scattered partial indexes and a trigger firing on every DML statement against the original table.
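A rough sketch of the idea, assuming a side table partition_field1_nullable with the same structure as example (the rule below redirects rows where field1 is NULL; you would need one such table and rule per nullable column, and this is untested):

```sql
-- Redirect inserts with a NULL field1 into the side table
CREATE RULE redirect_field1_null AS
ON INSERT TO example
WHERE NEW.field1 IS NULL
DO INSTEAD
INSERT INTO partition_field1_nullable VALUES (NEW.*);
```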

Upvotes: 0
