Reputation: 23500
I've set up a table accordingly:
CREATE TABLE raw (
    id SERIAL,
    regtime float NOT NULL,
    time float NOT NULL,
    source varchar(15),
    sourceport INTEGER,
    destination varchar(15),
    destport INTEGER,
    blocked boolean
); ... + index and grants
I've used this table successfully for a while now, but all of a sudden the following insert no longer works:
INSERT INTO raw(
    time, regtime, blocked, destport, sourceport, source, destination
) VALUES (
    1403184512.2283964, 1403184662.118, False, 2, 3, '192.168.0.1', '192.168.0.2'
);
The error is: ERROR: integer out of range
I'm not even sure where to begin debugging this. I'm not out of disk space, and the error message itself isn't very informative.
Upvotes: 65
Views: 182466
Reputation: 384474
You may also need to add some typecasts when the error comes from an overflowing operation.
I just want to illustrate that in some cases the issue is not just that you need a BIGINT column, but that some intermediate operation is overflowing.
For example, the following gives ERROR: integer out of range, even though we made the column BIGINT:
CREATE TABLE tmp(i BIGINT);
INSERT INTO tmp SELECT 2147483647 + 1;
The problem is that 2147483647 = 2**31 - 1 is the maximum integer that fits into INTEGER, so when we add 1 the addition itself overflows and we get the error.
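We can check how PostgreSQL types the literal with pg_typeof() (a quick diagnostic, not part of the fix): an undecorated literal of this size is INTEGER, so the addition runs in 32-bit arithmetic regardless of the destination column:
SELECT pg_typeof(2147483647);
-- integer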
The same issue happens if we just SELECT without any tables involved:
SELECT 2147483647 + 1;
To solve the issue, we could typecast either as:
SELECT 2147483647::BIGINT + 1;
or as:
SELECT 2147483647 + 1::BIGINT;
so we understand that as long as one of the operands is BIGINT, the other is implicitly cast and the result is computed without error.
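pg_typeof() confirms that the mixed expression is evaluated as BIGINT:
SELECT pg_typeof(2147483647::BIGINT + 1);
-- bigint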
It is also worth noting that
SELECT 2147483648 + 1;
does not give any error, because the 2147483648 literal doesn't fit into INTEGER, so PostgreSQL assumes it is BIGINT by default.
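That default is also easy to verify:
SELECT pg_typeof(2147483648);
-- bigint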
Another case where this might come up is when using generate_series to generate some large test data (this is what brought me here in the first place), e.g.:
SELECT i + 1 FROM generate_series(2147483647, 2147483647) AS s(i);
gives the error for similar reasons as above, because if the arguments of generate_series are INTEGER, then so are the returned values. One clean solution in this case is to typecast the arguments of generate_series to BIGINT, as in:
SELECT i + 1 FROM generate_series(2147483647::BIGINT, 2147483647::BIGINT) AS s(i);
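Casting in the select list also works in this particular case, since the overflow happens in the addition rather than inside generate_series itself:
SELECT i::BIGINT + 1 FROM generate_series(2147483647, 2147483647) AS s(i);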
Tested on PostgreSQL 16.6, Ubuntu 24.04.1.
Upvotes: 1
Reputation: 21385
SERIAL columns are stored as INTEGERs, giving them a maximum value of 2^31 - 1. So after ~2 billion inserts, your new id values will no longer fit.
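Here is a minimal sketch of that failure mode on a throwaway table, using pg_get_serial_sequence() to look up the backing sequence and setval() to fast-forward it (the exact error text depends on your PostgreSQL version):
CREATE TABLE demo (id SERIAL);
SELECT setval(pg_get_serial_sequence('demo', 'id'), 2147483646);
INSERT INTO demo DEFAULT VALUES; -- id = 2147483647, the last value that fits
INSERT INTO demo DEFAULT VALUES; -- fails once the counter passes 2^31 - 1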
If you expect this many inserts over the life of your table, create it with a BIGSERIAL (internally a BIGINT, with a maximum of 2^63 - 1).
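For reference, BIGSERIAL is only shorthand; per the PostgreSQL docs it expands to roughly the following (using the default tablename_colname_seq sequence name):
CREATE SEQUENCE raw_id_seq;
CREATE TABLE raw (
    id BIGINT NOT NULL DEFAULT nextval('raw_id_seq')
);
ALTER SEQUENCE raw_id_seq OWNED BY raw.id;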
If you discover later on that a SERIAL isn't big enough, you can increase the size of an existing field with:
ALTER TABLE raw ALTER COLUMN id TYPE BIGINT;
Note that it's BIGINT here, rather than BIGSERIAL (as serials aren't real types). And keep in mind that, if you actually have 2 billion records in your table, this might take a little while...
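One caveat worth adding: on PostgreSQL 10 and later, the sequence behind a SERIAL column is itself created AS integer, so after widening the column you may also want to widen the sequence (assuming the default sequence name raw_id_seq):
ALTER SEQUENCE raw_id_seq AS BIGINT;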
Upvotes: 100