Aqeeb Imtiaz Harun

Reputation: 167

Odoo 10 fails to back up the DB on PostgreSQL 9.5 with "SQL state: 22008, timestamp out of range" on account_bank_statement_line.

At our company we had a DB crash a few days ago due to a hardware failure. We recovered from that, but since then we have been getting the following error every time we try to back up our DB:

pg_dump: ERROR:  timestamp out of range
pg_dump: SQL command to dump the contents of table "account_bank_statement_line"

The error is in the "account_bank_statement_line" table, where there are 5 rows in which only the 'create_date' column holds a value, a date in the year 4855(!), while every other column is null, even the id (primary key). We can't delete or update those rows using pgAdmin 4 or the PostgreSQL terminal.

We're in a very risky position right now, with no backup covering the last few days of retail sales. Any hints at all would be highly appreciated.

Upvotes: 1

Views: 99

Answers (2)

Aqeeb Imtiaz Harun

Reputation: 167

I know what we did might not be the most technically advanced solution, but it fixed our issue. We consulted a few experts, and this is what we did:

  • We migrated all the data to a new table (account_bank_statement_line2); this copied over every row that still had valid data.

  • Then we dropped the corrupted "account_bank_statement_line" table (unlike deleting the individual rows, the DROP did work once the data had been moved).

  • Finally, we renamed the new table to "account_bank_statement_line". A rough sketch of the SQL is after this list.
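Roughly, the steps translate to SQL like the sketch below. It is only a sketch: the WHERE id IS NOT NULL filter is an assumption based on our corrupted rows having a NULL primary key, and CREATE TABLE ... AS does not carry over indexes, constraints, sequences or defaults, so those have to be recreated afterwards.

-- Copy the readable rows into a new table (sketch only).
-- Filtering on the primary key is an assumption; our corrupted rows had a NULL id.
CREATE TABLE account_bank_statement_line2 AS
SELECT * FROM account_bank_statement_line
WHERE id IS NOT NULL;

-- Drop the corrupted original and put the copy in its place.
DROP TABLE account_bank_statement_line;
ALTER TABLE account_bank_statement_line2 RENAME TO account_bank_statement_line;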

After that, the DB backup ran smoothly, just like before.

Hope this helps anyone who's in deep trouble like us. Cheers!

Upvotes: 1

Laurenz Albe

Reputation: 246473

First, if the data are important, hire a specialist.

Second, run your pg_dump with the option --exclude-table=account_bank_statement_line so that you at least have a backup of the rest of your database.
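For example (only a sketch; substitute your actual database name and output file):

pg_dump --exclude-table=account_bank_statement_line -Fc -f partial_backup.dump your_odoo_db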

The next thing you should do is to stop the database and take a cold backup of all the files. That way you have something to go back to if you mess up.
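Something along these lines, assuming a typical Debian/Ubuntu layout; your service name and data directory will likely differ:

sudo systemctl stop postgresql
sudo cp -a /var/lib/postgresql/9.5/main /backups/pgdata-cold-copy
sudo systemctl start postgresql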

The key to making progress is to find the ctids (physical addresses) of the problematic rows. You can then use those to delete the rows.

You can approach that by running queries like

SELECT create_date FROM account_bank_statement_line
WHERE ctid < '(42,0)';

and narrowing down the ctids at which you get an error. Once you have found a single row where the following query falls over:

SELECT * FROM account_bank_statement_line
WHERE ctid = '(42,14)';

you can delete the row by its ctid.
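Using the example ctid from above, that would be:

DELETE FROM account_bank_statement_line
WHERE ctid = '(42,14)';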

Once you are done, take a pg_dumpall of the database cluster, create a new one and restore the dump. It is dangerous to continue working with a cluster that has experienced corruption, because corruption can remain unseen and spread.
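In outline, and with example paths and port that you would adapt to your installation:

pg_dumpall -f cluster_dump.sql
initdb -D /var/lib/postgresql/9.5/main_new
pg_ctl -D /var/lib/postgresql/9.5/main_new -o "-p 5433" start
psql -p 5433 -d postgres -f cluster_dump.sql

The new cluster is started on a separate port here so it can run alongside the old one while you restore.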

Upvotes: 2
