Sariq Shaikh

Reputation: 1124

pg_dump behaviour with -B option

Recently we had to dump and restore our Postgres database while moving from one cloud provider to another. We used pg_dump for that; below is the command.

pg_dump -F t -h HOST -U USER -d DB -f C:\Users\Shariq\Downloads\Prod-DB\noblobswithschema.tar -Z0 -v -B

It took barely 10 minutes to generate the dump file, and we were able to import it using pg_restore without any issues. According to the documentation, the -B (--no-blobs) option excludes large objects from the dump.
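For context, here is how one can check whether a database contains server-side large objects (the objects that -B skips). A minimal sketch, assuming sufficient privileges to read the catalogs:

SELECT count(*) FROM pg_largeobject_metadata;                       -- number of large objects
SELECT pg_size_pretty(pg_total_relation_size('pg_largeobject'));   -- space they occupy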

Some more details:

  1. The database size is 128 GB (per pg_database_size), but the combined size of the tables is only around 2 GB, which matches the amount of data we know is in the database. We ran a plain VACUUM, which made no difference to the database size; we did not run VACUUM FULL. (A size query that would have shown where the space went is sketched after this list.)
  2. There is no bytea column anywhere in the database that could be called a large object. There are some text columns that I thought might also count as large objects, but I am not sure.
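To see where the "missing" ~126 GB lives, a catalog query along these lines (a generic sketch, not specific to any schema) lists the largest relations, including system catalogs such as pg_largeobject:

SELECT c.relname,
       pg_size_pretty(pg_total_relation_size(c.oid)) AS total_size
FROM pg_class c
WHERE c.relkind IN ('r', 't', 'm')   -- tables, TOAST tables, materialized views
ORDER BY pg_total_relation_size(c.oid) DESC
LIMIT 10;

In a database like the one described here, pg_largeobject would presumably top that list.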

Now the question: without the -B flag, pg_dump was taking forever and the dump size was growing to around 300 GB. I tried different options, such as -j with a directory-format backup, which also took forever. Since we don't have any columns of type bytea, we used -B on the assumption that text columns are not treated as large objects, and indeed our restored database has all the data, including the text columns. I am not sure why pg_dump was taking forever without the -B flag; I would appreciate it if someone could shed some light on this.
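One check we could have done: take a dump without -B and list its table of contents with pg_restore -l; large objects appear there as BLOB entries. The file name below is just a placeholder:

pg_restore -l C:\Users\Shariq\Downloads\Prod-DB\fullwithblobs.tar

Piping that through grep BLOB (or findstr BLOB on Windows) gives a rough count of the large objects in the archive.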

Upvotes: 1

Views: 198

Answers (1)

Laurenz Albe

Reputation: 247535

You are not using bytea after all; you are using large objects. That is a bad idea, as you are discovering. If you have enough large objects, you won't be able to dump or upgrade the database any more, see here and here. There is no known remedy for that. Get rid of large objects!
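As one possible way out (a sketch only; the table and column names are made up), if the large objects are referenced from an oid column, their contents can be copied into a bytea column with lo_get() and the large objects unlinked afterwards:

ALTER TABLE docs ADD COLUMN content bytea;

-- copy each large object's contents into the new bytea column
UPDATE docs SET content = lo_get(content_oid);

-- free the large objects, then drop the now-redundant oid column
SELECT lo_unlink(content_oid) FROM docs WHERE content_oid IS NOT NULL;
ALTER TABLE docs DROP COLUMN content_oid;

The contrib utility vacuumlo can then remove any large objects that are no longer referenced, and actually shrinking the files on disk may additionally require a VACUUM FULL of pg_largeobject.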

Upvotes: 0
