Reputation: 82
I have a database with several tables, let's say table_a and table_b.
I want to back up my database using pg_dump, but (for various reasons) I want each table to have its own dump file. I could do something like:
$ pg_dump -t table_a -f export_a.sql
$ pg_dump -t table_b -f export_b.sql
but consistency wouldn't be assured: modifications could happen between the two dumps, so the two dumps would represent different states of the database, which is not consistent.
My question is: is there any way to ensure that these two dumps are consistent (for example, by taking them in the same transaction), or to tell pg_dump to output each table of a single dump to its own file?
I have tried pg_dump -Fd -Z0, but the resulting .dat files don't seem readable enough to separate each table's data.
Upvotes: 0
Views: 2419
Reputation: 425
Add the --verbose option to pg_dump, and redirect both stdout and stderr to a file.
As tables finish dumping, you'll see something like this in the output:
pg_dump: finished item 7463 TABLE DATA mc_request
A bit of bash and awk lets you rename the 9999.dat files to table-named files. One problem, though, arises when different schemas contain tables with the same name.
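As a minimal sketch, assuming the directory-format dump lives in ./dump and the verbose output was redirected to dump.log (the exact log wording may vary between pg_dump versions):

```shell
# Extract "item-id table-name" pairs from log lines like:
#   pg_dump: finished item 7463 TABLE DATA mc_request
# and rename each numbered .dat file after its table.
awk '/finished item [0-9]+ TABLE DATA/ { print $4, $7 }' dump.log |
while read -r id table; do
    mv "dump/$id.dat" "dump/$table.dat"
done
```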
Upvotes: 0
Reputation: 246698
There is no direct way to do that. I have two ideas:

1. Use a single directory format pg_dump (-F d) of all the tables. True, there will be a single toc.dat with all the table metadata, but each table will be dumped to its own file.

2. Use a single pg_dump in directory or custom format to get a consistent dump of all the tables, and then create individual files from it with

   pg_restore -t table1 -f table1.sql all_tables.dmp
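A sketch of the second idea, where the database name mydb and the table names are placeholders for your own:

```shell
# One consistent custom-format dump of the whole database.
pg_dump -F c -f all_tables.dmp mydb

# Split the consistent dump into one SQL file per table.
for t in table_a table_b; do
    pg_restore -t "$t" -f "export_$t.sql" all_tables.dmp
done
```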
Upvotes: 4