Stavros Korokithakis

Reputation: 4956

Does anyone know of a good way to back up postgres databases?

I have a script that produces daily rotated backups for MySQL, but I can't find anything similar for Postgres. I have also discovered that Postgres has an online backup capability, which should come in handy, since this is a production site.

Does anyone know of a program/script that will help me, or even a way to do it?

Thanks.

Upvotes: 11

Views: 15280

Answers (7)

Techie

Reputation: 45124

This is what I would do to back up my old database and restore it.

To back up your database

pg_dump --format=c olddb_name > db_dump_file.dump

To restore that backup

pg_restore -v -d newdb_name db_dump_file.dump

Read more on pg_dump and pg_restore
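
One note worth adding: pg_restore expects the target database to already exist (or the -C flag to create it). A minimal sketch of the full round trip, assuming the dump was made as above (newdb_name is just a placeholder):

# create the empty target database first (createdb ships with PostgreSQL)
createdb newdb_name
pg_restore -v -d newdb_name db_dump_file.dump

# or let pg_restore create the database itself, connecting to an existing one such as "postgres"
pg_restore -v -C -d postgres db_dump_file.dump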

Upvotes: 0

Vladimir Dyuzhev

Reputation: 18336

I've just stumbled upon this neat little utility:

http://code.google.com/p/pg-rman/

From the documentation it looks promising, but I have yet to try it.

Upvotes: 0

chotchki

Reputation: 4343

Since you specified databases (plural), pg_dumpall will be far more useful to you. It dumps all databases and users to a single SQL file, rather than just one database.
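
A rough sketch of how pg_dumpall could also cover the daily-rotation part of the question; the backup directory, user, and retention period here are assumptions, not part of the original answer:

#!/bin/sh
# daily cluster-wide dump with simple rotation (a sketch, not a drop-in script)
BACKUP_DIR=/var/backups/postgres
KEEP_DAYS=7
pg_dumpall -U postgres | gzip > "$BACKUP_DIR/cluster_$(date +%F).sql.gz"
# drop dumps older than the retention window
find "$BACKUP_DIR" -name 'cluster_*.sql.gz' -mtime +"$KEEP_DAYS" -delete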

Upvotes: 3

Vitaly Kushner

Reputation: 9455

For automated backups of both MySQL and Postgres, check out astrails-safe on github (or just "gem install astrails-safe --source=http://gems.github.com"). It uses mysqldump to back up MySQL and pg_dump to back up Postgres. It also knows how to back up plain files with tar, encrypt everything with GnuPG, and upload to S3 or to any Unix server over SFTP.

Upvotes: 2

user80168

Reputation:

Generally the way to do backups is to use pg_dump.

You shouldn't just copy the files from the PostgreSQL data directory "like in MySQL", because chances are you will not be able to use them: these files are architecture-, operating-system-, and compile-option-dependent.

Unless pg_dump proves insufficient, it is what you should use. If you ever end up in a situation where pg_dump cannot be used, ask yourself why it can't be used, and what you can do to use it again :)

When using pg_dump you can choose a plain SQL dump (-F p) or the custom format (-F c). The SQL dump is easier to modify and change, but the custom format is much more powerful and (since 8.4) faster to load, because you can restore it with many parallel workers instead of sequentially.
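
For illustration, a sketch of the two formats side by side; the database names are placeholders, and the parallel flag needs 8.4+ and a custom-format dump:

# plain SQL dump: human-readable, restored by feeding it to psql
pg_dump -F p mydb > mydb.sql
psql -d mydb_restored -f mydb.sql

# custom format: compressed, selectively restorable, loadable in parallel
pg_dump -F c mydb > mydb.dump
pg_restore -j 4 -d mydb_restored mydb.dump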

Upvotes: 1

Randell

Reputation: 6170

You can also dump your PostgreSQL database using phpPgAdmin or pgAdmin III.

Upvotes: 0

Adam Batkin

Reputation: 52994

One way is to use pg_dump to generate a flat sql dump, which you can gzip or whatever. This is certainly the easiest option, as the results can be piped back in to psql to re-load a database, and since it can also export as plain text, you can look through or edit the data prior to restore if necessary.
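
As a sketch of that first option (the database names and file paths are placeholders):

# dump to compressed plain SQL
pg_dump mydb | gzip > mydb.sql.gz
# pipe it back into psql to restore (the target database must already exist and be empty)
gunzip -c mydb.sql.gz | psql mydb_restored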

The next method is to temporarily shut down your database (or if your filesystem supports atomic snapshots, in theory that might work) and backup your PostgreSQL data directory.
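
A sketch of that second approach; the data directory path and the pg_ctl invocation vary by installation, so treat both as assumptions:

# only safe while the server is completely stopped
pg_ctl -D /var/lib/postgresql/data stop
tar czf /var/backups/pgdata-$(date +%F).tar.gz /var/lib/postgresql/data
pg_ctl -D /var/lib/postgresql/data start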

This page from the PostgreSQL site also explains how to do online backups and point-in-time recovery, which is definitely the most difficult to configure, but also the optimal method. The idea is that you perform a base backup (which you might do every day, couple of days or week) by running some special SQL (pg_start_backup and pg_stop_backup) and make a (filesystem-level) copy of your database directory. The database doesn't go offline during this time, and everything still works as normal. From then on, the database generates a Write Ahead Log (WAL) of any changes, which can then be pushed (automatically, by the database) to wherever you want. To restore, you take the base backup, load it into another database instance, then just replay all the WAL files. This way you can also do point-in-time recovery by not replaying all of the logs.
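
To make the moving parts a bit more concrete, here is a hedged sketch of the pieces involved (pre-9.0 style, to match this answer's era; the archive path and the recovery timestamp are assumptions):

# postgresql.conf -- turn on WAL archiving (requires a server restart)
archive_mode = on
archive_command = 'cp %p /mnt/backup/wal/%f'

-- base backup, run as a superuser while the server stays online:
SELECT pg_start_backup('nightly_base_backup');
-- ... take a filesystem-level copy of the data directory here ...
SELECT pg_stop_backup();

# recovery.conf on the restored copy -- replay archived WAL, optionally up to a point in time
restore_command = 'cp /mnt/backup/wal/%f %p'
recovery_target_time = '2009-06-01 12:00:00'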

Upvotes: 20
