Reputation: 2458
I want to periodically archive database dumps to my own AWS account in S3 (and eventually in Glacier). Is there a way to dump a PostgreSQL database to the dyno's filesystem from within the dyno, from where I can then send the file to AWS? psql and pg_dump don't seem to be available on the dyno, and I don't know how to run pgbackups from within a dyno.
Upvotes: 4
Views: 668
Reputation: 42023
Create a separate Heroku app to do the backups that uses the pgbackups-archive gem, then set up Heroku Scheduler to run pgbackups-archive periodically against your DATABASE_URL (you will need to import that environment variable from your other app), as described here.
Disclaimer: this nominally requires you to use some Ruby, but it works in conjunction with any Heroku Cedar app that uses Heroku Postgres (including Django apps).
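For reference, the setup amounts to roughly the following. Treat this as a sketch: the app names are placeholders, and the rake task name (pgbackups:archive) and the PGBACKUPS_* config vars are assumptions based on my reading of the gem's README, so check the README for the exact names.
# give the backup app the main app's DATABASE_URL plus AWS credentials for the target bucket
heroku config:set DATABASE_URL="<value from: heroku config:get DATABASE_URL --app my-main-app>" \
    PGBACKUPS_AWS_ACCESS_KEY_ID=... PGBACKUPS_AWS_SECRET_ACCESS_KEY=... \
    PGBACKUPS_BUCKET=my-archive-bucket --app my-backup-app
# add the scheduler add-on, then create a daily job in its dashboard that runs: rake pgbackups:archive
heroku addons:add scheduler:standard --app my-backup-app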
Upvotes: 2
Reputation: 2458
The best I could come up with for now is to keep using the pgbackups add-on (which I had been using anyway), pull the latest backup from its S3 location daily, and upload it to my own bucket. Heroku exposes a PGBACKUPS_URL environment variable when this add-on is enabled. The rest goes something like this:
# boto and requests are required; AWS access credentials live in the settings file
import requests
from boto.s3.connection import S3Connection
from django.conf import settings  # assuming a Django-style settings module

url = settings.PGBACKUPS_URL + "/latest_backup"
dumpfile = "./db_dump"

# get metadata for the latest backup
r = requests.get(url)
dump_url = r.json()["public_url"]
dump_timestamp = r.json()["finished_at"].replace("/", "-")
dump_name = "db_dumps/" + dump_timestamp

# stream the dump to a local file on the dyno
r = requests.get(dump_url, stream=True)
if r.status_code == 200:
    with open(dumpfile, 'wb') as f:
        for chunk in r.iter_content():
            f.write(chunk)

# upload the dump to our own S3 bucket
conn = S3Connection(settings.AWS_ACCESS_KEY_ID, settings.AWS_SECRET_ACCESS_KEY)
bucket = conn.get_bucket(settings.AWS_DB_DUMPS_BUCKET)
key = bucket.new_key(dump_name)
key.set_contents_from_filename(dumpfile)
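To run this daily, one option (a sketch, not part of my actual setup) is to wrap the snippet above in a Django management command and point Heroku Scheduler at it; the module path, the command name and the archive_latest_dump helper below are hypothetical:
# myapp/management/commands/archive_db_dump.py  (hypothetical path and command name)
from django.core.management.base import BaseCommand

# assumes the snippet above has been moved into a function, e.g. myapp.backups.archive_latest_dump
from myapp.backups import archive_latest_dump


class Command(BaseCommand):
    help = "Copy the latest pgbackups dump to our own S3 bucket."

    def handle(self, *args, **options):
        archive_latest_dump()
Heroku Scheduler can then run python manage.py archive_db_dump once a day.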
I have yet to find out whether a backup can be triggered somehow via the PGBACKUPS_URL.
Upvotes: 1