TinkerTank

Reputation: 5815

Storing locally encrypted incremental ZFS snapshots in Amazon Glacier

To have truly off-site and durable backups of my ZFS pool, I would like to store ZFS snapshots in Amazon Glacier. The data would need to be encrypted locally, independently of Amazon, to ensure privacy. How could I accomplish this?

Upvotes: 9

Views: 2662

Answers (1)

TinkerTank

Reputation: 5815

An existing snapshot can be sent to an S3 bucket as follows:

zfs send -R <pool name>@<snapshot name> | gzip | gpg --no-use-agent  --no-tty --passphrase-file ./passphrase -c - | aws s3 cp - s3://<bucketname>/<filename>.zfs.gz.gpg
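
For illustration (the pool, snapshot and bucket names below are hypothetical, not from the original answer), a full backup of a pool named tank might look like:

# Hypothetical names: pool "tank", snapshot "backup1", bucket "my-zfs-backups"
zfs snapshot -r tank@backup1
zfs send -R tank@backup1 | gzip | gpg --no-use-agent --no-tty --passphrase-file ./passphrase -c - | aws s3 cp - s3://my-zfs-backups/tank-backup1.zfs.gz.gpg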

or for incremental backups:

zfs send -R -I <pool name>@<snapshot to do incremental backup from> <pool name>@<snapshot name> | gzip | gpg --no-use-agent  --no-tty --passphrase-file ./passphrase -c - | aws s3 cp - s3://<bucketname>/<filename>.zfs.gz.gpg
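
With the same hypothetical names, an incremental backup from backup1 to a newer snapshot backup2 might look like:

# Hypothetical names: incremental stream from tank@backup1 to tank@backup2
zfs snapshot -r tank@backup2
zfs send -R -I tank@backup1 tank@backup2 | gzip | gpg --no-use-agent --no-tty --passphrase-file ./passphrase -c - | aws s3 cp - s3://my-zfs-backups/tank-backup1-to-backup2.zfs.gz.gpg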

This command takes an existing snapshot, serializes it with zfs send, compresses it with gzip, and encrypts it with a passphrase using gpg. The passphrase must be on the first line of the ./passphrase file.

Remember to back up your passphrase file separately, in multiple locations! If you lose access to it, you'll never be able to get to your data again!
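
One possible way to create such a passphrase file (an assumption, not part of the original answer) is to generate a random passphrase and restrict the file's permissions:

# Assumption: generate a random passphrase and make the file readable only by its owner
umask 077
openssl rand -base64 32 > ./passphrase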

This requires:

  • A pre-created Amazon S3 bucket
  • awscli installed (pip install awscli) and configured (aws configure).
  • gpg installed

Lastly, S3 lifecycle rules can be used to transition the S3 object to Glacier after a pre-set amount of time (or immediately).
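
As a sketch (the bucket and rule names are hypothetical), such a rule can be created with the AWS CLI; a transition after 0 days moves objects to Glacier as soon as possible:

aws s3api put-bucket-lifecycle-configuration --bucket my-zfs-backups --lifecycle-configuration '{
  "Rules": [{
    "ID": "archive-zfs-backups",
    "Status": "Enabled",
    "Filter": {"Prefix": ""},
    "Transitions": [{"Days": 0, "StorageClass": "GLACIER"}]
  }]
}'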


For restoring:

aws s3 cp s3://<bucketname>/<filename>.zfs.gz.gpg - | gpg --no-use-agent --passphrase-file ./passphrase -d - | gunzip | sudo zfs receive <new dataset name> 
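
Note that once an object has transitioned to the Glacier storage class, it must first be temporarily restored to S3 before it can be downloaded. With the same hypothetical names, that might look like:

# Request a temporary copy (kept for 7 days here) before running the aws s3 cp command above
aws s3api restore-object --bucket my-zfs-backups --key tank-backup1.zfs.gz.gpg --restore-request '{"Days": 7, "GlacierJobParameters": {"Tier": "Standard"}}'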

Upvotes: 12
