Reputation: 115
I am trying to save a PostgreSQL backup (~20 TB) to Google Cloud Storage for long-term storage, and I am currently piping the output of pg_dump into a streaming transfer through gsutil:
pg_dump -d $DB_NAME -b --format=t \
| gsutil cp - gs://$BUCKET_NAME/$BACKUP_FILE
However, I am worried that the transfer will fail because of GCS's 5 TB object size limit.
Is there any way to upload objects larger than 5 TB to Google Cloud Storage? Could split help? I am considering piping pg_dump through Linux's split utility and then to gsutil cp:
pg_dump -d $DB -b --format=t \
| split -b 50G - \
| gsutil cp - gs://$BUCKET/$BACKUP
Would something like that work?
Upvotes: 1
Views: 956
Reputation: 38369
You generally don't want to upload a single object in the multi-terabyte range with a streaming transfer. Streaming transfers have two major downsides, and they're both very bad news for you:
1. Streaming transfers can't be resumed or retried: if anything fails partway through, you have to start the entire 20 TB dump over from the beginning.
2. Streaming transfers can't be checksummed before the upload begins, so Cloud Storage can't validate the integrity of the data the way it can for a regular upload from a file.
Instead, here's what I would suggest: split the dump into pieces that each stay well under the 5 TB limit and upload every piece as its own object.
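As a rough sketch of that idea (not necessarily the exact commands the answer had in mind), GNU split's --filter option can hand each chunk to its own gsutil cp invocation; the 1 TB chunk size is arbitrary, and $DB_NAME, $BUCKET_NAME, and $BACKUP_FILE are the variables from the question:
# Each 1 TB chunk is uploaded as a separate object named $BACKUP_FILE.part-aa, .part-ab, ...
# GNU split sets $FILE to the chunk's name before running the --filter command.
pg_dump -d $DB_NAME -b --format=t \
  | split -b 1T - "$BACKUP_FILE.part-" \
      --filter="gsutil cp - gs://$BUCKET_NAME/\$FILE"
Note that each chunk is still streamed, so a failure mid-dump still means re-running pg_dump from the start; if you have ~20 TB of scratch space, dumping to disk first lets you retry and checksum each piece independently.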
Upvotes: 2
Reputation: 801
As mentioned by Ferregina Pelona, guillaume blaquiere, and John Hanley, there is no way to bypass the 5 TB limit implemented by Google, as stated in this document:
Cloud Storage 5TB object size limit
Cloud Storage supports a maximum single-object size up to 5 terabytes. If you have objects larger than 5TB, the object transfer fails for those objects for either Cloud Storage or Transfer for on-premises.
If the file surpasses the limit (5 TB), the transfer fails.
You can use Google's issue tracker to request this feature; at the link provided, you can review the features that have already been requested or file a new feature request that meets your needs.
Upvotes: 1