Reputation: 151
I'm trying to make a Python script that lets me create/delete/list vaults, execute jobs, list/describe vault jobs, upload/download archives, and create and retrieve inventories. I'm using the awscli and boto3 Python 3 packages.
I'm facing something quite weird. Uploading a file from the command line works as follows:
aws glacier upload-archive --vault-name <vault_name> --archive-description <archive_name> --body <file_2_upload> --account-id -
where vault_name, archive_name, and file_2_upload are variables passed when executing the script, and the account ID is set up before script execution with the aws configure command.
It takes a while depending on the file size, but it works as expected.
When trying with my script (this is the part in charge of backups):
glacier = boto3.client('glacier')
upload = glacier.upload_archive(vaultName=vault, body=archive, archiveDescription=name)
The response is almost immediate, and the output shows something like this:
HTTP Code: 201
which means the operation was successful.
However, my vault size doesn't increase, and I cannot find the file in it.
What am I missing?
Thanks in advance
Upvotes: 1
Views: 956
Reputation: 151
OK, I've finally found the reason why it was not working: I was passing the file path string as the body, but I needed to load the file (or file part) into memory, as follows:
with open(archive, 'rb') as upload:
    archive_upload = upload.read(size)  # size = part size in bytes; call read() with no argument to load the whole file
glacier_upload = glacier.upload_archive(vaultName=vault, archiveDescription=name, body=archive_upload)
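For anyone wondering why the original call silently misbehaved: `upload_archive` expects `body` to be raw bytes or a seekable file-like object, not a filename string. A minimal sketch of both valid forms, using a throwaway temp file as a placeholder for the real archive (the commented-out `glacier` call assumes a configured boto3 client as in the question):

```python
import os
import tempfile

# Placeholder stand-in for the real archive path.
with tempfile.NamedTemporaryFile(suffix='.tar.gz', delete=False) as tmp:
    tmp.write(b'archive contents')
    archive = tmp.name

# Option 1: read the whole file into a bytes object (the fix above).
with open(archive, 'rb') as fh:
    payload = fh.read()

# Option 2: pass the open file object itself. boto3 accepts a
# seekable file-like object for body=, so the archive does not
# have to be read into memory first:
#
#   with open(archive, 'rb') as fh:
#       glacier.upload_archive(vaultName=vault,
#                              archiveDescription=name, body=fh)

print(type(payload).__name__)  # bytes
os.remove(archive)
```

Passing the open file object is preferable for large archives, since it avoids holding the entire file in memory at once.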
I hope this helps anyone facing the same issue.
Cheers
Upvotes: 1