Reputation: 51
I have backup files in different directories on one drive. Files in those directories can be quite big, up to 800 GB or so. I have a batch file with a set of commands that upload/sync the files to S3.
See example below:
aws s3 sync R:\DB_Backups3\System s3://usa-daily/System/ --exclude "*" --include "*/*/Diff/*"
The upload time can vary, but so far so good.
My question is: how do I edit the script, or create a new one, that checks in the S3 bucket that the files have been uploaded and, ONLY if they have, deletes them from the local drive? If not, they should be left on the drive.
(Ideally it would check each file)
I'm not familiar with aws s3 or an aws cli command that can do that. Please let me know if I made myself clear or if you need more details.
Any help will be very appreciated.
Upvotes: 3
Views: 4410
Reputation: 202642
As the answer by @ketan shows, the Amazon aws client can do a batch move with aws s3 mv. Alternatively, you can use the WinSCP put -delete command:
winscp.com /log=S3.log /ini=nul /command ^
"open s3://S3KEY:[email protected]/" ^
"put -delete C:\local\path\* /bucket/" ^
"exit"
You need to URL-encode special characters in the credentials. WinSCP GUI can generate an S3 script template, like the one above, for you.
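For example, if the secret key were abc/def+ghi (a made-up value for illustration), you would encode the / as %2F and the + as %2B in the session URL:
"open s3://S3KEY:abc%2Fdef%2Bghi@s3.amazonaws.com/" ^
(When the command runs from a .bat file, each % has to be doubled, i.e. %%2F and %%2B.)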
Alternatively, since WinSCP 5.19, you can use the -username and -password switches, which do not need any encoding:
"open s3://s3.amazonaws.com/ -username=S3KEY -password=S3SECRET" ^
(I'm the author of WinSCP)
Upvotes: 1
Reputation: 51
Best would be to use mv with the --recursive parameter for multiple files.
When passed the --recursive parameter, the following mv command recursively moves all files under a specified directory to a specified bucket and prefix, while excluding some files by using an --exclude parameter. In this example, the directory myDir has the files test1.txt and test2.jpg:
aws s3 mv myDir s3://mybucket/ --recursive --exclude "*.jpg"
Output:
move: myDir/test1.txt to s3://mybucket/test1.txt
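Applied to the paths from your question (a sketch reusing the filters from your sync command; mv deletes each local file only after its upload succeeds):
aws s3 mv R:\DB_Backups3\System s3://usa-daily/System/ --recursive --exclude "*" --include "*/*/Diff/*"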
Hope this helps.
Upvotes: 5