Reputation: 6678
I just found my box has only 5% of HDD space left, and I have almost 250GB of MySQL bin files that I want to send to S3. We have moved from MySQL to NoSQL and are not currently using MySQL, but I would love to preserve the old data before the migration.
The problem is I can't just tar the files in a loop before sending them there, so I was thinking I could gzip on the fly before sending, so the compressed file is never stored on the HDD.
for i in * ; do cat i | gzip -9c | s3cmd put - s3://mybudcket/mybackups/$i.gz; done
To test this command, I ran it without the loop and it didn't send anything, but it didn't complain about anything either. Is there any way of achieving this?
OS is Ubuntu 12.04; s3cmd version is 1.0.0.
Thank you for your suggestions.
Upvotes: 1
Views: 3029
Reputation: 1428
Alternatively you can use https://github.com/minio/mc. Minio Client, aka mc, is written in Golang and released under the Apache License Version 2. It implements an mc pipe command for streaming data directly to Amazon S3, and mc pipe can also pipe to multiple destinations in parallel. Internally, mc pipe streams the output and does the multipart upload in parallel.
$ mc pipe
NAME:
mc pipe - Write contents of stdin to files. Pipe is the opposite of cat command.
USAGE:
mc pipe TARGET [TARGET...]
#!/bin/bash
for i in *; do
  # stream each file through gzip straight into the bucket; nothing is written to disk
  mc cat "$i" | gzip -9c | mc pipe "https://s3.amazonaws.com/mybudcket/mybackups/$i.gz"
done
As you can see, mc also implements an mc cat command :-).
Upvotes: 3
Reputation: 21
The ability to read from stdin and upload to S3 was added to the master branch in February 2014, so make sure your version is newer than that. Version 1.0.0 dates from 2011, and the current version (at the time of this writing) is 1.5.2, so you likely need to update your s3cmd.
Other than that, according to https://github.com/s3tools/s3cmd/issues/270 this should work, except that your "do cat i" is missing the $ sign needed to expand it as a variable.
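For reference, once s3cmd is updated, the corrected loop would look something like this (a sketch only; it assumes the bucket and path from your question and an s3cmd new enough to accept - as the upload source):
for i in * ; do
  # expand $i and stream the gzipped data straight to S3, nothing stored locally
  cat "$i" | gzip -9c | s3cmd put - "s3://mybudcket/mybackups/$i.gz"
done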
Upvotes: 2