Reputation: 7932
I have a 10 GB .tar file on S3, and I want to extract it and keep the extracted files on S3.
Is there a simple command I can run against S3?
Or do I have to extract the file locally and upload the individual files back to S3 myself?
Thanks
Upvotes: 14
Views: 14917
Reputation: 492
You can do this from the AWS CLI, or the newer AWS CloudShell, with a command like
aws s3 cp s3://bucket/data.tar.gz - | tar -xz --to-command='aws s3 cp - s3://bucket/$TAR_REALNAME'
Note that all those dangling '-' characters are important: they tell aws s3 cp to write to stdout and read from stdin, so the archive is streamed through tar without ever touching the local disk.
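If the archive is a plain .tar rather than a .tar.gz, the same approach works without the -z flag. A sketch, assuming a hypothetical bucket my-bucket, an archive data.tar, and an extracted/ prefix for the output:

# Stream the archive from S3 and re-upload each extracted file under the extracted/ prefix
aws s3 cp s3://my-bucket/data.tar - | tar -x --to-command='aws s3 cp - "s3://my-bucket/extracted/$TAR_REALNAME"'

GNU tar runs the --to-command through a shell with TAR_REALNAME set in its environment, which is why the single quotes around the command are needed.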
Upvotes: 15
Reputation: 2167
Alternatively, you can mount the S3 bucket on an EC2 instance using s3fs.
Here is a link with more detail on how to mount it: https://cloudkul.com/blog/mounting-s3-bucket-linux-ec2-instance/
Once mounted, you can read and write files on S3 just as you would on a local disk. A rough sketch is shown below.
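A minimal sketch of the s3fs route, assuming a hypothetical bucket my-bucket, an EC2 instance whose IAM role grants S3 access, and a mount point at /mnt/s3:

# Install s3fs and mount the bucket using the instance's IAM role
sudo apt-get install -y s3fs
sudo mkdir -p /mnt/s3
sudo s3fs my-bucket /mnt/s3 -o iam_role=auto -o allow_other

# Extract the archive straight into the mounted bucket
mkdir -p /mnt/s3/extracted
tar -xf /mnt/s3/data.tar -C /mnt/s3/extracted

Keep in mind that every extracted file becomes a separate PUT to S3 through the FUSE layer, so this can be slow for archives containing many small files.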
Upvotes: 0
Reputation: 269480
There is no command to manipulate file contents on Amazon S3.
You will need to download the file, untar/unzip it, then upload the content to S3.
This will be fastest from an Amazon EC2 instance in the same region as the bucket. You could potentially write an AWS Lambda function to do this too, but beware of Lambda's 512 MB /tmp disk space limit.
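For the EC2 route, a minimal sketch of the download/extract/upload sequence (the bucket name, key, and local paths are placeholders):

# Download the archive, extract it locally, then upload the contents back to S3
aws s3 cp s3://my-bucket/data.tar /tmp/data.tar
mkdir -p /tmp/extracted
tar -xf /tmp/data.tar -C /tmp/extracted
aws s3 cp /tmp/extracted s3://my-bucket/extracted/ --recursive

Make sure the instance has enough local disk for both the archive and its extracted contents, roughly 20 GB or more for a 10 GB tar file.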
Upvotes: 3