Reputation: 11
Am somewhat new to using Amazon S3 for image storage and just wondering the easiest way to import about 5,000 (currently publicly available) image URLs into S3, so that the end result is the images being hosted on our S3 account (with resulting new URLs for the same images).
Am wondering if there is a need to first download and save each image on my computer, and then upload the resulting images to S3 (which would of course result in new URLs for those images)? Or is there an easier way to accomplish this?
Thanks for any suggestions.
Upvotes: 1
Views: 243
Reputation: 7834
The AWS CLI for S3 doesn’t support copying a file from a remote URL to S3, but you can copy from a local filestream to S3.
Happily, cURL outputs the contents of URLs as streams by default, so the following command will stream a remote URL to a location in S3 via the local machine:
# The "-" means "stream from stdin".
curl https://example.com/some/file.ext | aws s3 cp - s3://bucket/path/file.ext
Caveat: while this doesn’t require a local temporary file, it does ship all of the file bytes through the machine where the command is running as a relay to S3.
Add whatever other options for the S3 cp command that you need, such as --acl.
Threaded multipart transfers with chunk management will probably be more performant but this is a quick solution in a pinch if you have access to the CLI.
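Since the question mentions roughly 5,000 URLs, the one-liner above can be wrapped in a loop. This is a minimal sketch, assuming a file urls.txt with one public URL per line, and using a placeholder bucket name and key prefix that you would replace with your own:

```shell
#!/bin/sh
# Stream each listed URL into S3 without saving a local copy.
# Assumptions: urls.txt holds one URL per line; "my-bucket" and the
# "images/" prefix are placeholders; the AWS CLI is configured.
while IFS= read -r url; do
  key="images/$(basename "$url")"   # derive the object key from the URL's last path segment
  curl -fsSL "$url" | aws s3 cp - "s3://my-bucket/$key"
done < urls.txt
```

Note that deriving the key with basename will silently overwrite objects if two URLs end in the same filename, so a deduplicating key scheme may be needed for a large batch.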
Upvotes: 1
Reputation: 1376
S3 is a “dumb” object store, meaning that it cannot do any data processing itself, including but not limited to downloading files from other servers. So yes, you’ll have to download your files first and then upload them to S3.
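The download-then-upload approach described above can be sketched as follows, assuming a configured AWS CLI and a placeholder URL and bucket name:

```shell
#!/bin/sh
# Download an image to a temporary file, upload it to S3, then clean up.
# "https://example.com/some/photo.jpg" and "my-bucket" are placeholders.
url="https://example.com/some/photo.jpg"
tmp="$(mktemp)"                      # temporary local copy
curl -fsSL "$url" -o "$tmp"          # download the image
aws s3 cp "$tmp" "s3://my-bucket/images/photo.jpg"
rm -f "$tmp"                         # remove the local copy
```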
Upvotes: 0