Reputation: 598
I have written an implementation for generating pre-signed URLs for a bucket on AWS S3. It works fine for getting single files/objects.
How would I go about generating pre-signed URLs for entire directories? Let's put it this way: on my S3 bucket, there are multiple folders, each containing its own small HTML5 application. Each folder has its own set of HTML, CSS, JS, and media files. I wouldn't be generating a pre-signed URL for a single object in this case.
If I give out a pre-signed URL for a single file, for example the index.html of a folder, that file would also need to load CSS, JS, and media files, which we don't have signed URLs for.
I'm just not sure how to go about implementing this.
Upvotes: 51
Views: 70678
Reputation: 954
This is absolutely possible and has been for years. You must use conditions when generating a presigned URL, specifically starts-with. See the official Amazon documentation.
As an example, here is Python with Boto3 generating a presigned POST URL:
import boto3

s3 = boto3.client("s3")

# One presigned POST whose policy accepts any key under the uploads/ prefix
response = s3.generate_presigned_post(
    "BUCKET_NAME",
    "uploads/${filename}",
    Fields=None,
    Conditions=[["starts-with", "$key", "uploads/"]],
    ExpiresIn=(10 * 60),
)
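The response contains a URL and a set of form fields; because of the starts-with condition, a client can reuse them to upload any object whose key begins with uploads/. A rough usage sketch (using the requests library, with a hypothetical key and local file):

import requests

# Override the key with anything that satisfies the starts-with condition
fields = {**response["fields"], "key": "uploads/app1/index.html"}

with open("index.html", "rb") as f:
    # requests places the file part after the form fields, as S3 requires
    upload = requests.post(response["url"], data=fields, files={"file": ("index.html", f)})

upload.raise_for_status()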
Upvotes: 56
Reputation: 148
This generates a pre-signed URL for an existing object in S3. In your case, you could list all the objects (by key name) and call the method below for each one.
public String generatePreSignedUrl(String bucketName, String keyName) {
    // URL stays valid for one minute from now
    Date expiration = new Date(System.currentTimeMillis() + 60_000L);

    GeneratePresignedUrlRequest generatePresignedUrlRequest =
            new GeneratePresignedUrlRequest(bucketName, keyName);
    generatePresignedUrlRequest.setMethod(HttpMethod.GET);
    generatePresignedUrlRequest.setExpiration(expiration);

    // s3 is an AmazonS3 client configured elsewhere
    URL url = s3.generatePresignedUrl(generatePresignedUrlRequest);
    return url.toString();
}
PS: The AWS SDK provides an API to list the objects in a bucket, so you can pick out the objects you need.
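For illustration, here is a minimal sketch of that loop in Python with boto3 (the bucket and prefix names are hypothetical), collecting one presigned GET URL per object under a prefix:

import boto3

s3 = boto3.client("s3")
bucket, prefix = "my-bucket", "app1/"  # hypothetical names

# One presigned GET URL per object under the prefix
urls = {}
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
    for obj in page.get("Contents", []):
        urls[obj["Key"]] = s3.generate_presigned_url(
            "get_object",
            Params={"Bucket": bucket, "Key": obj["Key"]},
            ExpiresIn=600,
        )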
Upvotes: 0
Reputation: 23
For anyone else looking at this issue in 2021: the easiest way I found to download an entire "folder" of files from an S3 bucket was to change how I uploaded the "folder" in the first place. Instead of uploading the files recursively, I zip them into a single archive and upload that. It doesn't directly answer the question, but it is a quick workaround. Here is the example Python code I used:
from shutil import make_archive
from boto3 import client

# Creates name_of_zipped_folder.zip from everything under path_to_outdir
make_archive(name_of_zipped_folder, 'zip', path_to_outdir)
Then upload the zipped archive to S3:
aws s3 cp name_of_zipped_folder.zip s3://bucket_name/name_of_zipped_folder.zip
Then use boto3 to generate a presigned URL for that archive:
url = client('s3', region_name='us-east-1').generate_presigned_url(
    'get_object',
    Params={'Bucket': bucket_name, 'Key': name_of_zipped_folder + '.zip'},
    ExpiresIn=120,
)
Upvotes: -1
Reputation: 3599
No, AWS would first need to provide an API that allows you to upload multiple files in a single request. This is a limitation of the API, not of pre-signing.
See Is it possible to perform a batch upload to amazon s3?.
Upvotes: 24
Reputation: 35099
First things first:
AWS S3 is a key-value store; an object such as aaa/bbb/ccc/ddd/index.html is just one key. There is no concept of "folders" (even though the console UI might give you the impression that they exist).
In order to create a single presigned URL for multiple "files", you have to do some preprocessing: pull all the necessary files locally, zip them, put the zip archive on S3, and then generate a presigned URL for the zip archive.
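A minimal sketch of that preprocessing in Python with boto3 (bucket, prefix, and archive names are hypothetical):

import os
from shutil import make_archive
import boto3

s3 = boto3.client("s3")
bucket, prefix = "my-bucket", "app1/"  # hypothetical names

# 1. Pull every object under the prefix into a local working directory
for page in s3.get_paginator("list_objects_v2").paginate(Bucket=bucket, Prefix=prefix):
    for obj in page.get("Contents", []):
        if obj["Key"].endswith("/"):
            continue  # skip zero-byte "folder" placeholders
        local_path = os.path.join("work", obj["Key"])
        os.makedirs(os.path.dirname(local_path), exist_ok=True)
        s3.download_file(bucket, obj["Key"], local_path)

# 2. Zip the working directory and put the archive back on S3
make_archive("app1", "zip", "work")
s3.upload_file("app1.zip", bucket, "archives/app1.zip")

# 3. Presign the single zip object
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": bucket, "Key": "archives/app1.zip"},
    ExpiresIn=3600,
)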
Upvotes: 2