Koder

Reputation: 1914

Limit Size Of Objects While Uploading To Amazon S3 Using Pre-Signed URL

I know how to limit the upload size of an object using this method: http://doc.s3.amazonaws.com/proposals/post.html#Limiting_Uploaded_Content

But I would like to know how it can be done while generating a pre-signed URL with the S3 SDK on the server side, as an IAM user.

This SDK method has no such option in its parameters: http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#putObject-property

Nor does this one: http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#getSignedUrl-property

Please note: I already know of this answer: AWS S3 Pre-signed URL content-length and it is NOT what I am looking for.

Upvotes: 61

Views: 46105

Answers (5)

Antonio

Reputation: 1290

Straight to the point: in April 2022, AWS stated on the relevant feature request:

Quick update: this feature is in our backlog, but we don't have a timeline for it yet.
Add a 👍 to the original description if you want to show your support, it helps us with prioritization.

More details: the S3 SDK doesn't allow restricting the size of a file uploaded via a presigned URL, not least because it doesn't allow POST requests, and the S3 team may not have concrete plans to change this.

This is my take:

  • You (and all of us) would solve the problem by restricting the permissions of a given presigned URL using a POST policy. As the name suggests, this means your client sends a policy via a POST request. What's the difference? Unlike a PUT, an HTTP POST lets you attach more structured information to the request. So in a plain POST request to Amazon, you can also send along a policy, by attaching the proper fields and values to the POST's form. When would you attach a POST policy? When you ask AWS for something, for instance the blessed presigned URL, specifying a contextual policy along the way.
  • However, while you can generate upload URLs via the SDK by sending an HTTP PUT to S3, you cannot use a POST for this. This is the first deal breaker. Indeed, one uses something like S3Presigner.presignPutObject(PutObjectRequest). There is currently no support for PostObjectRequest or anything similar. I am speaking of the Java SDK v2. There is a long-standing feature request for this, but it doesn't seem to get much priority.
  • With the Java SDK v1 there was a GeneratePresignedUrlRequest which could send POST methods, via GeneratePresignedUrlRequest(bucketName, key).request.setMethod(HttpMethod.POST);. Yet, even juggling with the old SDK, I couldn't find a way to attach the fields to the request to define a POST policy. This muddies the waters.
  • To confuse things a bit more, it seems that languages other than Java have more luck, like JavaScript.

So it seems we are almost there, but never quite there. I added my thumbs up 👍 to the feature request, of course, and you might want to do the same if you have read this far.

Upvotes: 0

Oscar Chen

Reputation: 636

You can specify the min and max sizes in bytes using a condition called content-length-range:

{
  "expiration": "2022-02-14T13:08:46.864Z",
  "conditions": [
    { "acl": "bucket-owner-full-control" },
    { "bucket": "my-bucket" },
    ["starts-with", "$key", "stuff/clientId"],
    ["content-length-range", 1048576, 10485760]
  ]
}
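As a small illustration, a client can pre-check a file's size against this condition before attempting the upload, so obviously oversized files fail fast (S3 still enforces the real limit server-side). A minimal sketch, assuming the policy document above:

```javascript
// Client-side pre-check against the policy's content-length-range.
function withinContentLengthRange(policy, sizeBytes) {
  const range = policy.conditions.find(
    (c) => Array.isArray(c) && c[0] === "content-length-range"
  );
  if (!range) return true; // no size condition in this policy
  const [, min, max] = range;
  return sizeBytes >= min && sizeBytes <= max;
}

const policy = {
  expiration: "2022-02-14T13:08:46.864Z",
  conditions: [
    { acl: "bucket-owner-full-control" },
    { bucket: "my-bucket" },
    ["starts-with", "$key", "stuff/clientId"],
    ["content-length-range", 1048576, 10485760],
  ],
};

console.log(withinContentLengthRange(policy, 5 * 1024 * 1024)); // 5 MiB: allowed
console.log(withinContentLengthRange(policy, 1024));            // under 1 MiB: rejected
```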

Upvotes: 2

Janac Meena

Reputation: 3577

For any other wanderers who end up on this thread - if you set the Content-Length attribute when sending the request from your client, there are a few possibilities:

  1. The Content-Length is calculated automatically, and S3 will store up to 5GB per file

  2. The Content-Length is manually set by your client, which means one of these three scenarios will occur:

  • The Content-Length matches your actual file size and S3 stores it.
  • The Content-Length is less than your actual file size, so S3 will truncate your file to fit it.
  • The Content-Length is larger than your actual file size, and you will receive a 400 Bad Request.

In any case, a malicious user can bypass your client and manually send an HTTP request with whatever headers they want, including a much larger Content-Length than you may be expecting. Signed URLs do not protect against this! The only way to enforce a size limit is to set up a POST policy. Official docs here: https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-HTTPPOSTConstructPolicy.html

More details here: https://janac.medium.com/sending-files-directly-from-client-to-amazon-s3-signed-urls-4bf2cb81ddc3?postPublishedType=initial

Alternatively, you can have a Lambda that automatically deletes files that are larger than expected.

Upvotes: 8

adamkonrad

Reputation: 7122

You may not be able to limit content upload size ex-ante, especially considering POST and multipart uploads. You could use AWS Lambda to create an ex-post solution: set up a Lambda function to receive notifications from the S3 bucket, have the function check the object size, and have it delete the object or take some other action.

Here's some documentation on Handling Amazon S3 Events Using AWS Lambda.

Upvotes: 7

user1055568

Reputation: 1429

The V4 signing protocol offers the option to include arbitrary headers in the signature; see http://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-query-string-auth.html. So if you know the exact Content-Length in advance, you can include it in the signed URL. Based on some experiments with cURL, S3 will truncate the file if you send more than specified in the Content-Length header. Here is an example V4 signature with multiple headers in the signature: http://docs.aws.amazon.com/general/latest/gr/sigv4-add-signature-to-request.html

Upvotes: 32
