Reputation: 351
I'm trying to impose a simple safeguard at the level of the S3 bucket as a whole, so that it will simply reject attempted file uploads that exceed a specified limit (say, 100 MB).
Can this be done via an S3 bucket policy?
Thanks
Upvotes: 0
Views: 47
Reputation: 11604
There is no option in an S3 bucket policy to achieve what you wish. However, you have the following options:
Use CloudFront + Lambda@Edge: Lambda@Edge will reject oversized requests before they reach S3 (see the sketch after this list).
Use S3 multipart uploads: this requires custom logic on your upload side. Call ListMultipartUploads, then for each multipart upload call ListParts to get its parts and add up their sizes (see the boto3 sketch below). Note that this only tells you how much has been uploaded so far, not the final size.
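For option 1, here is a minimal sketch of what the Lambda@Edge function could look like, assuming a Python runtime and a viewer-request trigger; the 100 MB limit and handler name are placeholders:

# Hypothetical Lambda@Edge viewer-request handler (Python runtime).
# Rejects uploads whose Content-Length header exceeds the limit
# before the request is forwarded to the S3 origin.

MAX_BYTES = 100 * 1024 * 1024  # 100 MB limit (adjust as needed)

def handler(event, context):
    request = event['Records'][0]['cf']['request']
    headers = request.get('headers', {})

    # CloudFront lower-cases header names; each value is a list of dicts.
    content_length = headers.get('content-length', [{}])[0].get('value')

    if content_length and int(content_length) > MAX_BYTES:
        # Returning a response object short-circuits the request,
        # so it never reaches the S3 origin.
        return {
            'status': '413',
            'statusDescription': 'Payload Too Large',
            'body': 'Upload exceeds the 100 MB limit.',
        }

    # Otherwise let the request continue to the origin unchanged.
    return request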
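For option 2, a rough boto3 sketch of the ListMultipartUploads/ListParts bookkeeping; the bucket name and limit are placeholders and pagination is omitted:

# Sum the bytes uploaded so far for every in-progress multipart upload.
import boto3

MAX_BYTES = 100 * 1024 * 1024  # 100 MB limit (adjust as needed)
BUCKET = 'my-bucket'           # placeholder bucket name

s3 = boto3.client('s3')

def uploaded_bytes_so_far(bucket):
    """Return {key: bytes uploaded so far} for in-progress multipart uploads."""
    totals = {}
    uploads = s3.list_multipart_uploads(Bucket=bucket).get('Uploads', [])
    for upload in uploads:
        parts = s3.list_parts(
            Bucket=bucket, Key=upload['Key'], UploadId=upload['UploadId']
        ).get('Parts', [])
        totals[upload['Key']] = sum(part['Size'] for part in parts)
    return totals

# Example: flag any upload that has already exceeded the limit.
for key, size in uploaded_bytes_so_far(BUCKET).items():
    if size > MAX_BYTES:
        print(f'{key}: {size} bytes uploaded so far - over the limit')

As noted above, this only reports what has been uploaded so far; you would still need to decide what to do once the limit is crossed (for example, call AbortMultipartUpload).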
Upvotes: 1
Reputation: 171
I don't think file uploads can be restricted by file size using S3 bucket policies alone (at least I couldn't find a way).
But I had a similar requirement in one app, so I'll explain what I did; maybe it helps you too.
We cannot enforce this with S3 policies (I was using MinIO, which is compatible with the S3 APIs), but if you have an API server that calls these S3 APIs, you can verify the file size in the API endpoint that makes the S3 call and reject the upload there (a sketch follows below).
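For example, a minimal sketch of that kind of check, assuming a Flask endpoint in front of S3/MinIO; the route, bucket name, and limit are made up for illustration:

# Hypothetical API endpoint that checks the upload size before calling S3.
import boto3
from flask import Flask, request, abort

MAX_BYTES = 100 * 1024 * 1024  # 100 MB limit (adjust as needed)
BUCKET = 'my-bucket'           # placeholder bucket name

app = Flask(__name__)
s3 = boto3.client('s3')        # point endpoint_url at MinIO if self-hosting

@app.route('/upload/<key>', methods=['PUT'])
def upload(key):
    # Reject before touching S3 if the declared size is missing or too big.
    if request.content_length is None or request.content_length > MAX_BYTES:
        abort(413)  # Payload Too Large
    s3.put_object(Bucket=BUCKET, Key=key, Body=request.get_data())
    return 'ok', 200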
Otherwise, you can also achieve this in a reverse proxy like Nginx if you are self-hosting (or have control over the hosting of) the S3 instance. I was self-hosting the MinIO instance behind Nginx as a reverse proxy, so I simply added the following to the Nginx configuration, which causes any upload larger than the allowed size to be rejected by Nginx directly.
# NGINX Config
# ============
server {
    location / {
        client_max_body_size 100M;  # <-- Change this to control the allowed upload size
        proxy_pass http://<MINIO_SERVER_HOST>:<MINIO_PORT>;
    }
}
I hope it helps.
Upvotes: 0