Reputation: 2043
I've generated a presigned S3 POST URL. I pass the returned parameters into my code, but I keep getting this error: "Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource."
Whereas in Postman, I'm able to submit the form-data with one attached file. In Postman I entered the parameters manually; the same parameters are then entered into my code.
Upvotes: 86
Views: 86817
Reputation: 16820
For security reasons, if you want to allow downloads only from specific websites, whitelist just those origins.
Example:
[
    {
        "AllowedHeaders": [],
        "AllowedMethods": [
            "GET"
        ],
        "AllowedOrigins": [
            "https://example.com",
            "https://example2.com"
        ],
        "ExposeHeaders": []
    }
]
With this configuration, both example.com and example2.com can download files via the presigned URLs without encountering CORS errors.
Upvotes: 0
Reputation: 23
I recommend trying the HTTP request in multiple browsers. In Firefox I was getting CORS errors even though everything CORS-wise was set up correctly.
Then in Chrome I got a 400 Bad Request.
Based on the form example in the boto3 docs here: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/s3-presigned-urls.html
My solution was simply to append the file as the LAST key-value pair in the FormData object.
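To illustrate the ordering, here is a sketch (the field values are placeholders) showing that Python's requests library, like the boto3 docs example, emits the `data` fields before the `files` part, so the file lands last in the multipart body; S3 ignores any form field that appears after the file:

```python
import requests

# 'fields' stands in for the dict returned by generate_presigned_post.
fields = {"key": "uploads/photo.jpg", "policy": "PLACEHOLDER", "x-amz-signature": "PLACEHOLDER"}

# Build (but do not send) the request to inspect the multipart body.
req = requests.Request(
    "POST",
    "https://my-bucket.s3.amazonaws.com/",
    data=fields,
    files={"file": ("photo.jpg", b"fake image bytes")},
).prepare()

body = req.body.decode("latin-1")
# The 'file' part appears after all the policy fields:
print(body.index('name="key"') < body.index('name="file"'))  # True
```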
Upvotes: 1
Reputation: 1155
I encountered this error when my bucket name contained dots (.), like cdn.dev.company.com (which was used with Cloudflare (not AWS CloudFront) as a CDN for serving media files). Below is the Python snippet the backend used to generate presigned URLs (which the frontend then used to upload video files directly to the S3 bucket). Check the comment next to the "client" variable. With that configuration it worked well (you also need to add a CORS policy in the bucket details, as already described in this thread).
import boto3
from botocore.client import Config
from django.conf import settings

session = boto3.Session(
    aws_access_key_id=settings.AWS_ACCESS_KEY_ID,
    aws_secret_access_key=settings.AWS_SECRET_ACCESS_KEY,
    region_name=settings.AWS_S3_REGION_NAME,
)

# you need to define endpoint_url and addressing_style as virtual in order
# to generate URLs that are friendly for web browsers and work well with CORS
client = session.client(
    "s3",
    endpoint_url=f"https://s3.{settings.AWS_S3_REGION_NAME}.amazonaws.com",
    config=Config(s3={"addressing_style": "virtual"}),
)

bucket = settings.AWS_STORAGE_BUCKET_NAME
key = "file.png"
upload_id = "zxcv"  # placeholder; comes from create_multipart_upload
part_number = 1  # must be an integer, not a string
default_url_expiration = 1200

url = client.generate_presigned_url(
    ClientMethod="upload_part",
    Params={
        "Bucket": bucket,
        "Key": key,
        "UploadId": upload_id,
        "PartNumber": part_number,
    },
    ExpiresIn=default_url_expiration,
    HttpMethod="PUT",
)
Upvotes: 0
Reputation: 1
In my case the URL was written as https=/www.xxx-qa.com. I changed it to https://www.xxx-qa.com and the issue was resolved.
Upvotes: 0
Reputation: 261
I used boto3 to add the CORS policy, and this is what worked for me, using the logic from @Pranav Joglekar:
cors_configuration = {
    'CORSRules': [{
        'AllowedHeaders': ['*'],
        'AllowedMethods': ['GET', 'PUT', 'POST'],
        'AllowedOrigins': ['*'],
        'ExposeHeaders': [],
        'MaxAgeSeconds': 3000
    }]
}
s3_client = get_s3_client()
s3_client.put_bucket_cors(
    Bucket='my_bucket_name',
    CORSConfiguration=cors_configuration,
)
Upvotes: 2
Reputation: 755
Check the URL encoding. I had a URL-encoded version of the presigned URL, and it failed until I decoded it.
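A sketch of stripping one layer of percent-encoding with the standard library (the URL and signature are invented examples); a presigned URL that has been percent-encoded a second time will fail signature validation:

```python
from urllib.parse import unquote

# A presigned URL whose '?' and '=' were percent-encoded by an extra
# round of URL encoding somewhere in the pipeline:
encoded = "https://my-bucket.s3.amazonaws.com/file.jpg%3FX-Amz-Signature%3Dabc123"

# Decoding one layer restores the query string S3 expects.
decoded = unquote(encoded)
print(decoded)  # https://my-bucket.s3.amazonaws.com/file.jpg?X-Amz-Signature=abc123
```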
Upvotes: 0
Reputation: 11
I was getting similar CORS errors even with everything properly configured.
Thanks to this answer, I discovered that my Lambda@Edge that presigns was using a region that wasn't the right one for this bucket (it was on us-east-1 because of some default stack reason).
So I had to be explicit about the region when generating the presigned POST.
reference: https://stackoverflow.com/a/13703595/11832970
Upvotes: 1
Reputation: 720
We have to specify only the required HTTP method. We were using the POST method for the presigned URL, so we removed "GET" and "PUT" from "AllowedMethods":
[
    {
        "AllowedHeaders": [
            "*"
        ],
        "AllowedMethods": [
            "POST"
        ],
        "AllowedOrigins": [
            "*"
        ],
        "ExposeHeaders": []
    }
]
Upvotes: -1
Reputation: 645
In my case I fixed it by setting AllowedMethods and AllowedOrigins in S3. The setting is under the bucket's Permissions tab:
[
    {
        "AllowedHeaders": [
            "*"
        ],
        "AllowedMethods": [
            "GET",
            "PUT",
            "POST"
        ],
        "AllowedOrigins": [
            "*"
        ],
        "ExposeHeaders": []
    }
]
Upvotes: 25
Reputation: 167
My issue was I had a trailing slash (/) at the end of the domain in "AllowedOrigins". Once I removed the slash, requests worked.
Upvotes: 2
Reputation: 747
Unable to comment, so adding this here. This is Harvey's answer, but as text to make it easy to copy:
[
    {
        "AllowedHeaders": [
            "*"
        ],
        "AllowedMethods": [
            "GET",
            "PUT",
            "POST"
        ],
        "AllowedOrigins": [
            "*"
        ],
        "ExposeHeaders": []
    }
]
Upvotes: 55
Reputation: 20011
My issue was that, for some reason, getSignedUrl returned a URL like:
https://my-bucket.s3.us-west-2.amazonaws.com/bucket-folder/file.jpg
I removed the region part (us-west-2) and that fixed it 🤷
So instead it is now:
https://my-bucket.s3.amazonaws.com/bucket-folder/file.jpg
Upvotes: 1
Reputation: 1890
For me, it was because my bucket name had a hyphen in it (e.g. my-bucket). The signed URL would replace the hyphen in the bucket name with an underscore and then sign it.
I eventually had to rename my bucket to something without a hyphen (e.g. mybucket), and then it worked fine with the following configuration:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
<AllowedOrigin>*</AllowedOrigin>
<AllowedMethod>GET</AllowedMethod>
<AllowedMethod>PUT</AllowedMethod>
<AllowedMethod>POST</AllowedMethod>
<AllowedMethod>DELETE</AllowedMethod>
<MaxAgeSeconds>3000</MaxAgeSeconds>
<AllowedHeader>*</AllowedHeader>
</CORSRule>
</CORSConfiguration>
Upvotes: 0
Reputation: 934
I encountered this issue as well. The CORS configuration on my bucket seemed correct, yet my presigned URLs were hitting CORS problems. It turns out the AWS_REGION for my presigner was not set to the bucket's AWS region. After setting AWS_REGION to the correct region, it worked fine. I'm annoyed that the CORS issue was such a red herring for a simple problem and wasted several hours of my time.
Upvotes: 15
Reputation: 93173
You must edit the CORS configuration to make it public, something like:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
<AllowedOrigin>*</AllowedOrigin>
<AllowedMethod>POST</AllowedMethod>
<MaxAgeSeconds>3000</MaxAgeSeconds>
<AllowedHeader>*</AllowedHeader>
</CORSRule>
</CORSConfiguration>
Upvotes: 64
Reputation: 354
In my case I specifically needed to allow the PUT method in the S3 Bucket's CORS Configuration to use the presigned URL, not the GET method as in the accepted answer:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
<AllowedOrigin>*</AllowedOrigin>
<AllowedMethod>PUT</AllowedMethod>
<MaxAgeSeconds>3000</MaxAgeSeconds>
<AllowedHeader>*</AllowedHeader>
</CORSRule>
</CORSConfiguration>
Upvotes: 4