Alexey

Reputation: 2386

The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256

I get the error AWS::S3::Errors::InvalidRequest The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256. when I try to upload a file to an S3 bucket in the new Frankfurt region. Everything works properly with the US Standard region.

Script:

backup_file = '/media/db-backup_for_dev/2014-10-23_02-00-07/slave_dump.sql.gz'
s3 = AWS::S3.new(
    access_key_id:     AMAZONS3['access_key_id'],
    secret_access_key: AMAZONS3['secret_access_key']
)

s3_bucket = s3.buckets['test-frankfurt']

# Folder and file name
s3_name = "database-backups-last20days/#{File.basename(File.dirname(backup_file))}_#{File.basename(backup_file)}"

file_obj = s3_bucket.objects[s3_name]
file_obj.write(file: backup_file)

I am using aws-sdk (1.56.0).

How can I fix it?

Thank you.

Upvotes: 177

Views: 189704

Answers (26)

JTX

Reputation: 343

In Django, if you are using S3 for your static files, add these settings:

AWS_S3_SIGNATURE_VERSION = "s3v4"
AWS_S3_REGION_NAME = "your-bucket-region-name-here"

Upvotes: 0

Harat

Reputation: 1366

Node.js

var aws = require("aws-sdk");
var { v4: uuidv4 } = require("uuid"); // helper packages used for the key name
var mime = require("mime-types");

aws.config.update({
    region: process.env.AWS_REGION,
    secretAccessKey: process.env.AWS_S3_SECRET_ACCESS_KEY,
    accessKeyId: process.env.AWS_S3_ACCESS_KEY_ID,
});

var s3 = new aws.S3({
    signatureVersion: "v4",
});

// inside an async function:
let data = await s3.getSignedUrlPromise("putObject", {
    ContentType: mimeType, // image MIME type from the request
    Bucket: "MybucketName",
    Key: folder_name + "/" + uuidv4() + "." + mime.extension(mimeType),
    Expires: 300, // URL valid for 5 minutes
});
console.log(data);

AWS S3 Bucket Permission Configuration

Deselect Block All Public Access

Add the policy below:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::MybucketName/*"]
    }
  ]
}

Then make a PUT request to the returned URL with the binary image file.
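
A minimal sketch of that PUT step in Ruby (the question's language); the file path, content type, and the url variable holding the presigned URL are assumptions for illustration:

require 'net/http'
require 'uri'

# 'url' holds the presigned PUT URL from the previous step (assumption);
# the file path and content type are hypothetical placeholders.
uri = URI.parse(url)
request = Net::HTTP::Put.new(uri)
request['Content-Type'] = 'image/png' # must match the ContentType used when signing
request.body = File.binread('/path/to/image.png')

response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
  http.request(request)
end
puts response.code # expect 200 on success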

Upvotes: 0

Aslam Shekh

Reputation: 706

Using the PHP SDK, follow the example below:

require 'vendor/autoload.php';

use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;

$client = S3Client::factory(
    array(
        'signature' => 'v4',
        'region'    => 'me-south-1',
        'key'       => YOUR_AWS_KEY,
        'secret'    => YOUR_AWS_SECRET
    )
);

Upvotes: 0

JH_web_dev

Reputation: 36

I also came from this tutorial: https://simpleisbetterthancomplex.com/tutorial/2017/08/01/how-to-setup-amazon-s3-in-a-django-project.html

For me this was the solution:

AWS_S3_REGION_NAME = "eu-central-1"
AWS_S3_ADDRESSING_STYLE = 'virtual'

This needs to be added to settings.py in your Django project.

Upvotes: 0

Leigh Mathieson

Reputation: 2018

Full working Node.js version:

const AWS = require('aws-sdk');

const s3 = new AWS.S3({
    endpoint: 's3.eu-west-2.amazonaws.com',
    signatureVersion: 'v4',
    region: 'eu-west-2'
});

const getPreSignedUrl = async () => {
    const params = {
        Bucket: 'some-bucket-name',
        Key: 'some-folder/some-filename.json', // the folder belongs in the Key, not the Bucket
        Expires: 60 * 60 * 24 * 7 // one week
    };
    try {
        const presignedUrl = await new Promise((resolve, reject) => {
            s3.getSignedUrl('getObject', params, (err, url) => {
                err ? reject(err) : resolve(url);
            });
        });
        console.log(presignedUrl);
    } catch (err) {
        console.log(err);
    }
};

getPreSignedUrl();

Upvotes: -1

CodeMask

Reputation: 121

Here is the function I used with Python (boto3):

import os
import boto3

def uploadFileToS3(filePath, s3FileName):
    s3 = boto3.client('s3',
                    endpoint_url=settings.BUCKET_ENDPOINT_URL,
                    aws_access_key_id=settings.BUCKET_ACCESS_KEY_ID,
                    aws_secret_access_key=settings.BUCKET_SECRET_KEY,
                    region_name=settings.BUCKET_REGION_NAME
                    )
    try:
        s3.upload_file(
            filePath,
            settings.BUCKET_NAME,
            s3FileName
            )

        # remove the local file to free up space
        os.remove(filePath)

        return True
    except Exception as e:
        logger.error('uploadFileToS3@Error')
        logger.error(e)
        return False

Upvotes: 1

Harshit Gangwar

Reputation: 553

I was stuck for 3 days and finally, after reading a ton of blogs and answers, I was able to configure an Amazon AWS S3 bucket.

On the AWS Side

I am assuming you have already

  1. Created an S3 bucket
  2. Created a user in IAM

Steps

  1. Configure CORS settings

    your bucket > Permissions > CORS configuration

    <CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <AllowedMethod>POST</AllowedMethod>
        <AllowedMethod>PUT</AllowedMethod>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
    </CORSConfiguration>
    
    
  2. Generate a bucket policy

your bucket > permissions > bucket policy

It should be similar to this one

 {
     "Version": "2012-10-17",
     "Id": "Policy1602480700663",
     "Statement": [
         {
             "Sid": "Stmt1602480694902",
             "Effect": "Allow",
             "Principal": "*",
             "Action": "s3:GetObject",
             "Resource": "arn:aws:s3:::harshit-portfolio-bucket/*"
         }
     ]
 }
PS: The bucket policy should say `public` after this.

  3. Configure the Access Control List

your bucket > Permissions > Access Control List

Give public access.

PS: The Access Control List should say public after this.

  4. Unblock public access

your bucket > Permissions > Block Public Access

Edit and turn all options off.

**On a side note, if you are working on Django, add the following lines to the settings.py file of your project:**

#S3 BUCKETS CONFIG

AWS_ACCESS_KEY_ID = '****not to be shared*****'
AWS_SECRET_ACCESS_KEY = '*****not to be shared******'
AWS_STORAGE_BUCKET_NAME = 'your-bucket-name'

AWS_S3_FILE_OVERWRITE = False
AWS_DEFAULT_ACL = None
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'

# look for files first in aws 
STATICFILES_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'

# In India these settings work
AWS_S3_REGION_NAME = "ap-south-1"
AWS_S3_SIGNATURE_VERSION = "s3v4"

Upvotes: 0

Rezan Moh

Reputation: 365

SuperNova's answer for Django/boto3/django-storages worked for me:

AWS_S3_REGION_NAME = "ap-south-1"

Or, prior to boto3 version 1.4.4:

AWS_S3_REGION_NAME = "ap-south-1"

AWS_S3_SIGNATURE_VERSION = "s3v4"

Just add them to your settings.py and change the region code accordingly.

You can look up AWS region codes in the AWS documentation.

Upvotes: 3

Manikandan Selvanathan

Reputation: 915

In my case, the request type was wrong: I was using GET when it must be PUT.
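
For example, with the Ruby v1 SDK from the question, the verb you presign for must match the verb of the HTTP request you send. A sketch, with a hypothetical key name:

# Sketch with the Ruby aws-sdk v1 gem from the question; the key name is
# hypothetical. A URL presigned for :write (PUT) will be rejected if it
# is requested with GET, and vice versa.
url = s3.buckets['test-frankfurt'].objects['some-key'].url_for(:write, expires: 300)
# Send the upload as an HTTP PUT to this URL, not a GET.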

Upvotes: 1

P.Gupta

Reputation: 575

Code for Flask (boto3)

Don't forget to import Config. Also, if you have your own config class, change its name.

import boto3
from botocore.client import Config

s3 = boto3.client('s3', config=Config(signature_version='s3v4'),
                  region_name=app.config["AWS_REGION"],
                  aws_access_key_id=app.config['AWS_ACCESS_KEY'],
                  aws_secret_access_key=app.config['AWS_SECRET_KEY'])
s3.upload_fileobj(file, app.config["AWS_BUCKET_NAME"], file.filename)
url = s3.generate_presigned_url('get_object', Params={'Bucket': app.config["AWS_BUCKET_NAME"], 'Key': file.filename}, ExpiresIn=10000)

Upvotes: 8

Ankit Kumar Rajpoot

Reputation: 5600

Try this combination.

const s3 = new AWS.S3({
  endpoint: 's3-ap-south-1.amazonaws.com',       // Bucket region
  accessKeyId: 'A-----------------U',
  secretAccessKey: 'k------ja----------------soGp',
  Bucket: 'bucket_name',
  useAccelerateEndpoint: true,
  signatureVersion: 'v4',
  region: 'ap-south-1'             // Bucket region
});

Upvotes: 0

Smartybrainy

Reputation: 101

AWS_S3_REGION_NAME = "ap-south-1"

AWS_S3_SIGNATURE_VERSION = "s3v4"

This also saved me, after searching for 24 hours.

Upvotes: 9

SuperNova

Reputation: 27466

I have been using Django, and I had to add these extra config variables to make this work (in addition to the settings mentioned in https://simpleisbetterthancomplex.com/tutorial/2017/08/01/how-to-setup-amazon-s3-in-a-django-project.html):

AWS_S3_REGION_NAME = "ap-south-1"

Or, prior to boto3 version 1.4.4:

AWS_S3_REGION_NAME = "ap-south-1"

AWS_S3_SIGNATURE_VERSION = "s3v4"

Upvotes: 36

Pushplata

Reputation: 582

For boto3, use this code:

import boto3
from botocore.client import Config


s3 = boto3.resource('s3',
        aws_access_key_id='xxxxxx',
        aws_secret_access_key='xxxxxx',
        region_name='ap-south-1',  # use your bucket's region (us-south-1 is not a valid region)
        config=Config(signature_version='s3v4')
        )

Upvotes: 1

Ravi Oza

Reputation: 125

Check your AWS S3 bucket region and pass the proper region in the connection request.

In my scenario, I set 'APSouth1' for Asia Pacific (Mumbai):

using (var client = new AmazonS3Client(awsAccessKeyId, awsSecretAccessKey, RegionEndpoint.APSouth1))
{
    GetPreSignedUrlRequest request1 = new GetPreSignedUrlRequest
    {
        BucketName = bucketName,
        Key = keyName,
        Expires = DateTime.Now.AddMinutes(50),
    };
    urlString = client.GetPreSignedURL(request1);
}

Upvotes: 1

LePirlouit

Reputation: 499

With boto3, this is the code:

s3_client = boto3.resource('s3', region_name='eu-central-1')

or

s3_client = boto3.client('s3', region_name='eu-central-1')

Upvotes: 3

Salahudin Malik

Reputation: 396

Basically, the error appeared because I was using an old version of aws-sdk; after I updated the version, this error occurred.

In my case, with Node.js, I was using signatureVersion inside the params object, like this:

const AWS_S3 = new AWS.S3({
  params: {
    Bucket: process.env.AWS_S3_BUCKET,
    signatureVersion: 'v4',
    region: process.env.AWS_S3_REGION
  }
});

Then I moved signatureVersion out of the params object, and it worked like a charm:

const AWS_S3 = new AWS.S3({
  params: {
    Bucket: process.env.AWS_S3_BUCKET,
    region: process.env.AWS_S3_REGION
  },
  signatureVersion: 'v4'
});

Upvotes: 1

gokul krishna

Reputation: 21

Sometimes the default signature version will not update. Add this setting

AWS_S3_SIGNATURE_VERSION = "s3v4"

in settings.py

Upvotes: 0

Pascal

Reputation: 296

Similar issue with the PHP SDK, this works:

$s3Client = S3Client::factory(array(
    'key'       => YOUR_AWS_KEY,
    'secret'    => YOUR_AWS_SECRET,
    'signature' => 'v4',
    'region'    => 'eu-central-1'
));

The important bits are the signature and the region.

Upvotes: 15

Penkey Suresh

Reputation: 5974

For people using boto3 (the Python SDK), use the code below:

import boto3
from botocore.client import Config


s3 = boto3.resource(
    's3',
    aws_access_key_id='xxxxxx',
    aws_secret_access_key='xxxxxx',
    config=Config(signature_version='s3v4')
)

Upvotes: 42

Ian Darke

Reputation: 11

For Android SDK, setEndpoint solves the problem, although it's been deprecated.

CognitoCachingCredentialsProvider credentialsProvider = new CognitoCachingCredentialsProvider(
                context, "identityPoolId", Regions.US_EAST_1);
AmazonS3 s3 = new AmazonS3Client(credentialsProvider);
s3.setEndpoint("s3.us-east-2.amazonaws.com");

Upvotes: 1

higuita

Reputation: 2315

For thumbor-aws, which uses the boto config, I needed to put this in $AWS_CONFIG_FILE:

[default]
aws_access_key_id = (your ID)
aws_secret_access_key = (your secret key)
s3 =
    signature_version = s3

So for anything that uses boto directly without changes, this may be useful.

Upvotes: 2

GameScripting

Reputation: 17012

In Java, I had to set a property:

System.setProperty(SDKGlobalConfiguration.ENFORCE_S3_SIGV4_SYSTEM_PROPERTY, "true")

and add the region to the s3Client instance.

s3Client.setRegion(Region.getRegion(Regions.EU_CENTRAL_1))

Upvotes: 3

Michael - sqlbot

Reputation: 179124

AWS4-HMAC-SHA256, also known as Signature Version 4, ("V4") is one of two authentication schemes supported by S3.

All regions support V4, but US Standard¹ and many (though not all) other regions also support the other, older scheme, Signature Version 2 ("V2").

According to http://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html, new S3 regions deployed after January 2014 support only V4.

Since Frankfurt was introduced late in 2014, it does not support V2, which is what this error suggests you are using.

http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingAWSSDK.html explains how to enable V4 in the various SDKs, assuming you are using an SDK that has that capability.

I would speculate that some older versions of the SDKs might not support this option, so if the above doesn't help, you may need a newer release of the SDK you are using.
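
For the Ruby aws-sdk v1 gem from the question, a minimal sketch of enabling V4 might look like this (the :s3_signature_version option is drawn from the v1 gem's configuration; verify it exists in your gem version):

require 'aws-sdk' # v1 gem, as in the question

# Sketch: force Signature Version 4 and point the client at Frankfurt.
s3 = AWS::S3.new(
  access_key_id:        AMAZONS3['access_key_id'],
  secret_access_key:    AMAZONS3['secret_access_key'],
  region:               'eu-central-1',  # Frankfurt
  s3_signature_version: :v4
)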


¹US Standard is the former name for the S3 regional deployment that is based in the us-east-1 region. Since the time this answer was originally written, "Amazon S3 renamed the US Standard Region to the US East (N. Virginia) Region to be consistent with AWS regional naming conventions." For all practical purposes, it's only a change in naming.

Upvotes: 179

morris4

Reputation: 2007

With Node, try:

var s3 = new AWS.S3({
    endpoint: 's3-eu-central-1.amazonaws.com',
    signatureVersion: 'v4',
    region: 'eu-central-1'
});

Upvotes: 90

Denis Rizun

Reputation: 531

You should set signatureVersion: 'v4' in the config to use the new signing version:

AWS.config.update({
    signatureVersion: 'v4'
});

This works for the JS SDK.

Upvotes: 43
