Joseph Lam

Reputation: 6089

Amazon S3 - How to fix 'The request signature we calculated does not match the signature' error?

I have been searching the web for over two days now and have probably looked through most of the documented scenarios and workarounds, but nothing has worked for me so far.

I am on AWS SDK for PHP V2.8.7 running on PHP 5.3.

I am trying to connect to my Amazon S3 bucket with the following code:

// Create an `Aws` object using a configuration file
$aws = Aws::factory('config.php');

// Get the client from the service locator by namespace
$s3Client = $aws->get('s3');

$bucket = "xxx";
$keyname = "xxx";

try {
    $result = $s3Client->putObject(array(
        'Bucket' => $bucket,
        'Key' => $keyname,
        'Body' => 'Hello World!'
    ));

    $file_error = false;
} catch (Exception $e) {
    $file_error = true;

    echo $e->getMessage();

    die();
}

My config.php file is as follows:

return [
    // Bootstrap the configuration file with AWS specific features
    'includes' => ['_aws'],
    'services' => [
        // All AWS clients extend from 'default_settings'. Here we are
        // overriding 'default_settings' with our default credentials and
        // providing a default region setting.
        'default_settings' => [
            'params' => [
                'credentials' => [
                    'key'    => 'key',
                    'secret' => 'secret'
                ]
            ]
        ]
    ]
];

It is producing the following error:

The request signature we calculated does not match the signature you provided. Check your key and signing method.

I've already checked my access key and secret at least 20 times, generated new ones, and used different methods to pass in the information (i.e. a profile and putting the credentials in code), but nothing is working at the moment.

Upvotes: 322

Views: 659198

Answers (30)

koehn

Reputation: 804

For what it's worth, I had the correct keys (managed programmatically), but Minio didn't like secret keys with symbols in them. Once I changed the secret to an alphanumeric one (Minio lets you do this through the web UI), it instantly worked.

Upvotes: 0

Adam Ostrožlík

Reputation: 1416

I solved this by fixing the content type: I was sending "application", but it should have been "application/json".

Upvotes: 0

nimblebit

Reputation: 559

I had this error message when uploading with the S3 .NET SDK in an AWS Lambda function.

I had already checked that the credentials and region configuration were correct, and the timezone on my local machine was in sync with S3.

The error was caused by the request's BucketName incorrectly containing the subdirectory path. It was fixed by moving the subdirectory prefix into the Key property instead (see the sketch after the list below):

  • BucketName = "description"
  • Key = "subdirectory/filename.extension"
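
For illustration, the same idea in Python with boto3 (a minimal sketch; the bucket and key are just the placeholders from the list above, and the original answer uses the .NET SDK):

import boto3

s3 = boto3.client("s3")

# Wrong: the bucket name must not contain any path components.
# s3.put_object(Bucket="description/subdirectory", Key="filename.extension", Body=b"...")

# Right: keep the bucket name bare and move the path into the key.
s3.put_object(
    Bucket="description",
    Key="subdirectory/filename.extension",
    Body=b"Hello World!",
)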

Upvotes: 0

saurabh

Reputation: 681

Try using

aws configure

This command (see Getting started with the AWS CLI) will prompt you for your access key, secret key, default region, and output format.

Upvotes: 8

Ross Coundon

Reputation: 917

I've just encountered this because I was using an HTTP POST request instead of PUT.
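
The signature is only valid for the HTTP method it was computed for. As a hedged illustration in Python with boto3 and requests (bucket and key names are placeholders), a presigned URL generated for PUT must also be used with PUT:

import boto3
import requests

s3 = boto3.client("s3")

# Sign the URL for PUT...
url = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": "my-bucket", "Key": "hello.txt"},
    ExpiresIn=3600,
)

# ...and upload with PUT as well. Sending this request with POST instead
# would produce the SignatureDoesNotMatch error.
requests.put(url, data=b"Hello World!")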

Upvotes: 43

Joseph Lam

Reputation: 6089

The key I was assigning to the object started with a period, i.e. ..\images\ABC.jpg, and this caused the error to occur.
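
A minimal Python sketch of cleaning up such a key before the upload (the helper below is purely illustrative and not part of any SDK):

def to_s3_key(path: str) -> str:
    """Turn a Windows-style relative path into a clean S3 key."""
    key = path.replace("\\", "/")  # S3 keys use forward slashes
    key = key.lstrip("./")         # drop leading dots and slashes such as "..\" or "./"
    return key

print(to_s3_key(r"..\images\ABC.jpg"))  # -> "images/ABC.jpg"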

Upvotes: 200

Mr Chaudhary

Reputation: 1

I was facing the same issue with a CloudFront distribution backed by S3, and my solution was to stop forwarding the "Host" header in the CloudFront origin request policy.

Upvotes: 0

hzitoun

Reputation: 5832

As per the Java docs for uploading files to an S3 bucket:

If you are uploading Amazon Web Services KMS-encrypted objects, you need to specify the correct region of the bucket on your client and configure Amazon Web Services Signature Version 4 for added security. For more information on how to do this, see http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingAWSSDK.html#specify-signature-version

So you may need to configure Signature Version 4.
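
In Python with boto3, forcing Signature Version 4 looks roughly like this (a sketch; the region is a placeholder, and other SDKs expose an equivalent signature-version setting):

import boto3
from botocore.config import Config

# Force Signature Version 4, which is required for KMS-encrypted objects.
s3 = boto3.client(
    "s3",
    region_name="eu-west-1",  # placeholder: use your bucket's region
    config=Config(signature_version="s3v4"),
)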

Upvotes: 0

Srinivas

Reputation: 64

These changes worked for me. I modified the code

FROM:

const s3 = new AWS.S3();

TO:

const s3 = new AWS.S3({
  apiVersion: '2006-03-01',
  signatureVersion: 'v4',
});

I also changed the HTTP method of the call from POST to PUT.

Upvotes: 0

Mickael

Reputation: 879

I had the same error [1] when I was trying to get a file from S3 using Ansible. My mistake was to reuse the presigned URL returned by aws_s3 when I put the file to S3 in order to get the file back later in my Ansible role.

- name: Upload CVE report to S3
  amazon.aws.aws_s3:
    profile: "{{ wazuh_cve_report_aws_boto_profile }}"
    bucket: "{{ wazuh_cve_report_aws_s3_bucket }}"
    object: "{{ wazuh_cve_report_aws_s3_object_prefix }}"
    src: "{{ wazuh_cve_report_generated_reports_dir }}/vulnerabilities.csv"
    region: "{{ wazuh_cve_report_aws_region }}"
    mode: put
    overwrite: different
    encrypt: true
  register: wazuh_cve_report_s3_object_register

- name: Debug S3 object
  ansible.builtin.debug:
    msg: "{{ wazuh_cve_report_s3_object_register.url }}"

Trying to get the file using the URL in wazuh_cve_report_s3_object_register.url results in the SignatureDoesNotMatch error code.

To remediate this problem, I had to use another task with mode: geturl to get a presigned URL that is valid for downloading the file I had just uploaded.

- name: Upload CVE report to S3
  amazon.aws.aws_s3:
    profile: "{{ wazuh_cve_report_aws_boto_profile }}"
    bucket: "{{ wazuh_cve_report_aws_s3_bucket }}"
    object: "{{ wazuh_cve_report_aws_s3_object_prefix }}"
    src: "{{ wazuh_cve_report_generated_reports_dir }}/vulnerabilities.csv"
    region: "{{ wazuh_cve_report_aws_region }}"
    mode: put
    overwrite: different
    encrypt: true

- name: Get CVE report presigned URL for Downloading from S3
  amazon.aws.aws_s3:
    profile: "{{ wazuh_cve_report_aws_boto_profile }}"
    bucket: "{{ wazuh_cve_report_aws_s3_bucket }}"
    object: "{{ wazuh_cve_report_aws_s3_object_prefix }}"
    src: "{{ wazuh_cve_report_generated_reports_dir }}/vulnerabilities.csv"
    region: "{{ wazuh_cve_report_aws_region }}"
    # 7 days
    expiry: 604800
    mode: geturl
  register: wazuh_cve_report_s3_object_register

- name: Debug S3 object
  ansible.builtin.debug:
    msg: "{{ wazuh_cve_report_s3_object_register.url }}"

We can't GET a file with a presigned URL that was signed for the PUT method (see the boto3 sketch after the notes below).

When you create a presigned URL, you must provide your security credentials, and then specify the following:

  • An Amazon S3 bucket
  • An object key (when downloading, the key of the object that is already in your Amazon S3 bucket; when uploading, the name under which the file will be stored)
  • An HTTP method (GET for downloading objects or PUT for uploading)
  • An expiration time interval [2]

[1] The request signature we calculated does not match the signature you provided. Check your key and signing method.

[2] https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-presigned-url.html
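
The same point expressed directly with boto3 (a hedged sketch; the bucket and key are placeholders): generate one URL per HTTP method you intend to use.

import boto3

s3 = boto3.client("s3")
params = {"Bucket": "my-bucket", "Key": "reports/vulnerabilities.csv"}

# URL for uploading: only valid with PUT.
upload_url = s3.generate_presigned_url("put_object", Params=params, ExpiresIn=604800)

# URL for downloading: only valid with GET. Reusing upload_url here
# would fail with SignatureDoesNotMatch.
download_url = s3.generate_presigned_url("get_object", Params=params, ExpiresIn=604800)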

Upvotes: -1

Barungi Stephen

Reputation: 847

My problem was using the wrong access key with the right secret key.

Once I resolved that, everything worked.

Upvotes: -1

Obada Jaras

Reputation: 71

In my case, I'm using React and Axios to send the API request.

The culprit was withCredentials: true:

const instance = axios.create({
    withCredentials: true,
});

I just removed withCredentials: true and it works:

const instance = axios.create();

withCredentials: true in Axios enables sending cookies and authorization headers with cross-origin requests.

Upvotes: 0

nascente_diskreta

Reputation: 79

I'm working with Go (AWS SDK v2), and the problem was that if you want to set an expiration for your presigned request, you must set it in the optFns ...func(*s3.PresignOptions) argument, i.e. the third, optional argument to PresignPutObject.

I had this:

const validity = time.Second * 60 * 5
expires := time.Now().Add(validity)

request, err := c.client.PresignPutObject(context.TODO(), &s3.PutObjectInput{
    Bucket:      aws.String(appconfig.Get().S3Bucket),
    Key:         aws.String(key),
    Expires:     &expires,
})

But this is what you actually need:

const validity = time.Second * 60 * 5

request, err := c.client.PresignPutObject(context.TODO(), &s3.PutObjectInput{
    Bucket:      aws.String(appconfig.Get().S3Bucket),
    Key:         aws.String(key),
}, func(opts *s3.PresignOptions) { opts.Expires = validity })

Upvotes: 0

IceCode

Reputation: 1751

I have two web applications, one current and one in development. The S3 upload works fine in the current application but not in the one in development. On closer inspection I noticed that the S3 SDK versions were different: the application in development has the latest version, while the current application has an older one. After downgrading the new application to the same S3 SDK version as the current application, the S3 upload worked in the new application. So, clearly and not surprisingly, the SDK versions differ in how bucket/folder paths are handled, among other things.

Upvotes: 0

Talha Awan

Reputation: 4619

In my case, I was using "aws-sdk" (version 2) for S3 functionality in my Node.js application. Switching to @aws-sdk/client-s3 (version 3) resolved this issue.

Upvotes: 0

Check that all the metadata values you are sending to S3 are of type string, as S3 doesn't support non-string metadata values.
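
A small boto3 illustration of the idea (a sketch; the bucket, key and metadata names are placeholders):

import boto3

s3 = boto3.client("s3")

# User metadata keys and values must be strings; convert numbers and booleans first.
s3.put_object(
    Bucket="my-bucket",
    Key="report.csv",
    Body=b"...",
    Metadata={
        "page-count": str(42),         # not 42
        "is-final": str(True).lower(), # not True
    },
)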

Upvotes: 0

InTech97

Reputation: 31

I encountered the same error message when using the Amazon SES SDK to instantiate an AmazonSimpleEmailServiceClient object and subsequently GetSendStatistics.

I was using my administrative-level IAM user's credentials to connect, which failed with the familiar error: "The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details."

I resolved this by creating an Access Key under the My Security Credentials for my IAM user. When I used the credentials from the new access key, my connection to Amazon SES via the SDK worked.

Upvotes: 1

Rutul Patel

Reputation: 41

In my case, the issue was that we were using the wrong bucket name. AWS S3 buckets have specific naming conventions that we need to follow. You can find the naming rules in the link below:

Bucket Naming Rules

For example:

Bucket Name: g7asset-shwe (throwing error)

Bucket Name: g7asset (working properly)

Additionally, it's important to note that S3 does not actually have a "folder" structure. Each object in a bucket has a unique key, and the object is accessed through that key.

While some S3 utilities, including the AWS console, simulate a "folder" structure, it's not directly related to how S3 functions. In other words, you don't need to worry about it. Simply create the object with a forward slash (/) in its key, and everything will work as expected.
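
A minimal boto3 sketch of both points (the bucket name is the working one from the example above; the key is a placeholder):

import boto3

s3 = boto3.client("s3")

# The bucket name must follow the S3 naming rules; "folders" are simply
# a prefix in the object key, separated by forward slashes.
s3.put_object(
    Bucket="g7asset",
    Key="images/2024/photo.jpg",  # shown as nested folders in the console
    Body=b"...",
)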

Upvotes: -1

Fesch

Reputation: 313

I was also stuck on this for hours... it turns out to be an issue/bug on the AWS side, as per this GitHub issue comment. The suggested solution is to specify the AWS endpoint directly:

import boto3

region = "us-east-1"  # placeholder: use your bucket's region

# Point the client at the regional endpoint and use virtual-hosted-style addressing.
s3 = boto3.client(
  's3',
  endpoint_url=f'https://s3.{region}.amazonaws.com',
  config=boto3.session.Config(s3={'addressing_style': 'virtual'})
)

Upvotes: 0

Abdalrahman Shatou

Reputation: 4748

I spent 8 hours trying to fix this issue. For me, everything mentioned in all the answers was fine: the keys were correct and tested through the CLI, and I was using SDK v3, which is the latest and doesn't need the signature version. It finally turned out I was passing a wrong object in the Body! (neither text nor an array buffer). Yes, it's one of the most stupid error messages that I have seen in my 16-year career. AWS sometimes drives me crazy.
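
The original answer used the JavaScript SDK v3; as a hedged boto3 illustration of the same idea, serialize the payload yourself so the Body is bytes, a string, or a file-like object (bucket and key are placeholders):

import json
import boto3

s3 = boto3.client("s3")

payload = {"hello": "world"}

s3.put_object(
    Bucket="my-bucket",
    Key="data.json",
    Body=json.dumps(payload).encode("utf-8"),  # not the raw dict
    ContentType="application/json",
)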

Upvotes: 3

Jalal Sordo

Reputation: 1665

It may not be 100% the answer to the OP, but some people might find this useful. In my case, it was one of those times when the IDE autocompletes the code and you don't check it afterwards:

My bean had

new BasicAWSCredentials(storageProperties.getAccessKey(), storageProperties.getAccessKey())))

So basically two getAccessKey() instead of getSecret(), so it should be:

new BasicAWSCredentials(storageProperties.getAccessKey(), storageProperties.getSecret())))

Upvotes: -1

Lacrosse343

Reputation: 841

I was getting this same error while downloading an S3 file during a CloudFormation::Init procedure. The issue was that the folder name in S3 had a space in it. I moved the files to a new folder that used an underscore instead of a space, and that fixed the issue.

Upvotes: 0

Tom Williams

Reputation: 11

I am using the Java SDK and got the same error. For me it was because I was sending special characters in the request: the Korean letters of a file name. The specific location was:

com.amazonaws.services.s3.model.PutObjectRequest request.metadata.userMetadata

I realised that I didn't really need to send this information, so removing it fixed my error.
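
If the non-ASCII value does need to be kept, one hedged workaround (sketched here with boto3; the bucket, key and metadata key are placeholders) is to encode it before putting it in the user metadata, since metadata travels in HTTP headers and must be ASCII:

import urllib.parse
import boto3

s3 = boto3.client("s3")

original_name = "보고서.pdf"  # Korean file name

s3.put_object(
    Bucket="my-bucket",
    Key="uploads/report.pdf",
    Body=b"...",
    # URL-encode the non-ASCII value and decode it again after download.
    Metadata={"original-filename": urllib.parse.quote(original_name)},
)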

Upvotes: 0

JozefS

Reputation: 448

In my case it was missing CORS configuration for the bucket. This helped:

[{
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["GET","HEAD","POST","PUT"],
    "AllowedOrigins": ["*"],
    "ExposeHeaders": []
}]

Upvotes: 0

jcm

Reputation: 33

I was getting this error in our shared environment where the SDK was being used, but with the same key/secret and the AWS CLI it worked fine. The build-system script had a space after the access key, secret key, and session token values, which the code read in as well. So the fix for me was to adjust the build script to remove the trailing spaces after the variables being used.

Just adding this for anyone who might miss that frustrating invisible space at the end of their creds.
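
A small defensive sketch in Python (the environment variable names are the standard AWS ones; trimming like this is just an illustration of the fix):

import os
import boto3

# Strip any stray whitespace a build script may have left around the values.
access_key = os.environ["AWS_ACCESS_KEY_ID"].strip()
secret_key = os.environ["AWS_SECRET_ACCESS_KEY"].strip()
session_token = os.environ.get("AWS_SESSION_TOKEN", "").strip() or None

s3 = boto3.client(
    "s3",
    aws_access_key_id=access_key,
    aws_secret_access_key=secret_key,
    aws_session_token=session_token,
)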

Upvotes: 1

user1506104

Reputation: 7076

If you are an Android developer and are using the signature function from the AWS sample code, you are most likely wondering why ListS3Object works but GetS3Object doesn't. This is because when you call setDoOutput(true) with the GET HTTP method, Android's HttpURLConnection switches the request to a POST, thereby invalidating your signature. Check my original post about the issue.

Upvotes: 1

Dawid Kisielewski

Reputation: 829

For me, I used axios, and by default it sends the header

content-type: application/x-www-form-urlencoded

so I changed it to send:

content-type: application/octet-stream

and I also had to add this Content-Type to the AWS signature parameters:

const params = {
    Bucket: bucket,
    Key: key,
    Expires: expires,
    ContentType: 'application/octet-stream'
}

const s3 = new AWS.S3()
s3.getSignedUrl('putObject', params)

Upvotes: 8

Alex Samson

Reputation: 145

I had the same issue in C#. It turned out that the issue was coming from the way RestSharp returns the body when you try to access it directly. In our case, it was with the /feeds/2021-06-30/documents endpoint and this body:

{
    "contentType":"text/xml; charset=UTF-8"
}

The issue appears when trying to sign the request: in the AWSSignerHelper class, the HashRequestBody method contains the following code:

public virtual string HashRequestBody(IRestRequest request)
{
    Parameter body = request.Parameters.FirstOrDefault(parameter => ParameterType.RequestBody.Equals(parameter.Type));
    string value = body != null ? body.Value.ToString() : string.Empty;
    return Utils.ToHex(Utils.Hash(value));
}

At this point the value of body.Value.ToString() will be:

{contentType:text/xml; charset=UTF-8}

It is missing the double quotes that RestSharp adds when it posts the request; when you access the value directly like this it doesn't, which gives an invalid hash because the hashed value isn't the same as the one that is actually sent.

I replaced the code with the following for the moment, and it works:

public virtual string HashRequestBody(IRestRequest request)
{
    Parameter body = request.Parameters.FirstOrDefault(parameter => ParameterType.RequestBody.Equals(parameter.Type));
    string value = body != null ? body.Value.ToString() : string.Empty;
    if (body?.ContentType == "application/json")
    {
        value = Newtonsoft.Json.JsonConvert.SerializeObject(body.Value);
    }
    return Utils.ToHex(Utils.Hash(value));
}

Upvotes: 0

flyingfishcattle

Reputation: 2133

This issue happened to me because I was accidentally assigning the value of ACCESS_KEY_ID to SECRET_ACCESS_KEY_ID. Once this was fixed, everything worked fine.

Upvotes: 4

user2814916

Reputation: 83

In my case, an incorrect order of the API call parameters caused this.

For example when I called /api/call1?parameter1=x&parameter2=y I received the following message:

"The signature of the request did not match calculated signature."

Upon swapping the parameters: /api/call1?parameter2=y&parameter1=x, the api call worked as expected.

Very frustrating, as the API documentation itself listed the parameters in a different order. This also wasn't the only call this happened with.

Upvotes: -2
