Piyush dhore

Reputation: 731

AWS CLI listing S3 buckets gives SignatureDoesNotMatch error using IAM user credentials

I am using the AWS CLI on Ubuntu 16.04 LTS and am trying to list all buckets. In aws configure I entered the IAM user's access key and secret key. This IAM user has permission to list buckets and can list them in the console. But running the command aws s3 ls with these keys gives me this error:

A client error (SignatureDoesNotMatch) occurred when calling the ListBuckets operation: The request signature we calculated does not match the signature you provided. Check your key and signing method.

I have created a policy to list buckets for this particular IAM user also.

I want to perform further sync operations and make-all-files-public operations via a shell script using these IAM user credentials, and I do not want to use root credentials.

Upvotes: 43

Views: 131092

Answers (27)

Rahul Bharadwaj

Reputation: 2784

In my case, I saw this error when trying to perform the s3:UploadPart operation.

We have a custom AWS SDK which was working fine for every other AWS module except for the single s3:UploadPart operation.

Turns out that our SDK would strip the - from the end of the uploadId while constructing the URL for s3:UploadPart.

Even though the uploadId was incorrect, AWS doesn't report that the uploadId is invalid; instead it just throws the generic SignatureDoesNotMatch error.

Upvotes: 0

Sina

Reputation: 369

Mine came down to a time issue, as Aditya suggested, but I'm running OS X on an EC2 instance, not Ubuntu. Running date showed the time was about 30 seconds behind.

I ran sudo sntp -sS time.apple.com to sync the time, and that resolved the problem.

Upvotes: 0

Sivashankar

Reputation: 564

If you're using an old access key, check whether it has CLI access; if not, modify it or create a new access key with AWS CLI access.

Upvotes: 0

Lion

Reputation: 31

My credentials contained both + and /. I recreated them until I got a key with only +, and that one works. https://stackoverflow.com/a/74102924/19478714

Upvotes: 3

bvdb

Reputation: 24770

This is an issue that has been around at least since 2014.

  • It has been reproduced on a wide range of AWS CLI versions and Python versions.
  • It has been reproduced on a wide range of operating systems.

Even though many people assume that it is linked to the use of / and + characters, it seems unlikely given that

  • Some keys with both / and + characters work on one system but not on another.
  • Some systems that don't support a specific key with / and + characters do support freshly generated keys that also contain both / and + characters.

So, there certainly has to be more to it.

Moving back and forth between a working and non-working key consistently breaks and repairs it, meaning that it's not just some unrelated issue that gets fixed while fiddling around with the CLI tool.

Generating a new key has always instantly fixed the problem for me. So, that's definitely the way to get around it.

Upvotes: 2

El Ally

Reputation: 11

I also had this error from the CircleCI graphical user interface.

It can be caused by a difference between the AWS environment variables in CircleCI and the ones in your terminal, when your terminal triggers the .yml pipeline that goes to GitHub and then to CircleCI.

If you are setting up auto-deployment connecting GitHub and CircleCI with AWS, check that the variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are the same as the ones you entered on your local machine.

The command aws configure will show you at least the last 4 characters of your keys, so you can check whether they match the ones in AWS and CircleCI.
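As a sketch of that comparison (the helper below is hypothetical; aws configure list is what actually prints the masked keys, and the key ID used here is the sample from the AWS documentation, not a real key):

```shell
# Hypothetical helper: print only the last four characters of a key, the way
# `aws configure list` masks them, so two keys can be compared without
# exposing them in full.
mask_key() {
  key="$1"
  printf '****************%s\n' "${key#"${key%????}"}"   # keep final 4 chars
}

# AKIAIOSFODNN7EXAMPLE is the sample access key ID from the AWS documentation.
mask_key "AKIAIOSFODNN7EXAMPLE"
```

If the masked tails differ between your terminal and CircleCI, the two environments are using different keys.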

Upvotes: 1

Eunit

Reputation: 146

I solved this issue by entering the following commands in the terminal:

  • aws configure set aws_secret_access_key "": This command resets the stored AWS secret key
  • aws configure: This command restarts the AWS configuration. Follow the prompts and input the access key ID and secret access key for the IAM user.

Upvotes: 1

JimShapedCoding

Reputation: 917

For Windows: I encountered this error because my time zone was configured 3 hours off from the correct one.

If you have double-checked that ~/.aws/credentials matches what you configured with aws configure, sync the time zone this way:

Search for "Change the date and time" in the Windows Start menu -> turn on "Set time zone automatically" -> click "Sync now"

Upvotes: 1

Abdullah Khawer

Reputation: 5748

I ran the following command via pipeline in BitBucket:

aws s3 sync --delete ./public s3://abc123.net/ --exclude "*.css" --exclude "*.js" --exclude "*.json"  --exclude "*.html" --exclude "*.svg" --exclude "*.ttf" --exclude "*.eot" --exclude "*.png" --exclude  "*.jpg" --exclude "*.gif" --exclude "*.webp" --exclude "*.woff2"

I got the following error:

fatal error: An error occurred (SignatureDoesNotMatch) when calling the ListObjectsV2 operation: The request signature we calculated does not match the signature you provided. Check your key and signing method.

I fixed it by updating the AWS Access Key ID and AWS Secret Access Key in the deployment variables.

Upvotes: 1

aikind

Reputation: 129

I had the same error on Windows using PuTTY. The fix was to enclose the aws_secret_access_key in the ~/.aws/credentials file in double quotes ("). I think the / in the aws_secret_access_key causes some issues on Windows.

Upvotes: 1

Kuldip Mori

Reputation: 441

If you are facing this error, also check whether the keys are the same or different.

The quickest solution in my case was to create a new key and replace the old key in the code.

After the change, verify with these commands:

$ aws configure

-> access key

-> secret key

-> region

-> output format

$ aws s3 ls

If you can get the list of buckets in S3, the problem is solved.

Upvotes: 2

Zubin G

Reputation: 41

This worked for me (I was trying to download a large file):

aws configure set default.s3.multipart_threshold 1000MB
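For reference, that command persists the setting in ~/.aws/config; per the AWS CLI S3 configuration docs, the equivalent file entry looks like this (a sketch of the default profile only):

```ini
[default]
s3 =
    multipart_threshold = 1000MB
```

Raising the threshold makes the CLI send the file as a single request instead of a multipart upload, which sidesteps signature problems specific to the multipart code path.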

Upvotes: 4

Gowtham

Reputation: 614

My case is different: yesterday the keys configured on my new Mac were working fine, but today they were not. I compared them with the configuration on my old working Windows system and both looked the same. I couldn't understand it, so I copy-pasted the keys from Windows to the Mac again, and then it worked; some invisible characters might have been added.
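One way to check for such invisible characters (a sketch, assuming GNU grep for the -P flag; the demo writes to a scratch file with a fake key rather than touching ~/.aws/credentials):

```shell
# Flag any line containing bytes outside printable ASCII (carriage returns,
# non-breaking spaces, etc.); requires GNU grep for the -P flag.
find_hidden_chars() {
  grep -nP '[^\x20-\x7e]' "$1" && return 1
  echo "no hidden characters found"
}

# Demo on a scratch file whose fake key ends in a UTF-8 non-breaking space:
printf 'aws_secret_access_key = fakekey\302\240\n' > /tmp/creds_demo
find_hidden_chars /tmp/creds_demo || echo "hidden characters detected"
```

Run it against your real credentials file; any line it prints contains a byte that should not be in a pasted key.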

Upvotes: 1

sajy2k

Reputation: 21

In my case, I had a typo in the region name. Once I fixed it, everything worked fine.

Set the region name to None first, then change it back to the correct value.

Should be good.

Upvotes: 1

carrotcakeslayer

Reputation: 1008

Using single quotes (') instead of double quotes (") when exporting AWS_SECRET_ACCESS_KEY solved this problem for me when using aws s3 cp commands.
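For example (the secret below is the sample value from the AWS documentation, not a real key; real secrets only contain letters, digits, + and /, but single quotes rule out any shell expansion at all):

```shell
# Single quotes pass the value through byte-for-byte; inside double quotes the
# shell would still expand $variables, backticks, and some backslash sequences.
export AWS_SECRET_ACCESS_KEY='wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY'
printf '%s\n' "$AWS_SECRET_ACCESS_KEY"
```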

Upvotes: 1

WoodedView

Reputation: 11

Newbie to Boto3, Python, and AWS automation here.

I got the error

"A client error (SignatureDoesNotMatch) occurred when calling the CreateBucket operation: The request signature we calculated does not match the signature you provided. Check your key and signing method." when attempting to programmatically add an S3 bucket to my AWS account.

I use Jupyter as my IDE and spent a lot of time attempting to fix this issue. What I found is that it is related to the default region entered in the "config" file in .aws: for some reason the default region was "us-west-2" in my config file, while the bucket I was attempting to add was in us-east-2.

I have seen some solutions attempt to correct this with environment variables, but I believe this solution is much simpler.
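The usual one-liner for this fix is aws configure set region us-east-2. As a sketch of what that ends up writing (the helper name is made up, and the demo targets a scratch path so nothing real is overwritten):

```shell
# Write a default-profile region stanza the way ~/.aws/config stores it; the
# target path is an argument so this demo does not touch the real config file.
write_default_region() {
  region="$1"; config_file="$2"
  printf '[default]\nregion = %s\n' "$region" > "$config_file"
}

write_default_region us-east-2 /tmp/aws_config_demo
cat /tmp/aws_config_demo
```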

Upvotes: 1

Suresh

Reputation: 835

Fixed it by using --endpoint-url option as stated here: https://github.com/aws/aws-cli/issues/4922

Looks like it is related to the fact that the VM I was on was in a different region than the bucket.

Upvotes: 3

yoges nsamy

Reputation: 1435

In my case, this was due to incorrect aws_secret_access_key.

To check, open the file ~/.aws/credentials by typing:

cat ~/.aws/credentials

The content should be something like below:

[default]
aws_access_key_id = xxx
aws_secret_access_key = xx

See if the aws_access_key_id & aws_secret_access_key matches your credentials. If it doesn't, edit and save changes.

P.S.: If you don't remember your aws_secret_access_key, generate a new key and secret by going to the AWS console --> your name --> My Security Credentials.


Then click 'Create access key':


Take note that you can only have two access keys at a time.

Upvotes: 21

Gagan Mani

Reputation: 161

It means that your AWS security credentials have expired. Simply creating new credentials will work.

  • Go to your AWS account -> My security credentials
  • click on Create New Access Key. Make a note of access key id and secret access key
  • Run aws configure and enter new credentials

Upvotes: 6

Chris

Reputation: 11

For me the problem was the / in my secret_key. To get round it, I pasted the secret key into a .dat file on my PC, copied it to the server, viewed the .dat file on the server with pg, and copied the secret key from there when using aws configure.

Upvotes: 1

user11589664

Reputation: 39

Please switch to the root user.

In my case I was running the command aws s3 ls as a standard user, and it was giving the SignatureDoesNotMatch error above.

I then switched to the root user using the sudo su command, tried the aws s3 ls command again, and it listed the S3 bucket names.

Upvotes: 2

Aditya

Reputation: 111

This can happen even when the machine time is not in sync with the NTP server.

sudo ntpdate ntp.ubuntu.com helped me solve this problem.

Upvotes: 11

Rishabh Sanghvi

Reputation: 51

The issue was with the AWS credentials: I copied the secret from an Excel file into a txt file, and somehow a few of the special characters were stripped away. Make sure to copy it properly.

Also try restarting the machine, and make sure AWS keys are not set as environment variables; you can check with printenv | grep 'AWS'

Upvotes: 3

schmudu

Reputation: 2243

Found my issue. I had old AWS keys in my environment variables. If you have environment variables named

AWS_SECRET_ACCESS_KEY
AWS_ACCESS_KEY_ID

the awscli will use those values instead of what is provided via ~/.aws/credentials.

Try running printenv | grep AWS and verify that those values aren't set. If they are, just run

unset AWS_SECRET_ACCESS_KEY
unset AWS_ACCESS_KEY_ID

and you should be good to go.

Upvotes: 30

gamechanger17

Reputation: 4609

Just check the time on the system you are running on. Make sure it is up to date.

Upvotes: 2

mowwwalker

Reputation: 17382

In my case, I had encryption but was sending the size the file was before being encrypted. If you get this error and your secret and key are correct, it's worth double-checking your md5, mimetype, size, and other attributes.

Upvotes: 2

Ananth

Reputation: 797

This error is caused by an incorrect AWS S3 access key or secret key.

Upvotes: 5
