Reputation: 731
I am using the AWS CLI on Ubuntu 16.04 LTS and am trying to list all buckets. In aws configure I have entered the IAM user's access key and secret key. This IAM user has permission to list buckets and can list them in the console. But running aws s3 ls with these keys gives me this error:
A client error (SignatureDoesNotMatch) occurred when calling the ListBuckets operation: The request signature we calculated does not match the signature you provided. Check your key and signing method.
I have also created a policy allowing this particular IAM user to list buckets.
I want to perform further sync operations, and make-all-files-public operations, via a shell script using this IAM user's credentials, and I do not want to use root credentials.
Upvotes: 43
Views: 131092
Reputation: 2784
In my case, I saw this error when trying to perform the s3:UploadPart operation.
We have a custom AWS SDK which was working fine for every other AWS operation except s3:UploadPart. It turned out that our SDK would strip the - from the end of the uploadId while constructing the URL for s3:UploadPart.
Even though the uploadId was incorrect, AWS doesn't say that the uploadId is invalid; instead it just throws the generic SignatureDoesNotMatch error.
Upvotes: 0
Reputation: 369
Mine came down to a time issue, as Aditya suggested above, but I'm running OS X on an EC2 instance, not Ubuntu. Running date showed the time was about 30 seconds behind.
I ran sudo sntp -sS time.apple.com to sync the time, and that resolved the problem.
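As a quick sanity check before re-syncing, here is a minimal sketch (assuming a POSIX shell with GNU date; the reference timestamp here is a stand-in for a trusted NTP source or the Date header of an AWS response) to see whether the drift exceeds the roughly 15-minute skew AWS tolerates before rejecting signed requests:

```shell
# Stand-in reference time; in practice compare against an NTP server
# or the Date header of an AWS response.
ref=$(date -u +%s)
now=$(date -u +%s)
drift=$(( now - ref ))
# AWS rejects signed requests when clocks differ by more than ~15 minutes.
if [ "${drift#-}" -le 900 ]; then
  echo "clock within tolerance (drift: ${drift}s)"
else
  echo "clock skew too large: ${drift}s"
fi
```

A drift of even a few minutes is safe for signing, but a dead CMOS battery or a paused VM can easily push it past the limit.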
Upvotes: 0
Reputation: 564
If you're using an old access key, check whether it has CLI access; if not, modify it or create a new access key with AWS CLI access.
Upvotes: 0
Reputation: 31
My credentials contained both "+" and "/" characters; recreating a key that contains only "+" worked. https://stackoverflow.com/a/74102924/19478714
Upvotes: 3
Reputation: 24770
This is an issue that has been around at least since 2014.
Even though many people assume that it is linked to the use of / and + characters, that seems unlikely: keys containing / and + work on one system but not on another, and freshly generated keys that also contain both / and + characters do work. So, there certainly has to be more to it.
Moving back and forth between a working and a non-working key consistently breaks and repairs it, meaning it's not just some unrelated issue that gets fixed while fiddling around with the CLI tool.
Generating a new key has always instantly fixed the problem for me, so that's definitely the way to get around it.
Upvotes: 2
Reputation: 11
I also got this error from the CircleCI graphical user interface.
It can be caused by a mismatch between the AWS environment variables in CircleCI and those in your terminal, when your terminal triggers the .yml pipeline from GitHub through CircleCI.
If you are setting up auto-deployment connecting GitHub and CircleCI to AWS, check that the variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are the same as the ones you entered on your local machine.
The command aws configure will show you at least the last 4 characters of your keys, so you can check whether they match the ones in AWS and CircleCI.
Upvotes: 1
Reputation: 146
I solved this issue by entering the following commands in the terminal:
aws configure set aws_secret_access_key ""
This resets the stored AWS secret key (note the configuration variable is aws_secret_access_key).
aws configure
This restarts the AWS configuration. Follow the prompts and input the details from the AWS user.
Upvotes: 1
Reputation: 917
For Windows: I encountered this error because my time zone was configured 3 hours off from the correct one.
If you have double-checked that ~/.aws/credentials matches what was entered in aws configure, sync the time zone this way:
Search for Change the Date and Time in the Windows Start menu -> turn on Set time zone automatically -> click Sync Now
Upvotes: 1
Reputation: 5748
I ran the following command via a Bitbucket pipeline:
aws s3 sync --delete ./public s3://abc123.net/ --exclude "*.css" --exclude "*.js" --exclude "*.json" --exclude "*.html" --exclude "*.svg" --exclude "*.ttf" --exclude "*.eot" --exclude "*.png" --exclude "*.jpg" --exclude "*.gif" --exclude "*.webp" --exclude "*.woff2"
I got the following error:
fatal error: An error occurred (SignatureDoesNotMatch) when calling the ListObjectsV2 operation: The request signature we calculated does not match the signature you provided. Check your key and signing method.
I fixed it by updating the AWS Access Key ID and AWS Secret Access Key in the deployment variables.
Upvotes: 1
Reputation: 129
I had the same error on Windows using PuTTY. The fix was to enclose the aws_secret_access_key in the ~/.aws/credentials file in double quotes ("). I think it's because the / in the aws_secret_access_key causes some issues on Windows.
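A sketch of what that looks like, using the placeholder key values from AWS's own documentation (not real credentials):

```ini
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
```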
Upvotes: 1
Reputation: 441
If you are facing this error, also check whether the keys are the same or different. The quickest solution is to create a new key and replace the old key in your code; that worked in my case.
After the change, verify with these commands:
$ aws configure
-> access key
-> secret key
-> region
-> output format
$ aws s3 ls
If you get the list of buckets in S3, the problem is solved.
Upvotes: 2
Reputation: 41
This worked for me (I was trying to download a large file):
aws configure set default.s3.multipart_threshold 1000MB
Upvotes: 4
Reputation: 614
My case was different: these keys, configured yesterday on my new Mac, were working fine, but today they were not. I compared them with the configuration on my old working Windows system and both looked the same. I couldn't understand it, so I copy-pasted the keys from Windows to the Mac again, and then it worked; it seems some invisible characters may have been added.
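A minimal sketch for spotting that kind of corruption (it creates a throwaway file here for illustration; point CRED at ~/.aws/credentials in practice, and note that grep -P assumes GNU grep):

```shell
# Flag trailing whitespace or non-printable/non-ASCII bytes in a credentials file.
CRED=$(mktemp)
printf '[default]\naws_access_key_id = AKIAEXAMPLE \naws_secret_access_key = abc/def+ghi\n' > "$CRED"
if grep -nP '[ \t]+$|[^\x20-\x7e]' "$CRED"; then
  echo "suspect characters found"
else
  echo "credentials file looks clean"
fi
rm -f "$CRED"
```

The sample file above has a trailing space on the access-key line, so the grep reports it; a pasted-in zero-width or non-breaking space would be caught by the non-ASCII part of the pattern.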
Upvotes: 1
Reputation: 21
In my case, I had a typo in the region name. Once I fixed it, everything worked.
Set the region name to None first, then change it back to the correct value, and you should be good.
Upvotes: 1
Reputation: 1008
Using single quotes (') instead of double quotes (") when exporting AWS_SECRET_ACCESS_KEY solved this problem for me with "aws s3 cp" commands.
Upvotes: 1
Reputation: 11
Newbie to Boto3, Python, and AWS automation here.
I got the error "A client error (SignatureDoesNotMatch) occurred when calling the CreateBucket operation: The request signature we calculated does not match the signature you provided. Check your key and signing method." when attempting to programmatically add an S3 bucket to my AWS account.
I use Jupyter as my IDE and spent a lot of time attempting to fix this issue. What I found is that it is related to the default region in the "config" file in .aws: for some reason the default region in my config file was "us-west-2", while the bucket I was attempting to add was in us-east-2.
I have seen some solutions that correct this with environment variables, but I believe this solution is much simpler.
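The setting in question lives in ~/.aws/config; a minimal sketch (the region value is just an example, use your bucket's region):

```ini
[default]
region = us-east-2
output = json
```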
Upvotes: 1
Reputation: 835
Fixed it by using the --endpoint-url option as stated here: https://github.com/aws/aws-cli/issues/4922
It looks like it is related to the fact that the VM I was on was in a different region than the bucket.
Upvotes: 3
Reputation: 1435
In my case, this was due to an incorrect aws_secret_access_key.
To check, open the file ~/.aws/credentials by typing:
cat ~/.aws/credentials
The content should be something like below:
[default]
aws_access_key_id = xxx
aws_secret_access_key = xx
See if the aws_access_key_id & aws_secret_access_key match your credentials. If they don't, edit and save the changes.
p/s: If you don't remember your aws_secret_access_key, generate a new key and secret by going to the AWS console --> your name --> My Security Credentials, then clicking 'Create access key'.
Take note that you can only have two access keys at a time.
Upvotes: 21
Reputation: 161
It means that your AWS security credentials have expired. Simply creating new credentials will fix it:
Create New Access Key, and make a note of the access key ID and secret access key.
Run aws configure and enter the new credentials.
Upvotes: 6
Reputation: 11
For me the problem was the / in my secret key. To get around it, I pasted the secret key into a .dat file on my PC, copied it to the server, viewed the .dat file on the server with pg, and copied the secret key from there when running aws configure.
Upvotes: 1
Reputation: 39
Please switch to the root user.
In my case I was running aws s3 ls as a standard user and it gave this error:
"AWS CLI listing S3 buckets gives SignatureDoesNotMatch error using IAM user credentials"
Then I switched to the root user with sudo su, ran aws s3 ls again, and it listed the S3 bucket names.
Upvotes: 2
Reputation: 111
This can happen even when the machine time is not in sync with the NTP server.
sudo ntpdate ntp.ubuntu.com
helped me solve this problem.
Upvotes: 11
Reputation: 51
The issue was with the AWS credentials: I copied the secret from an Excel file into a txt file and somehow a few of the special characters were stripped away. Make sure to copy it properly.
Also try restarting the machine, and make sure AWS credentials are not set in environment variables; you can check with printenv | grep 'AWS'
Upvotes: 3
Reputation: 2243
Found my issue. I had old AWS keys in my environment variables. If you have environment variables named
AWS_SECRET_ACCESS_KEY
AWS_ACCESS_KEY_ID
the awscli will use those values instead of what is provided via ~/.aws/credentials
.
Try running printenv | grep AWS
and verify that those values aren't set. If so then just run a
unset AWS_SECRET_ACCESS_KEY
unset AWS_ACCESS_KEY_ID
and you should be good to go.
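A small sketch of that check as a loop (assumes bash, for the ${!v} indirect expansion; AWS_SESSION_TOKEN is included as well, since a stale session token causes the same symptom):

```shell
# Warn about any AWS credential variables set in the environment,
# since they take precedence over ~/.aws/credentials.
for v in AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN; do
  if [ -n "${!v}" ]; then
    echo "$v is set and will override ~/.aws/credentials"
  fi
done
```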
Upvotes: 30
Reputation: 4609
Just check the time on the system you are running on, and make sure it is up to date.
Upvotes: 2
Reputation: 17382
In my case, I had encryption enabled but was sending the size the file had before being encrypted. If you get this error and your secret and key are correct, it's worth double-checking your md5, mimetype, size, and other attributes.
Upvotes: 2