Reputation:
I have my $AWS_ACCESS_KEY_ID and $AWS_SECRET_ACCESS_KEY environment variables set properly, and I run this code:
import boto
conn = boto.connect_s3()
and get this error:
boto.exception.NoAuthHandlerFound: No handler was ready to authenticate. 1 handlers were checked. ['HmacAuthV1Handler']
What's happening? I don't know where to start debugging.
It seems boto isn't picking up the values from my environment variables. If I pass the key ID and secret key as arguments to the connection constructor, it works fine.
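Concretely, the workaround that works looks like this (a minimal sketch of what I mean; reading the values back through os.environ also shows whether Python can see them at all):
import os
import boto

# Passing credentials explicitly bypasses boto's own environment lookup.
# If either line raises KeyError, the variable was never exported.
conn = boto.connect_s3(
    aws_access_key_id=os.environ['AWS_ACCESS_KEY_ID'],
    aws_secret_access_key=os.environ['AWS_SECRET_ACCESS_KEY'],
)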
Upvotes: 53
Views: 53665
Reputation: 2077
I had previously used s3-parallel-put
successfully, but it inexplicably stopped working, giving the error above, despite my having exported AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
The solution was to specify the credentials in the boto config file:
$ nano ~/.boto
Enter the credentials like so:
[Credentials]
aws_access_key_id = KEY_ID
aws_secret_access_key = SECRET_ACCESS_KEY
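With that file in place, the no-argument call from the question should pick the credentials up again. A quick check (a sketch; listing buckets is just a cheap round-trip to prove the credentials work):
import boto

conn = boto.connect_s3()  # reads ~/.boto automatically
print(conn.get_all_buckets())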
Upvotes: 1
Reputation: 1436
I was having this issue with a Flask application on EC2. I didn't want to put credentials in the application, so I managed permissions via IAM roles instead; that way you can avoid hard-coding keys into code. I then set a policy in the AWS console (I didn't even write it by hand, I just used the policy generator).
My code is exactly like the OP's. The other solutions here are good, but there is a way to grant permission without hard-coding access keys; see the check sketched below, followed by the no-keys call.
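Before relying on the role, you can confirm the instance actually exposes role credentials via the metadata service (a hedged sketch, assuming boto 2 on an EC2 instance with a role attached; the exact nesting of the returned dict may vary):
import boto.utils

# Lists the IAM role names the metadata service exposes; an empty result
# means no role is attached and the no-keys call below will fail.
meta = boto.utils.get_instance_metadata()
print(list(meta.get('iam', {}).get('security-credentials', {}).keys()))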
boto.connect_s3()
# no keys needed
Upvotes: 0
Reputation: 543
Following up on nealmcb's answer on IAM roles: whilst deploying EMR clusters using an IAM role, I had a similar issue where at times (not every time) this error would come up when connecting boto to S3.
boto.exception.NoAuthHandlerFound: No handler was ready to authenticate. 1 handlers were checked. ['HmacAuthV1Handler']
The Metadata Service can time out whilst retrieving credentials. Thus, as the docs suggest, I added a Boto section in the config and increased the number of retries for retrieving the credentials. Note that the default is a single attempt.
import boto
import ConfigParser

# Create the [Boto] config section if it doesn't already exist
try:
    boto.config.add_section("Boto")
except ConfigParser.DuplicateSectionError:
    pass

# Default is a single attempt; allow up to 20
boto.config.set("Boto", "metadata_service_num_attempts", "20")
http://boto.readthedocs.org/en/latest/boto_config_tut.html?highlight=retries#boto
Scroll down to: "You can control the timeouts and number of retries used when retrieving information from the Metadata Service (this is used for retrieving credentials for IAM roles on EC2 instances)".
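If you'd rather make this persistent than set it in code, the same option can live in the boto config file (a sketch based on the docs page above, using the same value of 20):
[Boto]
metadata_service_num_attempts = 20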
Upvotes: 10
Reputation: 26517
See the latest boto S3 introduction:
from boto.s3.connection import S3Connection
conn = S3Connection(AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
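As a quick sanity check (assuming the conn object above), listing your buckets is a cheap round-trip that fails fast if the credentials are wrong:
# A 403 here (boto.exception.S3ResponseError) means bad credentials
print([b.name for b in conn.get_all_buckets()])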
Upvotes: 3
Reputation: 1220
I just ran into this problem while using Linux and SES, and I hope it may help others with a similar issue. I had installed awscli and configured my keys by doing:
sudo apt-get install awscli
aws configure
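For reference, aws configure prompts for each value interactively (the values shown are placeholders):
$ aws configure
AWS Access Key ID [None]: YOUR_KEY_ID
AWS Secret Access Key [None]: YOUR_SECRET_KEY
Default region name [None]: us-east-1
Default output format [None]: json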
This sets up your credentials in ~/.aws/config, just like @huythang said. But boto looks for your credentials in ~/.aws/credentials, so copy them over:
cp ~/.aws/config ~/.aws/credentials
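After the copy, ~/.aws/credentials should contain something like this (placeholder values; a region line carried over from the config file should be harmless):
[default]
aws_access_key_id = YOUR_KEY_ID
aws_secret_access_key = YOUR_SECRET_KEY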
Assuming an appropriate policy is set up for the user those credentials belong to, you shouldn't need to set any environment variables.
Upvotes: 10
Reputation: 323
I found my answer here.
On Unix, first set up the AWS config:
$ vim ~/.aws/config
[default]
region = ap-northeast-1
aws_access_key_id = xxxxxxxxxxxxxxxx
aws_secret_access_key = xxxxxxxxxxxxxxxxx
And set the environment variables to the same values:
export AWS_ACCESS_KEY_ID="xxxxxxxxxxxxxxxx"
export AWS_SECRET_ACCESS_KEY="xxxxxxxxxxxxxxxxx"
Upvotes: 4
Reputation: 8680
On Mac, exported keys need to look like this: key=value. So exporting, say, the AWS_ACCESS_KEY_ID environment variable should look like this: AWS_ACCESS_KEY_ID=yourkey. If you have any quotation marks around your values, as mentioned in the answers above, boto will throw the above-mentioned error.
Upvotes: 0
Reputation: 13491
In my case the problem was that in IAM, "users by default have no permissions". It took me all day to track that down, since I was used to the original AWS authentication model (pre-IAM), in which what are now called "root" credentials were the only way.
There are lots of AWS documents on creating users, but only a few places where they note that you have to give them permissions for them to do anything. One is Working with Amazon S3 Buckets - Amazon Simple Storage Service, but even it doesn't really just tell you to go to the Policies tab, suggest a good starting policy, and explain how to apply it.
The wizard-of-sorts simply encourages you to "Get started with IAM users" and doesn't clarify that there is much more to do. Even if you poke around a bit, you just see, e.g., "Managed Policies: There are no managed policies attached to this user.", which doesn't suggest that you need a policy to do anything.
To establish a root-like user, see: Creating an Administrators Group Using the Console - AWS Identity and Access Management
I don't see a specific policy which simply allows read-only access to all of S3 (my own buckets as well as public ones owned by others).
Upvotes: 2
Reputation: 1859
You can now set these as arguments in the connect function call.
s3 = boto.connect_s3(AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
Just thought I'd add that in case anyone else searched like I did.
Upvotes: 1
Reputation: 251
I'm a newbie to both python and boto but was able to reproduce your error (or at least the last line of your error.)
You are most likely failing to export your variables in bash. If you just define them, they're only valid in the current shell; export them, and Python inherits the value. Thus:
$ AWS_ACCESS_KEY_ID="SDFGRVWGFVVDWSFGWERGBSDER"
will not work unless you also add:
$ export AWS_ACCESS_KEY_ID
Or you can do it all on the same line:
$ export AWS_ACCESS_KEY_ID="SDFGRVWGFVVDWSFGWERGBSDER"
Likewise for the other value. You can also put this in your .bashrc (assuming bash is your shell, and assuming you remember to export). To verify the export took effect, see the one-liner below.
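A quick way to check from the same shell (a sketch; it simply reads the variable back through Python, the way boto would see it):
$ python -c "import os; print(os.environ.get('AWS_ACCESS_KEY_ID'))"
If this prints None, the variable is either unset or defined but not exported.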
Upvotes: 15
Reputation: 301
I see you call them AWS_ACCESS_KEY_ID & AWS_SECRET_ACCESS_KEY, when it seems they should be set as AWSAccessKeyId & AWSSecretKey.
Upvotes: -4
Reputation: 1149
Boto will take your credentials from the environment variables. I've tested this with V2.0b3 and it works fine. It will give precedence to credentials specified explicitly in the constructor, but it will pick up credentials from the environment variables too.
The simplest way to do this is to put your credentials into a text file, and specify the location of that file in the environment.
For example (on Windows; I expect it will work just the same on Linux, but I have not personally tried that):
Create a file called "mycred.txt" and put it into C:\temp. This file contains two lines:
AWSAccessKeyId=<your access id>
AWSSecretKey=<your secret key>
Define the environment variable AWS_CREDENTIAL_FILE to point at C:\temp\mycred.txt:
C:\>SET AWS_CREDENTIAL_FILE=C:\temp\mycred.txt
Now your code fragment above:
import boto
conn = boto.connect_s3()
will work fine.
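On Linux, the equivalent would be (untested by me, as noted above; the path is a placeholder):
$ export AWS_CREDENTIAL_FILE=/home/youruser/mycred.txt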
Upvotes: 41