Reputation: 8451
This is failing when I run it inside a Docker container, but it works fine when I run it within a virtualenv on OS X. Any idea what could be going wrong? Are there any known issues with Docker + boto?
>>> import boto3
>>> s3 = boto3.client('s3')
>>> s3.download_file("mybucket", "myfile.txt", "myfile2.txt")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/conda/lib/python2.7/site-packages/boto3/s3/inject.py", line 104, in download_file
    extra_args=ExtraArgs, callback=Callback)
  File "/opt/conda/lib/python2.7/site-packages/boto3/s3/transfer.py", line 666, in download_file
    object_size = self._object_size(bucket, key, extra_args)
  File "/opt/conda/lib/python2.7/site-packages/boto3/s3/transfer.py", line 729, in _object_size
    Bucket=bucket, Key=key, **extra_args)['ContentLength']
  File "/opt/conda/lib/python2.7/site-packages/botocore/client.py", line 258, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/opt/conda/lib/python2.7/site-packages/botocore/client.py", line 548, in _make_api_call
    raise ClientError(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (403) when calling the HeadObject operation: Forbidden
Upvotes: 7
Views: 11041
Reputation: 11
Providing a session token did the job for me. Add ENV AWS_SESSION_TOKEN=**** in the Dockerfile along with the access key and secret access key.
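The same temporary credentials can also be passed explicitly in code. This is a minimal sketch with placeholder values (not real credentials); with the Dockerfile approach above, boto3 picks them up from the AWS_* environment variables instead:

import boto3

# Placeholder values: substitute your own temporary credentials.
s3 = boto3.client(
    's3',
    aws_access_key_id='AKIA...',
    aws_secret_access_key='****',
    aws_session_token='****',
)
s3.download_file('mybucket', 'myfile.txt', 'myfile2.txt')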
Upvotes: 0
Reputation: 29
I had the same issue with Docker running on Windows. The problem was that the Hyper-V clock stops once your PC goes into stand-by mode. This caused problems because the timestamp of my request and the clock on the AWS side no longer matched.
What solved the problem for me was simply restarting Docker / my computer.
Also keep in mind to pass your AWS profile / credentials into the container as environment variables; this seems to be a known Docker issue.
Upvotes: 1
Reputation: 366
I just chased down a similar issue, and it came down to the fact that the system time was wrong on the machine that returned the 403s when attempting to communicate with S3. The incorrect system time meant that signatures for requests were being computed incorrectly. Setting the system time to the correct value solved the problem; if you ensure that your Docker container keeps the system time correct (for example by using NTP), your problem might go away like mine did.
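If you suspect clock skew, a quick check from inside the container is to compare local time with the Date header of any HTTPS response. A sketch, not part of boto3 (Python 2 to match the question's environment; on Python 3 use urllib.request):

import time
import email.utils
import urllib2

# Fetch any HTTPS page and parse the server's Date header.
resp = urllib2.urlopen('https://aws.amazon.com')
server = email.utils.mktime_tz(email.utils.parsedate_tz(resp.headers['Date']))
# S3 rejects signed requests skewed by more than about 15 minutes
# (RequestTimeTooSkewed), which surfaces as a 403.
print('clock skew: %.1f seconds' % abs(time.time() - server))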
Upvotes: 1
Reputation: 55
What fixed the issue for me was to install awscli in my container and upgrade boto3 to a version above 1.4:
pip install awscli
pip install --upgrade boto3
Upvotes: 1
Reputation: 52393
Look at the error: An error occurred (403) when calling the HeadObject operation: Forbidden
It found the credentials, but it didn't have permission to access the bucket. Bottom line: update your IAM privileges to include s3:GetObject on your bucket's objects (arn:aws:s3:::mybucket/*) and s3:ListBucket on the bucket itself (arn:aws:s3:::mybucket), or just attach the managed policy AmazonS3ReadOnlyAccess to your IAM user/role.
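For reference, a minimal policy sketch granting just those two permissions (bucket name taken from the question):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::mybucket"
    },
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::mybucket/*"
    }
  ]
}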
You can also try this to check whether the correct credentials are being picked up:
>>> import botocore.session
>>> session = botocore.session.get_session()
>>> session.get_credentials().access_key
'AKIAABCDEF6RWSGI234Q'
>>> session.get_credentials().secret_key
'abcdefghijkl+123456789+qbcd'
Upvotes: 2
Reputation: 12135
I guess you are not setting the correct environment variables. Use env to check what is set on your host, then set similar variables within the container or pass them through to docker run with -e.
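For example (the image name myimage is hypothetical; -e VAR without a value forwards the host's value):

docker run -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_DEFAULT_REGION myimage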
Edit: Since you specified in your comments that you are using a credentials file, pass it into the container with -v ~/.aws/credentials:/root/.aws/credentials. This assumes a proper HOME is set and you are running as the root user. Some images have not done this, and you may need to put the file into the root folder at /.aws/credentials instead.
If you run as a specific user, the path needs to be in that user's home folder.
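For example, mounting the credentials file read-only (again with a hypothetical image name):

docker run -v ~/.aws/credentials:/root/.aws/credentials:ro myimage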
Upvotes: 1