adele dazim

Reputation: 537

The difference between the request time and the current time is too large

I'm downloading and deleting files from an S3 bucket. The bucket has around 50,000 files. After downloading and deleting roughly every 10,000 files, I get this error:

  File "/Library/Python/2.7/site-packages/boto/s3/key.py", line 544, in delete
    headers=headers)
  File "/Library/Python/2.7/site-packages/boto/s3/bucket.py", line 760, in delete_key
    query_args_l=None)
  File "/Library/Python/2.7/site-packages/boto/s3/bucket.py", line 779, in _delete_key_internal
    response.reason, body)
boto.exception.S3ResponseError: S3ResponseError: 403 Forbidden
<?xml version="1.0" encoding="UTF-8"?>
<Error>
  <Code>RequestTimeTooSkewed</Code>
  <Message>The difference between the request time and the current time is too large.</Message>
  <RequestTime>Sun, 07 Dec 2014 20:09:54 GMT</RequestTime>
  <ServerTime>2014-12-07T21:09:56Z</ServerTime>
  <MaxAllowedSkewMilliseconds>900000</MaxAllowedSkewMilliseconds>
</Error>

I already have NTP running on the Mac, syncing with time.apple.com. What else can I do to make this error go away? And is there a more efficient way to delete a large number of files/keys from an S3 bucket, short of clearing the whole bucket? The relevant code block:

    try:
        f.get_contents_to_filename(fname)
        if choice == "y":
            f.delete()
    except OSError as e:
        print e

Upvotes: 0

Views: 4164

Answers (2)

Marian

Reputation: 15338

We see this error occasionally, although the machines executing the code are definitely in sync via NTP and use the UTC time zone. The offset reported in the error message can then be as large as 30 minutes.

See here for some observations and a suggested fix in Python using a timeout decorator:

http://www.saltycrane.com/blog/2010/04/using-python-timeout-decorator-uploading-s3/
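
A minimal sketch of that approach, assuming a signal-based timeout as in the linked post (the retry helper and the 30-second limit are illustrative, not from the post; SIGALRM works on Unix/macOS only, in the main thread):

    import signal

    class TimeoutError(Exception):
        pass

    def timeout(seconds):
        # Decorator: abort the wrapped call with TimeoutError after
        # `seconds` using SIGALRM.
        def decorator(func):
            def handler(signum, frame):
                raise TimeoutError("timed out after %d seconds" % seconds)
            def wrapper(*args, **kwargs):
                old_handler = signal.signal(signal.SIGALRM, handler)
                signal.alarm(seconds)
                try:
                    return func(*args, **kwargs)
                finally:
                    signal.alarm(0)  # cancel any pending alarm
                    signal.signal(signal.SIGALRM, old_handler)
            return wrapper
        return decorator

    @timeout(30)
    def delete_key(key):
        key.delete()

    def delete_with_retry(key, attempts=3):
        # Abort a hung request and retry with a fresh one, instead of
        # letting it sit long enough to trip RequestTimeTooSkewed.
        for _ in range(attempts):
            try:
                return delete_key(key)
            except TimeoutError:
                pass  # request hung; try again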

In the comments, there is also another suggestion to fix this using shorter socket timeouts.
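
For the socket-timeout route, a minimal sketch using boto's config mechanism (the 10-second value is arbitrary, and it must be set before any connection is created):

    import boto

    # A short socket timeout makes a hung request fail fast so it can
    # be retried, rather than being answered minutes later and rejected.
    if not boto.config.has_section('Boto'):
        boto.config.add_section('Boto')
    boto.config.set('Boto', 'http_socket_timeout', '10')  # seconds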

My guess about the root cause: every now and then a request hangs in a queue on the Amazon side and is handled only after a huge delay. A security check then kicks in, refuses to fulfill the request, and responds with this error instead. So it's probably impossible to really "fix" this on our side.

Upvotes: 1

petertc

Reputation: 3911

Just a guess: maybe boto caches the timestamp, and the clock in your Mac drifts in the meantime.

Try "Delete Multiple Objects" API operation provided by AWS S3, it is more efficient way to delete many objects. In boto, it is "boto.s3.bucket.Bucket.delete_keys".

Upvotes: 1
