Reputation: 1032
I want to download files from S3 and save them into the tmp folder of my Lambda, but I encountered a strange and unexpected error that is driving me crazy: OverflowError: timestamp too large to convert to C _PyTime_t
Here is my code; did I miss something?
import boto3
import botocore

s3 = boto3.resource('s3')
bucket_name = 'my_bucket'
keys = ['private.pkcs8', 'public.pubkey']
for key in keys:
    try:
        local_file_name = 'tmp/' + key
        s3.Bucket(bucket_name).download_file(key, local_file_name)
    except botocore.exceptions.ClientError as e:
        if e.response['Error']['Code'] == "404":
            continue
        else:
            raise
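As an aside (and possibly unrelated to the exception itself), in Lambda only /tmp is writable; a relative path like 'tmp/' + key resolves inside the read-only deployment package. A small sketch of building the destination path, using the key names from the question:

```python
import os

def local_path_for(key, base="/tmp"):
    # Lambda functions may only write under /tmp; join the key's
    # basename onto it so nested S3 keys don't need missing directories.
    return os.path.join(base, os.path.basename(key))

print(local_path_for("private.pkcs8"))  # /tmp/private.pkcs8
```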
I get the same error when using an S3 Client instead of a Resource. From additional logging, I could confirm the error comes from the "download_file" call.
Please, help me!!!
EDIT: the full stack trace of the error
[ERROR] 2020-02-18T14:04:36.104Z f8f5aef0-18a3-41d0-951f-4d979fb24aa4 timestamp too large to convert to C _PyTime_t
Traceback (most recent call last):
  File "/opt/python/boto3/s3/inject.py", line 172, in download_file
    extra_args=ExtraArgs, callback=Callback)
  File "/opt/python/boto3/s3/transfer.py", line 307, in download_file
    future.result()
  File "/opt/python/s3transfer/futures.py", line 106, in result
    return self._coordinator.result()
  File "/opt/python/s3transfer/futures.py", line 265, in result
    raise self._exception
  File "/opt/python/s3transfer/tasks.py", line 255, in _main
    self._submit(transfer_future=transfer_future, **kwargs)
  File "/opt/python/s3transfer/download.py", line 343, in _submit
    **transfer_future.meta.call_args.extra_args
  File "/opt/python/botocore/client.py", line 276, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/opt/python/botocore/client.py", line 586, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (400) when calling the HeadObject operation: Bad Request

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/var/task/lb_get_tlb_connection_token_rds.py", line 181, in lambda_handler
    bucket.download_file(key, local_file_name, ExtraArgs={'VersionId': 'foo'})
  File "/opt/python/boto3/s3/inject.py", line 246, in bucket_download_file
    ExtraArgs=ExtraArgs, Callback=Callback, Config=Config)
  File "/opt/python/boto3/s3/inject.py", line 172, in download_file
    extra_args=ExtraArgs, callback=Callback)
  File "/opt/python/boto3/s3/transfer.py", line 325, in __exit__
    self._manager.__exit__(*args)
  File "/opt/python/s3transfer/manager.py", line 539, in __exit__
    self._shutdown(cancel, cancel_msg, cancel_exc_type)
  File "/opt/python/s3transfer/manager.py", line 578, in _shutdown
    self._submission_executor.shutdown()
  File "/opt/python/s3transfer/futures.py", line 474, in shutdown
    self._executor.shutdown(wait)
  File "/opt/python/concurrent/futures/thread.py", line 169, in shutdown
    t.join(sys.maxsize)
  File "/var/lang/lib/python3.7/threading.py", line 1048, in join
    self._wait_for_tstate_lock(timeout=max(timeout, 0))
  File "/var/lang/lib/python3.7/threading.py", line 1060, in _wait_for_tstate_lock
    elif lock.acquire(block, timeout):
OverflowError: timestamp too large to convert to C _PyTime_t
Upvotes: 1
Views: 382
Reputation: 3397
This looks like a runtime problem, probably related to this Python bug: https://bugs.python.org/issue25155
Try changing your Lambda runtime to the latest supported version (Python 3.8 as of this writing) and see whether the problem persists.
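The bottom of your trace is threading's t.join(sys.maxsize): CPython converts the timeout to nanoseconds in a signed 64-bit _PyTime_t, and sys.maxsize seconds overflows that. A minimal sketch of the same overflow, independent of boto3 (the exact message can vary by Python version and platform):

```python
import sys
import threading

# Lock/join timeouts are bounded by threading.TIMEOUT_MAX, which is
# far smaller than sys.maxsize seconds.
print("max supported timeout (s):", threading.TIMEOUT_MAX)

lock = threading.Lock()
lock.acquire()  # hold the lock so a second acquire must actually wait
try:
    lock.acquire(True, sys.maxsize)  # same pattern as t.join(sys.maxsize)
except OverflowError as exc:
    print("OverflowError:", exc)
```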
Upvotes: 0