Reputation: 1731
When running long processes I keep getting:
...
content_repl/replicate_content.py", line 667, in detach_and_delete
volumes = ec2_conn.get_all_volumes(volume_ids=volumes_ids)
File "/usr/lib/python2.7/site-packages/boto/ec2/connection.py", line 2158, in get_all_volumes
[('item', Volume)], verb='POST')
File "/usr/lib/python2.7/site-packages/boto/connection.py", line 1186, in get_list
raise self.ResponseError(response.status, response.reason, body)
boto.exception.EC2ResponseError: EC2ResponseError: 400 Bad Request
<?xml version="1.0" encoding="UTF-8"?>
<Response><Errors><Error><Code>RequestExpired</Code><Message>Request has expired.</Message></Error></Errors><RequestID>xxxxxxxx</RequestID></Response>
Note that many operations earlier in the script complete without problems. At the beginning of the script I have:
ec2_conn = boto.ec2.connect_to_region(AWS_REGION,
    aws_access_key_id=assumedRoleObject.credentials.access_key,
    aws_secret_access_key=assumedRoleObject.credentials.secret_key,
    security_token=assumedRoleObject.credentials.session_token,
    proxy=PROXY, proxy_port=PROXY_PORT)
Reading the Boto docs on the connect_xxx methods and connection pools, my understanding is that Boto should manage connections internally and open a new one when needed. Is that correct? Should I pass a parameter for that, or implement some retry logic myself?
Btw, I'm using boto v2.40. The app is single-threaded.
Upvotes: 2
Views: 972
Reputation: 1731
I've found that in the end the problem was just with the connection object. In my testing the assumedRoleObject could be reused without problems; just in case, though, I now recreate that object as well whenever I recreate the connection.
Connections are good for one hour; after that, requests start failing with RequestExpired.
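A minimal sketch of that recovery logic, under some assumptions: `make_ec2_conn`, `ROLE_ARN`, and the session name are illustrative placeholders (not from the original script), and the wrapper detects expiry via the `error_code` attribute that boto's `EC2ResponseError` carries, so it retries exactly once with a freshly assumed role:

```python
def retry_on_expiry(conn_factory, conn, call):
    """Run call(conn); if it fails because the temporary STS credentials
    expired (boto raises EC2ResponseError with error_code 'RequestExpired'),
    rebuild the connection once via conn_factory and retry."""
    try:
        return conn, call(conn)
    except Exception as exc:
        if getattr(exc, 'error_code', None) != 'RequestExpired':
            raise
        conn = conn_factory()  # fresh assume-role + fresh connection
        return conn, call(conn)


def make_ec2_conn():
    """Re-assume the role and open a new EC2 connection.
    ROLE_ARN and the session name are placeholders."""
    import boto.ec2
    import boto.sts
    sts = boto.sts.connect_to_region(AWS_REGION,
                                     proxy=PROXY, proxy_port=PROXY_PORT)
    assumed = sts.assume_role(ROLE_ARN, 'content-repl-session')
    return boto.ec2.connect_to_region(
        AWS_REGION,
        aws_access_key_id=assumed.credentials.access_key,
        aws_secret_access_key=assumed.credentials.secret_key,
        security_token=assumed.credentials.session_token,
        proxy=PROXY, proxy_port=PROXY_PORT)
```

Call sites then thread the (possibly rebuilt) connection back out, e.g. `ec2_conn, volumes = retry_on_expiry(make_ec2_conn, ec2_conn, lambda c: c.get_all_volumes(volume_ids=volume_ids))`.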
Upvotes: 1