Reputation: 8098
I am wondering if there is a common pattern for retrying requests a certain number of times (they might be failing because of a server error or a bad network). I came up with this, and I'd like to find better implementations out there.
cnt = 0
while cnt < 3:
    cnt += 1
    try:
        response = requests.get(uri)
        if response.status_code == requests.codes.ok:
            return json.loads(response.text)
    except requests.exceptions.RequestException:
        continue
return False
Upvotes: 0
Views: 3522
Reputation: 3952
You might want to consider introducing a wait between retries, as many transient problems can take more than a few seconds to clear. In addition, I would recommend a geometric increase in the wait time to give the system enough time to recover:
import time

cnt = 0
max_retry = 3
while cnt < max_retry:
    try:
        response = requests.get(uri)
        if response.status_code == requests.codes.ok:
            return json.loads(response.text)
        else:
            # Raise an exception for non-OK statuses so they are retried too
            # (HTTPError is a subclass of RequestException)
            response.raise_for_status()
    except requests.exceptions.RequestException as e:
        time.sleep(2**cnt)
        cnt += 1
        if cnt >= max_retry:
            raise e
In this case, your retries will happen after 1, 2 and 4 seconds. Just watch out for the maximum number of retries: increase it to 10 and the next thing you know the code is waiting about 17 minutes.
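To see where that 17-minute figure comes from, the cumulative wait is a geometric series. A quick check (the helper name here is my own, just for illustration):

```python
def total_backoff_seconds(max_retry):
    # Sum of the delays 2**0 + 2**1 + ... + 2**(max_retry - 1),
    # i.e. the total time spent sleeping if every attempt fails.
    return sum(2**cnt for cnt in range(max_retry))

print(total_backoff_seconds(3))   # 1 + 2 + 4 = 7 seconds
print(total_backoff_seconds(10))  # 1023 seconds, about 17 minutes
```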
Edit:
Taking a closer look at your code, it doesn't really make sense to return False when you have exhausted the retries. You should really be raising the exception to the caller so that the problem can be communicated. Also, you check for requests.codes.ok but define no else action.
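Putting both points together, the pattern can be wrapped in a small reusable helper. This is only a sketch under my own naming (`retry`, `base_delay` and `flaky` are hypothetical, not part of requests); it re-raises the last exception once attempts are exhausted instead of returning a sentinel like False:

```python
import time

def retry(func, max_retry=3, base_delay=1.0):
    """Call func(); on failure, wait base_delay * 2**attempt and try again.

    Re-raises the last exception after max_retry failed attempts,
    rather than returning False.
    """
    for attempt in range(max_retry):
        try:
            return func()
        except Exception:
            if attempt == max_retry - 1:
                raise  # retries exhausted: let the caller see the problem
            time.sleep(base_delay * 2**attempt)

# Example: a flaky callable that fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

print(retry(flaky, max_retry=3, base_delay=0.01))  # "ok" after two retries
```

In real use, `func` would be a closure over `requests.get(uri)` plus the status check, and the exception handler could be narrowed to `requests.exceptions.RequestException`.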
Upvotes: 5