RandomCat

Reputation: 187

except ConnectionError or TimeoutError not working

In case of a connection error, I want Python to wait and retry. Here's the relevant code, where "link" is a URL:

import requests
import urllib.request
import urllib.parse

from random import randint

try:
    r=requests.get(link)

except ConnectionError or TimeoutError:
    print("Will retry again in a little bit")
    time.sleep(randint(2500,3000))
    r=requests.get(link)

Yet I still periodically get a connection error, and I never see the text "Will retry again in a little bit", so I know the code is not retrying. What am I doing wrong? I'm pasting parts of the traceback below in case I'm misreading the error. TIA!

TimeoutError: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond

During handling of the above exception, another exception occurred:

requests.packages.urllib3.exceptions.ProtocolError: ('Connection aborted.', TimeoutError(10060, 'A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond', None, 10060, None))

During handling of the above exception, another exception occurred:

requests.exceptions.ConnectionError: ('Connection aborted.', TimeoutError(10060, 'A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond', None, 10060, None))

Upvotes: 1

Views: 5914

Answers (4)

B. Robinson

Reputation: 116

I had the same problem. It turns out that urllib3 relies on socket.py, which raises an OSError. So, you need to catch that:

try:
    r = requests.get(link)
except OSError as e:
    print("There was an error: {}".format(e))
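
A quick check using only builtin exceptions shows why a single `except OSError` is broad enough here: both exception types from the question's traceback derive from OSError, but neither derives from the other.

```python
# Both exception types from the question's traceback are OSError
# subclasses, so one `except OSError` handler covers them.
print(issubclass(ConnectionError, OSError))   # True
print(issubclass(TimeoutError, OSError))      # True

# They are siblings, though: a handler for one won't catch the other.
print(issubclass(TimeoutError, ConnectionError))  # False
```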

Upvotes: 0

Paul

Reputation: 312

For me, sending a custom User-Agent header with the request fixes this issue; it makes the script look like a regular browser.

Works:

url = "https://www.nasdaq.com/market-activity/stocks/amd"
headers = {'User-Agent': 'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.1.6) Gecko/20070802 SeaMonkey/1.1.4'}
response = requests.get(url, headers=headers)

Doesn't work:

url = "https://www.nasdaq.com/market-activity/stocks/amd"
response = requests.get(url)

Upvotes: 1

t.m.adam

Reputation: 15376

The second request is not inside a try block, so exceptions raised there are not caught. Also, your try-except block doesn't handle other exceptions that may occur.
You could use a loop to attempt the connection twice, and break if the request is successful.

import time
from random import randint

import requests

for _ in range(2):
    try:
        r = requests.get(link)
        break
    except (ConnectionError, TimeoutError):
        print("Will retry again in a little bit")
    except Exception as e:
        print(e)
    time.sleep(randint(2500, 3000))
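
One caveat: requests raises its own requests.exceptions.ConnectionError, not the builtin, so catching the builtin ConnectionError falls through to the generic `except Exception` branch. A sketch that catches the library's base RequestException instead is more reliable (the function name and the small `delay` default are my own; note also that time.sleep() takes seconds, so the question's randint(2500, 3000) would pause for over 40 minutes):

```python
import time

import requests

def get_with_retry(link, attempts=2, delay=1.0):
    """Fetch `link`, retrying on any requests-level failure."""
    last_error = None
    for attempt in range(attempts):
        try:
            return requests.get(link)
        except requests.exceptions.RequestException as error:
            # RequestException is the base class of requests' own
            # ConnectionError, Timeout, etc. -- the builtin
            # ConnectionError is NOT in that hierarchy.
            last_error = error
            print("Will retry again in a little bit")
            if attempt < attempts - 1:
                time.sleep(delay)
    # All attempts failed: re-raise the last requests error.
    raise last_error
```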

Upvotes: 0

ConorSheehan1

Reputation: 1725

I think you should use

except (ConnectionError, TimeoutError) as e:
    print("Will retry again in a little bit")
    time.sleep(randint(2500,3000))
    r=requests.get(link)

See this similar question, or check the docs.
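
The difference can be demonstrated with builtins alone: `ConnectionError or TimeoutError` is an ordinary boolean expression that evaluates to its first truthy operand, so the original handler is equivalent to `except ConnectionError` and never fires for a TimeoutError.

```python
# The `or` expression evaluates to just the first class:
print(ConnectionError or TimeoutError)  # <class 'ConnectionError'>

try:
    raise TimeoutError("simulated")
except ConnectionError or TimeoutError:   # same as: except ConnectionError
    handled = "single class"
except (ConnectionError, TimeoutError):   # the correct tuple form
    handled = "tuple"
print(handled)  # tuple
```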

Upvotes: 0
