Reputation: 303
I am crawling many sites for data, but some links freeze my script permanently. This shouldn't happen, since I used a timeout like this:
page = requests.get(url,timeout=4)
I want a timeout for the whole request, so that when the request takes 4 seconds it stops trying.
I searched the requests documentation and found this code for a connect and read timeout:
r = requests.get(url, timeout=(3.05, 27))
However, I get a TypeError when I try to use it:
Timeout value connect was (3.05, 27), but it must be an int or float.
How can I get the timeout I want?
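The `timeout` argument in `requests` caps the connect and read phases separately, not total wall-clock time, so a slow server that keeps trickling bytes can still run past 4 seconds. One common workaround is to run the call in a worker thread and cap it with `Future.result(timeout=...)`. A minimal sketch; `fetch` here is a hypothetical stand-in for `requests.get` (it just sleeps) so the example runs without a network:

```python
import concurrent.futures
import time

def fetch(url):
    # Stand-in for requests.get(url); sleeps 2s to simulate a slow response.
    time.sleep(2)
    return f"body of {url}"

def fetch_with_deadline(url, deadline=4.0):
    # Run the blocking call in a worker thread and give up after `deadline`
    # seconds of wall-clock time, covering connect + read + everything else.
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    future = pool.submit(fetch, url)
    try:
        return future.result(timeout=deadline)
    except concurrent.futures.TimeoutError:
        return None  # deadline exceeded; the worker thread is abandoned
    finally:
        pool.shutdown(wait=False)

print(fetch_with_deadline("http://example.com", deadline=0.5))  # None: deadline hit
print(fetch_with_deadline("http://example.com", deadline=4.0))  # body returned
```

Note the abandoned worker thread keeps running in the background until its own socket-level timeouts fire, so it is still worth passing `timeout=` to the underlying request as a backstop.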
Upvotes: 2
Views: 3293
Reputation: 631
As AntoineGa commented on May 12 in
https://github.com/robotframework/robotframework/issues/4761
downgrading urllib3 to 1.26.15 works for me:
pip install urllib3==1.26.15
Upvotes: 0
Reputation: 180391
Based on a related issue here with Docker, it is a bug in python-requests that has been fixed in python-requests version 2.4.3-4. Upgrade to the latest version and you should be fine.
If you have pip, use pip install -U requests
Upvotes: 7