Reputation: 3
I'm trying to scrape tables using urllib and BeautifulSoup, and I get the error:
"urllib.error.HTTPError: HTTP Error 302: The HTTP server returned a redirect error that would lead to an infinite loop. The last 30x error message was: Found"
I've heard that this is related to the site requiring cookies, but I still get this error after my 2nd attempt:
import urllib.request
from bs4 import BeautifulSoup
import re
opener = urllib.request.build_opener()
opener.addheaders = [('User-agent', 'Mozilla/5.0')]
file = opener.open(testURL).read().decode()
soup = BeautifulSoup(file)
tables = soup.find_all('tr',{'style': re.compile("color:#4A3C8C")})
print(tables)
Upvotes: 0
Views: 666
Reputation: 15376
A few suggestions:
Use HTTPCookieProcessor if you must handle cookies.
A bare 'Mozilla/5.0' user agent is not enough for some sites, and they will keep redirecting; send a full browser user-agent string instead.
Catch HTTPError so you can see exactly why the request failed.
import urllib.request
import urllib.error
from bs4 import BeautifulSoup

# Build an opener that stores and resends cookies across redirects.
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor())
# A full browser user-agent string.
user_agent = 'Mozilla/5.0 (Windows NT 6.1; rv:54.0) Gecko/20100101 Firefox/54.0'
opener.addheaders = [('user-agent', user_agent)]

try:
    response = opener.open(testURL)
except urllib.error.HTTPError as e:
    print(e)
except Exception as e:
    print(e)
else:
    file = response.read().decode()
    soup = BeautifulSoup(file, 'html.parser')
    ... etc ...
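For the table extraction itself, a minimal continuation might look like the sketch below. It reuses the style regex from the question; the exact style value and the per-cell handling are assumptions, since they depend on the page being scraped.

import re

# Hypothetical continuation: pull the rows whose inline style matches the
# colour used in the question, then print each cell's text.
rows = soup.find_all('tr', {'style': re.compile("color:#4A3C8C")})
for row in rows:
    cells = [td.get_text(strip=True) for td in row.find_all('td')]
    print(cells)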
Upvotes: 1