Reputation: 3311
When I try this code to scrape a web page:
#import requests
import urllib.request
from bs4 import BeautifulSoup
#from urllib import urlopen
import re
webpage = urllib.request.urlopen('http://www.cmegroup.com/trading/products/#sortField=oi&sortAsc=false&venues=3&page=1&cleared=1&group=1').read
findrows = re.compile('<tr class="- banding(?:On|Off)>(.*?)</tr>')
findlink = re.compile('<a href =">(.*)</a>')
row_array = re.findall(findrows, webpage)
links = re.findall(findlink, webpage)
print(len(row_array))
iterator = []
I get an error like:
File "C:\Python33\lib\urllib\request.py", line 160, in urlopen
return opener.open(url, data, timeout)
File "C:\Python33\lib\urllib\request.py", line 479, in open
response = meth(req, response)
File "C:\Python33\lib\urllib\request.py", line 591, in http_response
'http', request, response, code, msg, hdrs)
File "C:\Python33\lib\urllib\request.py", line 517, in error
return self._call_chain(*args)
File "C:\Python33\lib\urllib\request.py", line 451, in _call_chain
result = func(*args)
File "C:\Python33\lib\urllib\request.py", line 599, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 403: Forbidden
Does the website think I'm a bot? How can I fix the problem?
Upvotes: 188
Views: 343291
Reputation: 85
Sometimes none of these techniques work. As a last resort, you can get the content from the Google Cache.
import requests
# The headers
headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:101.0) Gecko/20100101 Firefox/101.0'}
# The URL you want to scrape
url_2_scrap = 'https://www.my_url.com'
# Full URL to get the content
url_full = 'https://webcache.googleusercontent.com/search?q=cache:' + url_2_scrap
# Response of the request
response = requests.get(url_full, headers=headers)
# If the status code is 200, the request succeeded
if response.status_code == 200:
    print("OK! It works fine! ;-)")
# Otherwise, it failed
else:
    print("It doesn't work :-(")
Upvotes: 0
Reputation: 64
An easy, straightforward approach:
from bs4 import BeautifulSoup
import requests
url = "https://www.example.com"  # placeholder: the page you want to scrape
response = requests.get(url)
web_page = response.text
soup = BeautifulSoup(web_page, "html.parser")
Upvotes: 1
Reputation: 31
Open the developer tools and go to the Network tab. Pick one of the requests for the page you want to scrape; the expanded request details will show the browser's User-Agent header, which you can add to your own request, as in the sketch below.
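A minimal sketch of that idea, with a placeholder URL and a User-Agent value of the kind you would copy from your own browser's Network tab:
from urllib.request import Request, urlopen

# User-Agent copied from the browser's Network tab (placeholder value)
browser_ua = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36'
req = Request('https://www.example.com', headers={'User-Agent': browser_ua})
html = urlopen(req).read().decode('utf-8')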
Upvotes: 0
Reputation: 352
I pulled my hair out with this for a while, and the answer ended up being pretty simple. I checked the response text and was getting "URL signature expired", a message you normally wouldn't see unless you looked at the response body.
This means some URLs simply expire, usually for security purposes. Get a fresh URL and update it in your script. If there is no new URL for the content you're trying to scrape, then unfortunately you can't scrape it.
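With urllib, the body of a 403 response can be read from the HTTPError itself, which is one way to spot a message like this; a minimal sketch, with a placeholder URL:
from urllib.request import urlopen
from urllib.error import HTTPError

try:
    html = urlopen('https://www.example.com/signed-resource').read()
except HTTPError as e:
    # The error object carries the response body, which may explain the 403
    print(e.code)
    print(e.read().decode('utf-8', errors='replace'))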
Upvotes: 0
Reputation: 33046
This is probably because of mod_security or some similar server security feature which blocks known spider/bot user agents (urllib uses something like python urllib/3.3.0, which is easily detected). Try setting a known browser user agent with:
from urllib.request import Request, urlopen
req = Request(
url='http://www.cmegroup.com/trading/products/#sortField=oi&sortAsc=false&venues=3&page=1&cleared=1&group=1',
headers={'User-Agent': 'Mozilla/5.0'}
)
webpage = urlopen(req).read()
This works for me.
By the way, in your code you are missing the () after .read in the urlopen line, but I think that it's a typo.
TIP: since this is an exercise, choose a different, non-restrictive site. Maybe they are blocking urllib for some reason...
Upvotes: 375
Reputation: 1060
You can use urllib's build_opener like this:
import urllib.request

opener = urllib.request.build_opener()
opener.addheaders = [
    ('User-Agent', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36'),
    ('Accept', 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8'),
    ('Accept-Encoding', 'gzip, deflate, br'), ('Accept-Language', 'en-US,en;q=0.5'),
    ('Connection', 'keep-alive'), ('Upgrade-Insecure-Requests', '1'),
]
urllib.request.install_opener(opener)
urllib.request.urlretrieve(url, "test.xlsx")  # url is the address of the file you want to download
Upvotes: 2
Reputation: 11
I ran into this same problem and was not able to solve it using the answers above. I ended up getting around the issue by using requests.get() and then using the .text of the result instead of using read():
from requests import get

link = 'https://www.example.com'  # placeholder: the page you want to fetch
req = get(link)
result = req.text
Upvotes: 1
Reputation: 370
Adding a cookie to the request headers worked for me.
from urllib.request import Request, urlopen
# Function to get the page content
def get_page_content(url, head):
    """
    Function to get the page content
    """
    req = Request(url, headers=head)
    return urlopen(req)

url = 'https://example.com'
head = {
    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/99.0.4844.84 Safari/537.36',
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
    'Accept-Charset': 'ISO-8859-1,utf-8;q=0.7,*;q=0.3',
    'Accept-Encoding': 'none',
    'Accept-Language': 'en-US,en;q=0.8',
    'Connection': 'keep-alive',
    'referer': 'https://example.com',
    'cookie': """your cookie value (you can get that from your web page)"""
}
data = get_page_content(url, head).read()
print(data)
Upvotes: 5
Reputation: 2769
"This is probably because of mod_security or some similar server security feature which blocks known
spider/bot
user agents (urllib uses something like python urllib/3.3.0, it's easily detected)" - as already mentioned by Stefano Sanfilippo
from urllib.request import Request, urlopen
url="https://stackoverflow.com/search?q=html+error+403"
req = Request(url, headers={'User-Agent': 'Mozilla/5.0'})
web_byte = urlopen(req).read()
webpage = web_byte.decode('utf-8')
web_byte is a bytes object returned by the server, and the content of the page is usually UTF-8 encoded, so you need to decode web_byte using the decode method.
This solved the problem I was having while trying to scrape a website using PyCharm.
P.S. I use Python 3.4.
Upvotes: 28
Reputation: 127
Based on the previous answers, this has worked for me with Python 3.7 by increasing the timeout to 10.
from urllib.request import Request, urlopen
req = Request('Url_Link', headers={'User-Agent': 'XYZ/3.0'})
webpage = urlopen(req, timeout=10).read()
print(webpage)
Upvotes: 10
Reputation: 101
If you feel guilty about faking the user agent as Mozilla (see the comment on the top answer from Stefano), it could work with a non-urllib User-Agent as well. This worked for the sites I reference:
import urllib.request as urlrequest

req = urlrequest.Request(link, headers={'User-Agent': 'XYZ/3.0'})
urlrequest.urlopen(req, timeout=10).read()
My application is to test validity by scraping the specific links that I refer to in my articles; it is not a generic scraper.
Upvotes: 3
Reputation: 13
You can try it in two ways. The details are in this link.
1) Via pip
pip install --upgrade certifi
2) If that doesn't work, try running the Install Certificates.command that comes bundled with Python 3.* on Mac (go to your Python installation location and double-click the file):
open /Applications/Python\ 3.*/Install\ Certificates.command
Upvotes: 1
Reputation: 1557
It's definitely blocking you because of your use of urllib, based on the user agent. The same thing happened to me with OfferUp. You can create a new class called AppURLopener which overrides the user agent with Mozilla.
import urllib.request
class AppURLopener(urllib.request.FancyURLopener):
    version = "Mozilla/5.0"
opener = AppURLopener()
response = opener.open('http://httpbin.org/user-agent')
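To actually use the fetched page, you can read and decode the response the opener returns; a small follow-up sketch, assuming the body is UTF-8:
# Read the body from the opener's response and decode it (assuming UTF-8 content)
html = response.read().decode('utf-8')
print(html)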
Upvotes: 57
Reputation: 16371
Since the page works in a browser but not when called from within a Python program, it seems that the web app serving that URL recognizes that the content is not being requested by a browser.
Demonstration:
curl --dump-header r.txt 'http://www.cmegroup.com/trading/products/#sortField=oi&sortAsc=false&venues=3&page=1&cleared=1&group=1'
...
<HTML><HEAD>
<TITLE>Access Denied</TITLE>
</HEAD><BODY>
<H1>Access Denied</H1>
You don't have permission to access ...
</HTML>
and the headers dumped to r.txt contain the status line:
HTTP/1.1 403 Forbidden
Try sending a 'User-Agent' header that mimics a web browser.
NOTE: The page makes an Ajax call that builds the table you probably want to parse. You'll need to check the JavaScript logic of the page, or simply use a browser debugger (like Firebug's Net tab) to see which URL you need to call to get the table's content, as in the sketch below.
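A hedged sketch of that second step, once you have found the underlying data URL in the browser's Network/Net tab (the endpoint below is a placeholder, not the real CME Group API):
import requests

# Placeholder endpoint: substitute the XHR URL you find in the browser's Network/Net tab
data_url = 'https://www.example.com/api/products?sortField=oi&page=1'
resp = requests.get(data_url, headers={'User-Agent': 'Mozilla/5.0'})
resp.raise_for_status()
table_data = resp.json()  # such Ajax endpoints typically return JSON describing the table rows
print(table_data)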
Upvotes: 2