Reputation: 4439
I have been using the requests library to mine this website. I haven't made too many requests to it within 10 minutes. Say 25. All of a sudden, the website gives me a 404 error.
My question is: I read somewhere that getting a URL with a browser is different from getting a URL with something like requests, because a requests fetch does not get cookies and other things that a browser would. Is there an option in requests to emulate a browser so the server doesn't think I'm a bot? Or is this not an issue?
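To illustrate the cookie point, a rough sketch: a plain requests.get() call starts fresh each time, while a requests.Session keeps cookies (and default headers) across calls, which is a bit closer to what a browser does. The URLs and User-Agent string here are placeholders.
import requests

# a Session stores cookies the server sets and sends them back on later requests,
# which separate requests.get() calls do not do on their own
session = requests.Session()
session.headers.update({'User-Agent': 'Mozilla/5.0'})      # placeholder UA string
first = session.get('https://example.com/login')           # server may set cookies here
second = session.get('https://example.com/data')           # cookies are sent back automatically
print(second.status_code)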
Upvotes: 8
Views: 35373
Reputation: 311
The first answer is a bit off: Selenium is still detectable, because it is a webdriver and not a normal browser, and it has hardcoded values that can be detected using JavaScript. Most websites use fingerprinting libraries that can find these values. Luckily, there is a patched ChromeDriver called undetected_chromedriver that bypasses such checks.
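A minimal sketch of what using that patched driver looks like, assuming the undetected-chromedriver package is installed (pip install undetected-chromedriver); the URL is a placeholder.
import undetected_chromedriver as uc

# drop-in replacement for Selenium's Chrome driver with the common
# webdriver fingerprints patched out
driver = uc.Chrome()
driver.get('https://example.com')   # placeholder URL
print(driver.title)
driver.quit()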
Upvotes: 3
Reputation: 473873
Basically, at least one thing you can do is to send a User-Agent header:
import requests

# pretend to be a regular desktop browser by sending a browser-like User-Agent
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:20.0) Gecko/20100101 Firefox/20.0'}
response = requests.get(url, headers=headers)
Besides requests, you can simulate a real user by using selenium - it drives a real browser - and in this case there is clearly no easy way to distinguish your automated user from other users. Selenium can also make use of a "headless" browser.
Also, check if the web site you are scraping provides an API. If there is no API, or you are not using it, make sure you know whether the site actually allows automated web crawling like this; study the Terms of Use. You know, there is probably a reason why they block you after too many requests in a given period of time.
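One simple way to stay on the polite side of a rate limit is to space requests out; a minimal sketch, assuming a one-second delay is acceptable for the site (the URLs and delay are placeholders).
import time
import requests

urls = ['https://example.com/page1', 'https://example.com/page2']   # placeholder URLs
for url in urls:
    response = requests.get(url)
    print(url, response.status_code)
    time.sleep(1.0)   # pause between requests so the server is not hammered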
edit1: selenium uses a webdriver rather than a real browser; i.e., it exposes a webdriver = true flag (visible to page JavaScript as navigator.webdriver), making it far easier to detect than requests.
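For example, the flag the edit refers to can be read from JavaScript; a quick sketch of checking it from a Selenium session (assuming Chrome, with a placeholder URL).
from selenium import webdriver

driver = webdriver.Chrome()
driver.get('https://example.com')   # placeholder URL
# sites can run exactly this check in their own page scripts to spot automation
flag = driver.execute_script('return navigator.webdriver')
print(flag)   # typically True under a stock webdriver-controlled browser
driver.quit()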
Upvotes: 12
Reputation: 593
Things that can help in general:
Upvotes: 12