Reputation: 614
I am trying to scrape the seekingalpha.com news section as a personal project. However, it seems I am not able to successfully emulate a browser: once I get to page 8 or so, I get a 403 Forbidden response. If I open my browser in private mode, I can browse all of the pages manually, so my IP isn't being blocked.
I am using Requests and BeautifulSoup on Python 3.8.
I have:
- Added a legitimate User-Agent, and also tried random user agents (sketch below)
- Used a Requests Session, which I believe should automatically update cookies (?)
- Added a Referer header
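For the random user agents, I rotated the header per request with fake_useragent, along these lines (a simplified sketch of that attempt, not my full code):

import requests
from fake_useragent import UserAgent

ua = UserAgent()
session = requests.Session()

def get_with_random_ua(url):
    # Swap in a fresh random User-Agent before each request
    session.headers['User-Agent'] = ua.random
    return session.get(url)

That made no difference either; the 403 still shows up around page 8.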
Here is my code:
import requests
import time
import random
from bs4 import BeautifulSoup
from fake_useragent import UserAgent


class SeekingAlpha():
    ua = UserAgent()  # kept around for the random user-agent attempts; a fixed UA is used below
    BASE_URL = 'https://seekingalpha.com/'
    NEWS_URL = BASE_URL + 'articles?page={}'

    def __init__(self):
        self.session = requests.Session()
        self.session.headers['User-Agent'] = 'Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:52.0) Gecko/20100101 Firefox/52.0'
        # Warm up the session on the home page so cookies get set first
        response = self.session.get(self.BASE_URL)
        response.raise_for_status()
        # Note: the HTTP header is (mis)spelled 'Referer', not 'Referrer'
        self.session.headers['Referer'] = 'https://seekingalpha.com/'
        print(self.session.headers)

        self.master_urls = []
        for i in range(1, 100):
            page = self.session.get(self.NEWS_URL.format(i))
            time.sleep(random.randint(3, 5))
            page.raise_for_status()  # this is what raises on the 403 around page 8
            soup = BeautifulSoup(page.content, 'html.parser')
            links = soup.find_all('a', href=True)
            # Keep only article links tagged with sasource="all_articles"
            links = [link for link in links
                     if link.has_attr('sasource') and link['sasource'] == 'all_articles']
            self.master_urls.extend(links)


if __name__ == "__main__":
    master_urls = SeekingAlpha()
EDIT:
Here is what I see for page 8 via the browser (I removed the headers so as not to take up too much space in the post):
" LATEST ARTICLES
HIGHLIGHT:
All
Top Ideas
Editors' Picks
Small-Cap Insight
Outstanding Contribution
Most Popular
ARTICLES | NEWS | TRANSCRIPTS
Should I Open A Roth IRA Right Now? That Depends
Charles Lewis Sizemore, CFA • Thu, Apr. 30, 11:15 AM
China Continues To Lead World's Major Equity Regions In 2020
James Picerno • MCHI, SPY, VT• Thu, Apr. 30, 11:09 AM
Gold And Gas: 2 Anti-Recession Trades
Atlas Research • QQQ, UNG, SAND• Thu, Apr. 30, 11:05 AM
Excellent Total Return Bond Funds For Momentum-Based Fixed Income Portfolios
MyPlanIQ • TGMNX, BOND, DLTNX• Thu, Apr. 30, 11:04 AM
NXP's Share Price Already Assumes A Lot Of Growth And Improvement
Stephen Simpson, CFA • MCHP, RNECY, TXN• Thu, Apr. 30, 11:01 AM
[This article is one of the editors' picks] Chart Industries Worth Another Look With LNG Mostly Washed Out
Stephen Simpson, CFA • GTLS• Thu, Apr. 30, 10:53 AM
Dana Incorporated 2020 Q1 - Results - Earnings Call Presentation
SA Transcripts • DAN• Thu, Apr. 30, 10:43 AM
Don't Panic! Coronavirus, GDP, And Unemployment
CFA Institute Contributors • SPY, QQQ, DIA• Thu, Apr. 30, 10:42 AM
Predicting Depressions For Dummies, Part II
John Overstreet • SPY, QQQ, DIA• Thu, Apr. 30, 10:37 AM
Cognex Already Trading On Recovery Prospects
Stephen Simpson, CFA • FANUY, CGNX• Thu, Apr. 30, 10:29 AM
Meritor, Inc. 2020 Q2 - Results - Earnings Call Presentation
SA Transcripts • MTOR• Thu, Apr. 30, 10:28 AM
"
Upvotes: 1
Views: 410
Reputation: 414
Have you tried increasing the random sleep? I assume 3-5 seconds is too low, and the website might shut you down after your 8th request. Either increase it, or, if you get a 403, back off and try again after a while.
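Something like this back-off loop is what I mean (a rough sketch, not tested against Seeking Alpha; tune the delays yourself):

import time
import requests

def get_with_backoff(session, url, max_retries=5):
    delay = 10  # start well above your current 3-5 second sleep
    for attempt in range(max_retries):
        response = session.get(url)
        if response.status_code != 403:
            response.raise_for_status()
            return response
        # Got blocked: sleep, then retry with a doubled delay
        time.sleep(delay)
        delay *= 2
    raise RuntimeError('Still getting 403 for {} after {} tries'.format(url, max_retries))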
If you really need that data ASAP, configure a Tor proxy and route your requests through it for a while (it gives you a different external IP; drop your session too, just in case).
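With Tor running locally, pointing a fresh session at its SOCKS port looks roughly like this (you need requests[socks] installed; 9050 is Tor's default port, adjust if yours differs):

import requests

session = requests.Session()  # fresh session, old cookies dropped
session.proxies = {
    'http': 'socks5h://127.0.0.1:9050',   # socks5h resolves DNS through Tor too
    'https': 'socks5h://127.0.0.1:9050',
}
print(session.get('https://httpbin.org/ip').json())  # should show the Tor exit IP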
Sometimes, if your bot gets too annoying, the website's owner throws you out (at least, that's been my experience :-/).
Upvotes: 1