Reputation: 1336
I am building a web crawler which scans websites for a Twitter link. I am new to Beautiful Soup and I am having a very hard time. I tried using regular expressions to parse the entire HTML of a page, but that worked even less well than Beautiful Soup. Currently my code grabs a website and attempts to parse it for a Twitter URL.
Naturally I know this will not always work, but right now everything gets returned as None and a Twitter link is never found, even though I know the sites contain them. Furthermore, about once every 5 links I also receive the error:
AttributeError: 'NoneType' object has no attribute 'group'
which I have specifically tested against. I really don't think this should be this hard, but given that it has been, I think I must be making a fundamental mistake with BeautifulSoup which I am just not seeing. Any ideas?
def twitter_grab(url):
    hdr = {'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11',
           'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
           'Accept-Charset': 'ISO-8859-1,utf-8;q=0.7,*;q=0.3',
           'Accept-Encoding': 'none',
           'Accept-Language': 'en-US,en;q=0.8',
           'Connection': 'keep-alive'}
    req = urllib2.Request(url, headers=hdr)
    response = urllib2.urlopen(req)
    soup = BeautifulSoup(response, 'html.parser')
    links = soup.find_all('a' or 'li')
    for tag in links:
        link = tag.get('href', None)
        if link is not None:
            text = re.search(r'http://www\.twitter\.com/(\w+)', link)
            if text is not None:
                handle = text.group(0)
                print handle
                return(handle)
Upvotes: 1
Views: 444
Reputation: 1290
You typically won't need regex with Beautiful Soup, as each part of the document is directly accessible: BS returns each tag as a dictionary-like object, so you can access its attributes as keys:
handles = [a["href"] for a in soup.find_all("a", href=True) if "twitter" in a["href"]]
This will return all the hyperlinked URLs that contain "twitter". If a website, for some reason, hasn't written the link inside an <a/>
tag, this will miss it.
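For illustration, here is a minimal self-contained sketch of this approach, run against an inline HTML snippet (the sample page and URLs are made up for the example):

```python
from bs4 import BeautifulSoup

# Illustrative sample page containing one Twitter link among other anchors
html = """
<html><body>
  <a href="https://twitter.com/someuser">Follow us</a>
  <a href="https://example.com/about">About</a>
  <li>a list item with no href</li>
</body></html>
"""

soup = BeautifulSoup(html, "html.parser")

# Tags behave like dictionaries: a["href"] reads the attribute directly,
# and href=True tells find_all to skip anchors that have no href at all.
handles = [a["href"] for a in soup.find_all("a", href=True)
           if "twitter" in a["href"]]

print(handles)  # ['https://twitter.com/someuser']
```

Passing href=True avoids the KeyError you would otherwise get on anchors without an href attribute, which is why no None-checking is needed here.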
Upvotes: 1