Reputation: 357
import requests
from bs4 import BeautifulSoup
data = requests.get("http://www.basketball-reference.com/leagues/NBA_2014_games.html")
soup = BeautifulSoup(data.content)
soup.find_all("a")
for link in soup.find_all("a"):
    "<a href='%s'>%s</a>" % (link.get("href=/boxscores"), link.text)
I am trying to get the links for the box scores only, then run a loop and organize the data from the individual links into a CSV. I need to save the links in a list and loop over them, but that is where I am stuck, and I am not sure if this is even the proper way to do it.
Upvotes: 1
Views: 516
Reputation: 473873
The idea is to iterate over all links that have an href attribute (the a[href] CSS selector), construct an absolute link whenever the href value doesn't start with http, collect the links into a list of lists, and use writerows() to dump it to a CSV:
import csv
from urlparse import urljoin

from bs4 import BeautifulSoup
import requests

base_url = 'http://www.basketball-reference.com'

data = requests.get("http://www.basketball-reference.com/leagues/NBA_2014_games.html")
soup = BeautifulSoup(data.content)

# wrap each absolute URL in its own list so that csv.writerows() writes one link per row
links = [[urljoin(base_url, link['href']) if not link['href'].startswith('http') else link['href']]
         for link in soup.select("a[href]")]

with open('output.csv', 'wb') as f:
    writer = csv.writer(f)
    writer.writerows(links)
output.csv now contains:
http://www.sports-reference.com
http://www.baseball-reference.com
http://www.sports-reference.com/cbb/
http://www.pro-football-reference.com
http://www.sports-reference.com/cfb/
http://www.hockey-reference.com/
http://www.sports-reference.com/olympics/
http://www.sports-reference.com/blog/
http://www.sports-reference.com/feedback/
http://www.basketball-reference.com/my/auth.cgi
http://twitter.com/bball_ref
...
It is unclear exactly what your output should be, but this is, at least, something you can use as a starting point.
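Since you want the box score links only, you could also filter on the href value while collecting the links. The sketch below assumes the box score pages live under a /boxscores/ path on basketball-reference.com (check the page source to confirm that pattern); otherwise it follows the same approach as above:
import csv
from urlparse import urljoin

from bs4 import BeautifulSoup
import requests

base_url = 'http://www.basketball-reference.com'

data = requests.get("http://www.basketball-reference.com/leagues/NBA_2014_games.html")
soup = BeautifulSoup(data.content)

# keep only links whose href contains "/boxscores/" (assumed URL pattern for box score pages)
box_score_links = [[urljoin(base_url, link['href'])]
                   for link in soup.select("a[href]")
                   if '/boxscores/' in link['href']]

with open('box_scores.csv', 'wb') as f:
    writer = csv.writer(f)
    writer.writerows(box_score_links)
From there you can loop over box_score_links, request each page, and parse the individual box score tables before writing the data out.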
Upvotes: 2