Python Learner

Reputation: 185

Beautiful Soup - HTML parser seems not to pull in elements after a comment

Just started learning Python (3.8) and I'm building a scraper to get some football stats. Here's the code so far.

I originally wanted to pull a div with id='div_alphabet', which is clearly in the HTML tree on the website, but for some reason bs4 wasn't pulling it in. I investigated further and noticed that when I pull in the parent div 'all_alphabet' and then look for all child divs, 'div_alphabet' is missing. The only thing weird about the HTML structure is the long block comment that sits right above 'div_alphabet'. Is this a potential issue?

https://www.pro-football-reference.com/players

import requests
from bs4 import BeautifulSoup

URL = 'https://www.pro-football-reference.com/'
homepage = requests.get(URL)
home_soup = BeautifulSoup(homepage.content, 'html.parser')

# Grab the relative link to the Players page from the site header nav
players_nav_URL = home_soup.find(id='header_players').a['href']

players_directory_page = requests.get(URL + players_nav_URL)
players_directory_soup = BeautifulSoup(players_directory_page.content, 'html.parser')

# 'div_alphabet' should be a child of 'all_alphabet', but it never shows up here
alphabet_nav = players_directory_soup.find(id='all_alphabet')
all_letters = alphabet_nav.find_all('div')
print(all_letters)
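To check whether that long comment is the culprit, here's a quick diagnostic I put together (not sure if this is the idiomatic way to do it); it reuses players_directory_soup from above and searches the page's HTML comments for 'div_alphabet':

from bs4 import Comment

# Look through every HTML comment on the page for the missing div
for comment in players_directory_soup.find_all(string=lambda text: isinstance(text, Comment)):
    if 'div_alphabet' in comment:
        # Re-parse the comment's contents as their own document
        inner_soup = BeautifulSoup(comment, 'html.parser')
        print(inner_soup.find(id='div_alphabet'))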

Upvotes: 1

Views: 185

Answers (2)

UWTD TV

Reputation: 910

Something like this code will do it:

import requests
from bs4 import BeautifulSoup


headers = {'User-Agent': 'Mozilla/5.0 '}
r = requests.get('https://www.pro-football-reference.com/players/', headers=headers)

soup = BeautifulSoup(r.text, 'lxml')
# Each letter section on the players index page is a div inside ul.page_index li
data = soup.select('ul.page_index li div')
for link in data:
    print(*[f'{a.get("href")}\n' for a in link.select('a')])

A more useful way to do this is to build a pandas DataFrame from the data and save it as a CSV or similar:

import requests
from bs4 import BeautifulSoup
import pandas as pd

players = []

headers = {'User-Agent': 'Mozilla/5.0 '}
r = requests.get('https://www.pro-football-reference.com/players/', headers=headers)

soup = BeautifulSoup(r.text, 'lxml')
data = soup.select('ul.page_index li div a')
for link in data:
    # Store the player name and an absolute URL built from the relative href
    players.append([link.get_text(strip=True), 'https://www.pro-football-reference.com' + link.get('href')])
print(players[0])
df = pd.DataFrame(players, columns=['Player name', 'Url'])
print(df.head())
df.to_csv('players.csv', index=False)

Upvotes: 1

AaronS

Reputation: 2335

links = [a['href'] for a in players_directory_soup.select('ul.page_index li div a')]
names = [a.get_text() for a in players_directory_soup.select('ul.page_index li div a')]

This gives you lists of the relative links and names of all the alphabetised players.

I wouldn't concern yourself with div_alphabet; it doesn't have any useful information.

Here we are selecting the ul tag with class "page_index". select() returns a list, so we loop over it and grab the href attribute from each a tag; get_text() gives you the names.

If you haven't come across list comprehensions yet, then this would also work:

links = []
for a in players_directory_soup.select('ul.page_index li div a'):
    links.append(a['href'])

names = [] 
for a in players_directory_soup.select('ul.page_index li div a'):
    names.append(a.get_text())
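
If you then want absolute URLs rather than relative ones, something along these lines should work (a small sketch using urljoin from the standard library, assuming the links and names lists built above):

from urllib.parse import urljoin

base = 'https://www.pro-football-reference.com/'

# Pair each player name with a full URL built from its relative href
players = [(name, urljoin(base, link)) for name, link in zip(names, links)]
print(players[:5])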

Upvotes: 1
