Reputation: 333
How can I scrape the following structure to get only the h3 and h4 above the h5 with string "Prem League", and the div class="fixres__item" directly below that h5?
I want the text from the h3 and h4, and from inside the div I need the text from a span nested inside another span.
So when the h5 string is "Prem League", I want the h4 and h3 directly above it, and I also need to get various elements out of the fixres__item directly below it.
<div class="fixres__body" data-url="" data-view="fixture-update" data-controller="fixture-update" data-fn="live-refresh" data-sport="football" data-lite="true" id="widgetLite-6">
<h3 class="fixres__header1">November 2018</h3>
<h4 class="fixres__header2">Saturday 24th November</h4>
<h5 class="fixres__header3">Prem League</h5>
<div class="fixres__item">stuff in here</div>
<h4 class="fixres__header2">Wednesday 28th November</h4>
<h5 class="fixres__header3">UEFA Champ League</h5>
<div class="fixres__item">stuff in here</div>
<h3 class="fixres__header1">December 2018</h3>
<h4 class="fixres__header2">Sunday 2nd December</h4>
<h5 class="fixres__header3">Prem League</h5>
<div class="fixres__item">stuff in here</div>
This is the code I have so far, but it includes data from the divs below the h5 string "UEFA Champ League", which I do not want. I only want data from the divs below the h5 heading "Prem League". For example, I do not want PSG in the output, because it comes from the div below the h5 heading "UEFA Champ League".
My Code -
import requests
from bs4 import BeautifulSoup

def squad_fixtures():
    team_table = ['https://someurl.com/liverpool-fixtures']
    team_home_namesall = []
    for url in team_table:
        # team_fixture_urls = [i.replace('-squad', '-fixtures') for i in team_table]
        squad_r = requests.get(url)
        premier_squad_soup = BeautifulSoup(squad_r.text, 'html.parser')
        premier_fix_body = premier_squad_soup.find('div', {'class': 'fixres__body'})
        premier_fix_divs = premier_fix_body.find_all('div', {'class': 'fixres__item'})
        for item in premier_fix_divs:
            team_home = item.find_all('span', {'class': 'matches__item-col matches__participant matches__participant--side1'})
            for span in team_home:
                team_home_names = span.find('span', {'class': 'swap-text--bp30'})['title']
                team_home_namesall.append(team_home_names)
    print(team_home_namesall)
The output
['Watford', 'PSG', 'Liverpool', 'Burnley', "B'mouth", 'Liverpool', 'Liverpool', 'Wolves', 'Liverpool', 'Liverpool', 'Man City', 'Brighton', 'Liverpool', 'Liverpool', 'West Ham', 'Liverpool', 'Man Utd', 'Liverpool', 'Everton', 'Liverpool', 'Fulham', 'Liverpool', "So'ton", 'Liverpool', 'Cardiff', 'Liverpool', 'Newcastle', 'Liverpool']
Upvotes: 1
Views: 1056
Reputation: 5303
It seems like your challenge is restricting the scraping to just the Premier League h5 and its associated content.
Note: your question states the string of the h5 should be "Prem League", but it in fact appears to be "Premier League" when I look at the response.
This HTML appears to be pretty flat and undifferentiated in structure, so it looks like the best thing is to walk through the siblings previous and next from the h5, which is itself fairly easy to locate:
import re
from bs4 import BeautifulSoup, Tag
import requests

prem_league_regex = re.compile(r"Premier League")

def squad_fixtures():
    team_table = ['https://www.skysports.com/liverpool-fixtures']
    for i in team_table:
        squad_r = requests.get(i)
        soup = BeautifulSoup(squad_r.text, 'html.parser')
        body = soup.find('div', {'class': 'fixres__body'})
        h5s = body.find_all('h5', {'class': 'fixres__header3'}, text=prem_league_regex)
        for h5 in h5s:
            prev_tag = find_previous(h5)
            if prev_tag.name == 'h4':
                print(prev_tag.text)
                prev_tag = find_previous(prev_tag)
                if prev_tag.name == 'h3':
                    print(prev_tag.text)
            fixres_item_div = find_next(h5)
            """
            get the things you need from fixres__item now that you have it...
            """

def find_previous(tag):
    prev_tag = tag.previous_sibling
    while not isinstance(prev_tag, Tag):
        prev_tag = prev_tag.previous_sibling
    return prev_tag

def find_next(tag):
    next_tag = tag.next_sibling
    while not isinstance(next_tag, Tag):
        next_tag = next_tag.next_sibling
    return next_tag
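As a quick offline check of the sibling-walk idea, here is a minimal, self-contained sketch run against the HTML snippet from your question (the "Item A/B/C" text is placeholder content of mine, and I use bs4's built-in find_previous_sibling/find_next_sibling, which skip bare text nodes the same way the hand-rolled helpers do — note a filtered find_next_sibling scans forward, so it is slightly looser than checking the immediate sibling):

```python
from bs4 import BeautifulSoup

# The structure from the question, with placeholder item text.
html = """
<div class="fixres__body">
<h3 class="fixres__header1">November 2018</h3>
<h4 class="fixres__header2">Saturday 24th November</h4>
<h5 class="fixres__header3">Prem League</h5>
<div class="fixres__item">Item A</div>
<h4 class="fixres__header2">Wednesday 28th November</h4>
<h5 class="fixres__header3">UEFA Champ League</h5>
<div class="fixres__item">Item B</div>
<h3 class="fixres__header1">December 2018</h3>
<h4 class="fixres__header2">Sunday 2nd December</h4>
<h5 class="fixres__header3">Prem League</h5>
<div class="fixres__item">Item C</div>
</div>
"""

soup = BeautifulSoup(html, 'html.parser')
body = soup.find('div', {'class': 'fixres__body'})

results = []
for h5 in body.find_all('h5', string='Prem League'):
    # Walk backwards past any intervening tags/text to the nearest h4 and h3,
    # then forwards to the nearest fixres__item div.
    h4 = h5.find_previous_sibling('h4')
    h3 = h5.find_previous_sibling('h3')
    item = h5.find_next_sibling('div', {'class': 'fixres__item'})
    results.append((h3.text, h4.text, item.text))

print(results)
# The UEFA Champ League section (Item B) is skipped entirely.
```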
Upvotes: 1