Reputation: 157
I'm trying to extract the first link from a search results page using Beautiful Soup, but it can't find the link for some reason.
import requests
from bs4 import BeautifulSoup
band = "it's my life bon jovi"
url = f'https://www.letras.mus.br/?q={band}'
res = requests.get(url)
soup = BeautifulSoup(res.content, 'html.parser')
linkurl = soup.find_all("div", class_="wrapper")
for urls in linkurl:
    print(urls.get('href'))
#print(soup.a['href']) -- return /
#print(soup.a['data-ctorig']) -- return nothing
I would like to get the link from the data-ctorig or the href. Does the page have a script that is preventing me from getting this information, or is it a problem with my code?
Upvotes: 0
Views: 137
Reputation: 84455
The website uses Google Programmable Search Engine (CSE) to return cached results. This requires JavaScript to run in a browser, which doesn't happen with requests.
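A quick sketch to confirm this (reusing the CSS selector from the Selenium code below) shows that the result anchors are simply not present in the static HTML that requests receives:
import requests
from bs4 import BeautifulSoup

res = requests.get("https://www.letras.mus.br/?q=it's my life bon jovi")
soup = BeautifulSoup(res.content, 'html.parser')
# The CSE results are injected by JavaScript, so this selector matches nothing here
print(soup.select('.gsc-thumbnail-inside .gs-title'))  # []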
It is far easier to use Selenium and a more targeted CSS selector list to retrieve results.
While the wait doesn't seem to be needed in this case, I have added it for good measure.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

band = "it's my life bon jovi"
url = f'https://www.letras.mus.br/?q={band}'
d = webdriver.Chrome()
d.get(url)
links = WebDriverWait(d, 10).until(EC.presence_of_all_elements_located(
    (By.CSS_SELECTOR, ".gsc-thumbnail-inside .gs-title[target]")))
links = [link.get_attribute('href') for link in links]
print(links[0])
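If you also want the data-ctorig value mentioned in the question, the same elements can be queried for that attribute. The sketch below (assuming the attribute is present on the result anchors and that headless Chrome is acceptable) does that and also closes the browser when finished:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

band = "it's my life bon jovi"
url = f'https://www.letras.mus.br/?q={band}'
options = webdriver.ChromeOptions()
options.add_argument('--headless=new')  # run without opening a browser window
d = webdriver.Chrome(options=options)
try:
    d.get(url)
    links = WebDriverWait(d, 10).until(EC.presence_of_all_elements_located(
        (By.CSS_SELECTOR, ".gsc-thumbnail-inside .gs-title[target]")))
    # data-ctorig (if present) holds the direct URL; otherwise fall back to href
    print(links[0].get_attribute('data-ctorig') or links[0].get_attribute('href'))
finally:
    d.quit()  # close the browser even if the wait times out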
Upvotes: 1