Reputation: 39
I'm trying to scrape this site: https://www.wagr.com/mens-ranking. At the bottom right of the table there is a button to go to the next page, but Selenium keeps throwing exceptions when I try to click it. The code below is what I'm using to click the button:
next = driver.find_element(By.CSS_SELECTOR,'.next > a:nth-child(1)')
next.click()
Here's a screenshot of the traceback:
I can't understand why this isn't working; I'd be grateful for any tips.
Upvotes: 1
Views: 146
Reputation: 1076
You need to dismiss the cookie consent popup and scroll to the bottom of the page before clicking the next button; otherwise the banner intercepts the click.
Here is working code:
import time
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.common.by import By
from webdriver_manager.chrome import ChromeDriverManager
options = webdriver.ChromeOptions()
# options.add_argument("--headless")
options.add_argument("--no-sandbox")
options.add_argument("--disable-dev-shm-usage")
chrome_driver = webdriver.Chrome(
service=Service(ChromeDriverManager().install()),
options=options
)
with chrome_driver as driver:
    driver.implicitly_wait(15)
    driver.get('https://www.wagr.com/mens-ranking')
    time.sleep(3)
    # dismiss the cookie consent popup so it no longer covers the pagination controls
    cookie_btn = driver.find_element(By.XPATH, "/html/body/div[2]/div[3]/div/div/div[2]/div[1]/button")
    cookie_btn.click()
    time.sleep(0.3)
    # scroll to the bottom of the page so the next-page link is in view
    driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
    time.sleep(2)
    next_btn = driver.find_element(By.CSS_SELECTOR, '.next > a:nth-child(1)')  # li.next
    # next_btn = driver.find_element(By.XPATH, "//li[@class='next']")
    print("found next button:", next_btn.tag_name)
    next_btn.click()
    time.sleep(2)
    driver.quit()
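If the fixed time.sleep calls turn out to be flaky, the same flow can also be written with explicit waits. This is only a minimal sketch reusing the cookie-button XPath and the .next > a:nth-child(1) selector from the code above; those locators may need adjusting if the page markup changes.
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from webdriver_manager.chrome import ChromeDriverManager

driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()))
try:
    driver.get('https://www.wagr.com/mens-ranking')
    wait = WebDriverWait(driver, 15)

    # wait for the cookie banner button (same XPath as above) and dismiss it
    wait.until(EC.element_to_be_clickable(
        (By.XPATH, "/html/body/div[2]/div[3]/div/div/div[2]/div[1]/button"))).click()

    # locate the next-page link, scroll it into view, then click once it is clickable
    next_btn = wait.until(EC.presence_of_element_located(
        (By.CSS_SELECTOR, '.next > a:nth-child(1)')))
    driver.execute_script("arguments[0].scrollIntoView({block: 'center'});", next_btn)
    wait.until(EC.element_to_be_clickable(
        (By.CSS_SELECTOR, '.next > a:nth-child(1)'))).click()
finally:
    driver.quit()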
Upvotes: 1