Anon Li

Reputation: 621

Python Selenium XPath not getting urls from website

I'm pretty sure this is a website specific thing because I've tried my code (modified the xpath) on other sites and it works. I'm trying to get all the PDF links on the listed website in the code line.

driver.find_elements_by_xpath(xpath) yields an empty list []

Code:

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

def scrape_url(url):

    xpath = '//*[@class="panel-body"]//a'

    options = Options()
    options.headless = True
    # change filepath of chromedriver
    driver = webdriver.Chrome(options=options, executable_path=r'C:\Users\User\Desktop\chromedriver')

    urls = set()
    try:
        driver.get(url)
        all_href_elements = driver.find_elements_by_xpath(xpath)
        print("all_href_elements", all_href_elements)  # <-- empty list []
        for href_element in all_href_elements:
            article_url_text = href_element.text
            print(article_url_text)
            if article_url_text == "PDF":
                article_url = href_element.get_attribute('href')
                print(article_url_text, article_url)
                if article_url:
                    urls.add(article_url)

        print("num of urls", len(urls))

    except Exception as e:
        print(e)
        print(url)
    finally:
        driver.quit()

url = 'https://www.govinfo.gov/committee/senate-armedservices?path=/browsecommittee/chamber/senate/committee/armedservices/collection/BILLS/congress/106'

scrape_url(url)


But when I run the same XPath query with the Chrome extension XPath Helper, it does return results. I suspect the URLs are generated dynamically and don't exist until the panel is "opened." But shouldn't loading the URL cause the panel to "open" so the web driver can see the links?

How would I get around this?

Thanks

Upvotes: 0

Views: 89

Answers (1)

PDHide

Reputation: 19959

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

Just use an explicit wait for the elements:

    all_href_elements = WebDriverWait(driver, 10).until(
        EC.presence_of_all_elements_located((By.XPATH, xpath))
    )
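For context, `WebDriverWait(...).until(...)` just polls the condition repeatedly until it returns something truthy or the timeout expires, which is why it copes with links that JavaScript injects after the initial page load. A minimal plain-Python sketch of that polling loop (the names `wait_until` and `find_links` are hypothetical, not Selenium API; the delayed list stands in for elements rendered by JS):

```python
import time

def wait_until(condition, timeout=10.0, poll=0.5):
    """Poll `condition` until it returns a truthy value or `timeout` elapses.

    This mirrors what WebDriverWait.until does with
    presence_of_all_elements_located: the found-elements list is returned
    once it is non-empty, otherwise the loop keeps polling.
    """
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError("condition not met within timeout")
        time.sleep(poll)

# Simulate a page whose links only appear ~0.2s after loading.
start = time.monotonic()

def find_links():
    # Stand-in for driver.find_elements_by_xpath(xpath): empty until
    # the "page" has finished rendering its dynamic content.
    return ["pdf1", "pdf2"] if time.monotonic() - start > 0.2 else []

found = wait_until(find_links, timeout=2.0, poll=0.05)
print(found)  # the links, once they exist
```

A plain `find_elements_by_xpath` call is the equivalent of calling `find_links()` exactly once, immediately: it returns whatever is in the DOM at that instant, which for this page is an empty list.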

Upvotes: 1
