barker

Reputation: 1055

python selenium unable to locate child element

I'm writing a program to iterate through elements on a webpage. I start the browser like so:

self.browser = webdriver.Chrome(executable_path="C:/Users/me/chromedriver.exe")
self.browser.get("https://www.google.com/maps/place/Foster+Street+Coffee/@36.0016436,-78.9018397,19z/data=!4m7!3m6!1s0x89ace473f05b7d39:0x42c63a92682d9ec3!8m2!3d36.0016427!4d-78.9012927!9m1!1b1")  

This opens the site, where I can find elements I'm interested in using:

reviews = self.browser.find_elements_by_class_name("section-review-line")  

Now I have a list of elements with class name "section-review-line", which seems to populate correctly. I'd like to iterate through this list of elements and pick out sub-elements with some logic. To get the sub-elements, which I know exist with class name "section-review-review-content", I try this:

for review in reviews:
    content = review.find_element_by_class_name("section-review-review-content")  

This errors out with:

selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"css selector","selector":".section-review-review-content"}

Upvotes: 1

Views: 4914

Answers (2)

supputuri

Reputation: 14135

OK, here is how to pull all of the information you need from each review.

reviews = driver.find_elements_by_class_name("section-review-content")
for review in reviews:
    reviewer = review.find_element_by_class_name("section-review-title").text
    # reviewer's total review count, e.g. "12 reviews"
    numOfReviews = review.find_element_by_xpath(".//div[@class='section-review-subtitle']//span[contains(.,'reviews')]").text.strip().replace('.','')
    # star rating is stored in the aria-label attribute
    numberOfStars = review.find_element_by_class_name("section-review-stars").get_attribute('aria-label').strip()
    publishDate = review.find_element_by_class_name("section-review-publish-date").text
    content = review.find_element_by_class_name("section-review-review-content").text

Upvotes: 1

barker

Reputation: 1055

Ah, figured it out: the page had an empty element at the top, which caused the lookup to error out. The large majority of the elements did not have this problem, so wrapping the lookup in a try/except solved it, like so:

    from selenium.common.exceptions import NoSuchElementException

    reviews = self.browser.find_elements_by_class_name("section-review-line")
    for review in reviews:
        try:
            content = review.find_element_by_class_name("section-review-review-content")
            rtext = content.find_element_by_class_name("section-review-text").text
        except NoSuchElementException:
            # skip the occasional empty review element
            continue
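A try/except works; an alternative (not shown in the answers above) is Selenium's plural `find_elements_*` methods, which return an empty list instead of raising `NoSuchElementException` when nothing matches, so the loop simply skips the empty element. A minimal sketch of that contract, using a hypothetical stub class in place of a real Selenium `WebElement` (no browser needed):

```python
# Hypothetical stub mimicking Selenium's lookup contract, for illustration only:
# find_element_* raises NoSuchElementException; find_elements_* returns a list.
class NoSuchElementException(Exception):
    pass

class FakeElement:
    def __init__(self, children):
        self._children = children  # maps class name -> list of child values

    def find_elements_by_class_name(self, name):
        # plural form: empty list when there is no match
        return self._children.get(name, [])

    def find_element_by_class_name(self, name):
        # singular form: raises when there is no match
        matches = self._children.get(name, [])
        if not matches:
            raise NoSuchElementException(name)
        return matches[0]

empty_review = FakeElement({})  # like the empty element at the top of the page
normal_review = FakeElement({"section-review-review-content": ["review text"]})

# The plural lookup never raises, so no try/except is needed:
texts = []
for review in [empty_review, normal_review]:
    for content in review.find_elements_by_class_name("section-review-review-content"):
        texts.append(content)
# texts == ["review text"]; the empty review was skipped silently
```

The trade-off is that an empty list is silent, so if a missing child indicates a real bug, the explicit try/except (or a length check) makes the failure easier to spot.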

Upvotes: 0
