user12092724

Web-scraping using selenium: moving to next pages

How can I get the following information from this website, checking if there are more reviews on the next pages? I would like to use selenium and its webdriver.

The sole came completely unglued after about 4 months of wearing them in an office environment. I can't imagine a legitimate pair of Converse sneakers would have such shoddy quality. I'm not an expert but I think they're fake.

Either way these shoes are not worth the money.

I prefer to use selenium, as I can move to the next pages easily and store the collected data.

For each of these fields I should have a separate list collecting: authors, dates, stars, review titles and review bodies. An example could be the following:

https://www.amazon.com/Converse-Chuck-Taylor-Star-Core/dp/B07KLM7JRL/ref=sr_1_1?dchild=1&keywords=converse&qid=1596469913&sr=8-1&th=1

which has 2226 rating reviews.
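
To make the expected output concrete, the lists would look something like this (the values are just illustrative):

# illustrative target structure (sample values are hypothetical)
author = ['J. Smith', 'A. Doe']
dates = ['July 2, 2020', 'June 15, 2020']
stars = ['1.0', '4.0']
review_title = ['Not worth the money']
review_body = ['The sole came completely unglued after about 4 months...']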

Do you think this is doable with selenium?

Code (the code contains missing parts, and the search part is probably also wrong):

from bs4 import BeautifulSoup
import time
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions
from selenium.webdriver.support.wait import WebDriverWait


def spider():
    driver = webdriver.Chrome('path/chromedriver')

    # in th I should add page number info
    driver.get('https://www.amazon.com/Converse-Chuck-Taylor-Star-Core/dp/B07KLM7JRL/ref=sr_1_1?dchild=1&keywords=converse&qid=1596469913&sr=8-1&th=1')

    time.sleep(1)
    search = driver.find_element_by_name('q')
    time.sleep(2)
    search.submit()

    author = []
    dates = []
    score = []
    review_min = []
    review = []

    while True:
        soup = BeautifulSoup(driver.page_source, 'lxml')
        result_div = soup.find_all('div', attrs={'class': 'g'})
        time.sleep(2)
        for r in result_div:
            # here there should be the part to get info about author, dates, scores, ...
            time.sleep(1)
            # part where I append the scraped results

        next_page_btn = driver.find_elements_by_xpath("//a[@id='pnnext']")
        if len(next_page_btn) < 1:
            print("no more pages left")
            break

        element = WebDriverWait(driver, 100).until(
            expected_conditions.element_to_be_clickable((By.ID, 'pnnext')))
        driver.execute_script("return arguments[0].scrollIntoView();", element)
        element.click()
        time.sleep(2)

    driver.quit()

Upvotes: 3

Views: 1571

Answers (1)

Gravity API

Reputation: 879

Your solution needs to be composed of a few layers, each responsible for different actions and behavior.

First Layer

Responsible for navigation and page iteration; repeats for each page.

Second Layer

Responsible for items; extracts a single item's review information and repeats for each item on a page.

This is the trickiest part, since it has to open each item's reviews in a different tab (if you use 'back' the page will refresh and you will lose data): open the new tab, switch to it, extract, close it and switch back, so we are back at point 0 and ready for the next item.
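
In outline, the open/extract/close pattern looks like this (a minimal sketch; the link locator is a placeholder):

# open the item's reviews in a new tab, extract, then return to the results tab
link = item.find_element_by_xpath(".//a").get_attribute("href")  # placeholder locator
driver.execute_script("window.open('about:blank', '_blank');")
driver.switch_to.window(driver.window_handles[1])
driver.get(link)
# ... extract the reviews here ...
driver.close()
driver.switch_to.window(driver.window_handles[0])  # back to point 0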

Third Layer

Responsible for reviews; extracts all reviews for a single item and repeats for each of the item's review pages.
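
If you also want to walk through an item's review pages, the usual pattern is to click the "Next page" link until it is gone (the li.a-last selector below is an assumption about Amazon's markup and may need adjusting):

# sketch: iterate an item's review pages until there is no 'Next page' link
while True:
    # ... extract the reviews on the current page ...
    next_links = driver.find_elements_by_css_selector("li.a-last > a")
    if not next_links:
        break
    next_links[0].click()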

Summary

For each page, extract
    > items; for each item, extract
        > reviews

The result will be an array of review items in the following format:

{
    "product": "My Product",
    "link": "https://products/my_product",
    "data": [
        { "author": "foo", "date": "0000-000"... },
        { "author": "bar", "date": "0000-000"... },
        ...
    ]
}
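
If you prefer the flat per-field lists from the question, you can flatten this structure afterwards (a minimal sketch, assuming results is the list returned by the product cycle below):

# flatten the nested result into separate per-field lists
authors, dates, scores, reviews = [], [], [], []
for item in results:
    for r in item["data"]:
        authors.append(r["author"])
        dates.append(r["date"])
        scores.append(r["score"])
        reviews.append(r["review"])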

Code Sample

This will be your starting point; you can implement the missing parts. It will extract the reviews for all items on a single page.

Run the sample as is; just change the driver path.

import re

from selenium import webdriver
from selenium.webdriver import ActionChains
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions
from selenium.webdriver.support.wait import WebDriverWait


def spider(page_number: int):
    # setup: web driver > wait object > url format > page number
    driver = webdriver.Chrome('D:\\automation-env\\web-drivers\\chromedriver.exe')
    wait = WebDriverWait(driver, 15)
    url_format =\
        "https://www.amazon.com/Converse-Chuck-Taylor-Star-Core/dp/B07KLM7JRL/" \
        "ref=sr_1_1?" \
        "dchild=1&" \
        "keywords=converse&" \
        "qid=1596469913&" \
        "sr=8-1&" \
        "th={page_number}"

    try:
        # navigate
        driver.get(url_format.format(page_number=page_number))
        driver.maximize_window()

        # search your product
        __search(driver_wait=wait, search_for='converse')

        # cache item
        rate_locator = (By.XPATH, "//i[contains(@class,'a-star-small-')]")
        items = wait.until(expected_conditions.visibility_of_all_elements_located(rate_locator))

        # product cycle
        reviews = []
        for i in range(len(items)):
            reviews.append(__product_cycle(on_driver=driver, on_element=items[i], on_element_index=i + 1))

        # output
        print(reviews)

    except Exception as e:
        print(e)

    finally:
        if driver is not None:
            driver.quit()


# execute search product
def __search(driver_wait: WebDriverWait, search_for: str):
    # search
    search = driver_wait.until(expected_conditions.element_to_be_clickable((By.ID, 'twotabsearchtextbox')))
    search.clear()
    search.send_keys(search_for)
    search.submit()


# execute an extraction on single item in the products list
# you can add more logic to extract the rest of the review
def __product_cycle(on_driver, on_element, on_element_index):
    # hover the review element
    ActionChains(driver=on_driver).move_to_element(on_element).perform()

    # open reviews in new page (the index is here to handle amazon keeping in the DOM all reviews already inspected)
    wait = WebDriverWait(on_driver, 15)
    link_element_locator = (By.XPATH, f"(//a[.='See all customer reviews'])[{on_element_index}]")
    link_element = wait.until(expected_conditions.element_to_be_clickable(link_element_locator))
    link = link_element.get_attribute(name='href')

    on_driver.execute_script(script="window.open('about:blank', '_blank');")
    on_driver.switch_to.window(on_driver.window_handles[1])
    on_driver.get(link)

    # cache review elements
    review_locator = (By.XPATH, "//div[contains(@id,'customer_review-')]")
    review_elements = wait.until(expected_conditions.visibility_of_all_elements_located(review_locator))

    # extract reviews for page
    # if you want to iterate pages put this inside page iteration loop
    reviews = {
        "product": on_driver.title,
        "link": on_driver.current_url,
        "data": []
    }
    for e in review_elements:
        reviews["data"].append(__get_item_review(on_driver, e))

    # return to point 0
    on_driver.close()
    on_driver.switch_to.window(on_driver.window_handles[0])

    # results
    return reviews


# extracts a single item reviews collection
def __get_item_review(on_driver, on_element) -> dict:
    # locators
    author_locator = ".//span[@class='a-profile-name']"
    date_locator = ".//span[@data-hook='review-date']"
    score_locator = ".//a[.//i[@data-hook='review-star-rating']]"
    review_locator = ".//div[@data-hook='review-collapsed']/span"

    # data
    review_data = {
        'author': on_element.find_element_by_xpath(author_locator).text.strip(),
        'date': re.findall('(?<=on ).*', on_element.find_element_by_xpath(date_locator).text.strip())[0],
        'score': re.findall('\\d+\\.\\d+', on_element.find_element_by_xpath(score_locator).get_attribute("title"))[0],
        'review': on_element.find_element_by_xpath(review_locator).text.strip(),
    }

    # TODO: add more logic to get also the hidden reviews for this item.

    # results data
    return review_data


spider(page_number=1)
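
To add the first layer (page iteration), you can wrap the call in a loop (a minimal sketch, assuming page_number maps to a results page as in the URL template above):

# first layer: repeat the whole extraction for each results page
for page in range(1, 4):
    spider(page_number=page)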

Upvotes: 1
