Reputation: 47
I'm doing a course project, but the data I got from Amazon is missing the products' names, prices, and categories. Since I don't have an AWS account for the API, I decided to scrape this info based on the ASIN (product ID), which I do have. I don't really know much about web scraping yet (XML structure, for example). The scraping part of the code is adapted from a working forum-scraping project, but it's not working here.
I also tried BeautifulSoup, with code I found from a similar Amazon project, but that didn't work either. Since Selenium is more versatile, I'd really prefer to learn it this way. So, here's the code, with the non-functional XPaths marked:
from selenium import webdriver
from random import randint
from time import sleep

asin_set = ['0151004714', '0380709473', '0511189877', '0528881469', '0545105668', '0557348153', '0594033926', '0594296420', '0594450268', '0594451647', '0594459451', '0594481902', '059449771X']

driver = webdriver.Chrome()
list_of_dicts[:] = []

print('This is gonna be LEGEN... wait for it:')
for i in asin_set[:5]:
    url = f'https://www.amazon.com/gp/product/{i}'
    driver.get(url)
    product_info = {}
    product_info['asin'] = i
    try:
        name = driver.find_elements_by_xpath('//*[@id="' + x + '"]')  # <---
        product_info['name'] = name.text('productTitle')  # <---
    except:
        product_info['name'] = 0
    try:
        price = driver.find_elements_by_xpath('//*[@id="' + x + '"]')  # <---
        product_info['price'] = price.text  # <---
    except:
        product_info['price'] = 0
    try:
        category = driver.find_elements_by_xpath('//*[@id="' + x + '"]/ul/li[5]/span/a')  # <---
        product_info['category'] = category.get_attribute('wayfinding-breadcrumbs_feature_div')  # <---
    except:
        product_info['category'] = 0
    list_of_dicts.append(product_info)  # Append this product's scrape to the list of dicts
    print(str(len(list_of_dicts)) + ' . ', end='')  # Print the current number of scrapes
    sleep(randint(1, 2))  # Sleep 1 or 2 seconds between scrapes
print('DARY!')
The cell runs fine and the browser opens each page, but the elements aren't being found or stored correctly, and the list_of_dicts I end up with is this:
[{'asin': '0151004714', 'name': 0, 'price': 0, 'category': 0},
{'asin': '0380709473', 'name': 0, 'price': 0, 'category': 0},
{'asin': '0511189877', 'name': 0, 'price': 0, 'category': 0},
{'asin': '0528881469', 'name': 0, 'price': 0, 'category': 0},
{'asin': '0545105668', 'name': 0, 'price': 0, 'category': 0}]
Upvotes: 0
Views: 3752
Reputation: 33384
Instead of sleep, use WebDriverWait() to wait for visibility_of_element_located(), and use the following XPaths.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

asin_set = ['0151004714', '0380709473', '0511189877', '0528881469', '0545105668', '0557348153', '0594033926', '0594296420', '0594450268', '0594451647', '0594459451', '0594481902', '059449771X']

driver = webdriver.Chrome()
list_of_dicts = []

print('This is gonna be LEGEN... wait for it:')
for i in asin_set:
    url = 'https://www.amazon.com/gp/product/{}'.format(i)
    driver.get(url)
    product_info = {}
    product_info['asin'] = i
    # Wait up to 10 seconds for the product title to become visible
    WebDriverWait(driver, 10).until(EC.visibility_of_element_located((By.XPATH, '//span[@id="productTitle"]')))
    try:
        name = driver.find_element_by_xpath('//span[@id="productTitle"]')
        product_info['name'] = name.text.strip()
    except:
        product_info['name'] = 0
    try:
        price = driver.find_element_by_xpath("(//span[contains(@class,'a-color-price')])[1]")
        product_info['price'] = price.text
    except:
        product_info['price'] = 0
    try:
        category = driver.find_element_by_xpath("(//span[@class='a-list-item']/a)[last()]")
        product_info['category'] = category.text.strip()
    except:
        product_info['category'] = 0
    list_of_dicts.append(product_info)  # Append this product's scrape to the list of dicts
    print(str(len(list_of_dicts)) + ' . ', end='')  # Print the current number of scrapes
print('DARY!')
print(list_of_dicts)
Console output:
This is gonna be LEGEN... wait for it: 1 . 2 . 3 . 4 . 5 . 6 . 7 . 8 . 9 . 10 . 11 . 12 . 13 . DARY!
[{'price': '$16.80', 'name': 'The Last Life: A Novel', 'asin': '0151004714', 'category': 'eBook Readers'}, {'price': '$11.10', 'name': "Crows Can't Count", 'asin': '0380709473', 'category': 'eBook Readers'}, {'price': '$4.00', 'name': 'URC CLIKR-5 Time Warner Cable Remote Control UR5U-8780L', 'asin': '0511189877', 'category': 'Remote Controls'}, {'price': 'Currently unavailable.', 'name': 'Rand McNally 528881469 7-inch Intelliroute TND 700 Truck GPS', 'asin': '0528881469', 'category': 'Trucking GPS'}, {'price': '$13.97', 'name': 'Elephant Run', 'asin': '0545105668', 'category': 'eBook Readers'}, {'price': '$83.59', 'name': 'Knighthorse', 'asin': '0557348153', 'category': 'eBook Readers'}, {'price': 'Currently unavailable.', 'name': 'Barnes & Noble Dessin Leather Cover for Nook Color & Nook Tablet Digital Reader - Noir', 'asin': '0594033926', 'category': 'eBook Readers & Accessories'}, {'price': 'Currently unavailable.', 'name': 'Barnes & Noble Power Adapter for Nook Simple Touch', 'asin': '0594296420', 'category': 'AC Adapters'}, {'price': 'Currently unavailable.', 'name': 'Nook Hd + 9-Inch Groovy Protective Stand Cover, Storm Gray', 'asin': '0594450268', 'category': 'Cases'}, {'price': '$15.99', 'name': 'Barnes & Noble HDTV Adapter Kit for NOOK HD and NOOK HD+', 'asin': '0594451647', 'category': 'Chargers & Adapters'}, {'price': 'Only 3 left in stock - order soon.', 'name': 'Barnes & Noble Nook Color Tablet USB Cable Charger Newest Re-enforced Version', 'asin': '0594459451', 'category': 'Power Cables'}, {'price': '$47.88', 'name': 'Barnes & Noble OV/HB Universal Power Kit for Nook HD & HD+', 'asin': '0594481902', 'category': 'Power Adapters'}, {'price': '$39.88', 'name': 'Barnes & Noble Replacement Charging Sync Cable for Nook HD and HD+ (5 Feet)', 'asin': '059449771X', 'category': 'Power Cables'}]
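One caveat in case you are on a newer Selenium release: the find_element_by_xpath helpers were removed in Selenium 4.3+, so the same lookups would need the By-based form, roughly like this (same XPaths as above, untested here):

from selenium.webdriver.common.by import By  # already imported above

# Selenium 4+ style: pass the locator strategy explicitly
name = driver.find_element(By.XPATH, '//span[@id="productTitle"]')
price = driver.find_element(By.XPATH, "(//span[contains(@class,'a-color-price')])[1]")
category = driver.find_element(By.XPATH, "(//span[@class='a-list-item']/a)[last()]")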
Upvotes: 2