Reputation: 1740
I'm trying to get the text values from the td tags of a table, but I always get an empty list.
Here is the link from where I'm trying to extract table values.
Here is what I have tried.
import requests
from bs4 import BeautifulSoup

response = requests.get('https://www.international-pc.com/product/interfine-629')
soup = BeautifulSoup(response.text, 'html.parser')
tables = soup.find("table", {"id": "documentTable-1"}).find_all("tbody")
print(tables)
Output: []
Here is the HTML:
<table id="documentTable-1" class="display dataTable no-footer" data-table="" role="grid" aria-describedby="documentTable-1_info" style="width: 1138px;">
<thead>
<tr role="row"><th class="sorting_asc" tabindex="0" aria-controls="documentTable-1" rowspan="1" colspan="1" style="width: 391px;" aria-sort="ascending" aria-label="PRODUCT DATASHEET: activate to sort column descending">PRODUCT DATASHEET</th><th class="sorting" tabindex="0" aria-controls="documentTable-1" rowspan="1" colspan="1" style="width: 455px;" aria-label="LANGUAGE: activate to sort column ascending">LANGUAGE</th><th class="sorting" tabindex="0" aria-controls="documentTable-1" rowspan="1" colspan="1" style="width: 232px;" aria-label="DOWNLOAD: activate to sort column ascending">DOWNLOAD</th></tr>
</thead>
<tbody><tr role="row" class="odd"><td class="sorting_1">Interfine 629</td><td>English (United Kingdom)</td><td><a href="https://international.brand.akzonobel.com/m/1ff7b0196600886b/original/Interfine_629_eng_A4_20151012.pdf" target="_blank">PDF</a></td></tr><tr role="row" class="even"><td class="sorting_1">Interfine 629</td><td>Korean (Korea, Republic of)</td><td><a href="https://international.brand.akzonobel.com/m/664b77540ff01960/original/Interfine_629_kor_A4_19000101.pdf" target="_blank">PDF</a></td></tr><tr role="row" class="odd"><td class="sorting_1">Interfine 629</td><td>Chinese (China)</td><td><a href="https://international.brand.akzonobel.com/m/6980eb615ebe99f0/original/Interfine_629_chi_s_A4_20150205.pdf" target="_blank">PDF</a></td></tr></tbody></table>
I want to extract the text values of all three rows from the table.
Any suggestions?
Upvotes: 1
Views: 1379
Reputation: 4315
The page at https://www.international-pc.com/product/interfine-629 renders the table data dynamically, so it is not present in the HTML that requests receives. You should try the Selenium automation library; it lets you scrape data from pages that render their content dynamically (via JS or AJAX).
Try this:
from bs4 import BeautifulSoup
from selenium import webdriver

driver = webdriver.Chrome("/usr/bin/chromedriver")
driver.get('https://www.international-pc.com/product/interfine-629')

soup = BeautifulSoup(driver.page_source, 'lxml')
tables = soup.find("table", {"id": "documentTable-1"}).find("tbody")
for tr in tables.find_all("tr"):
    for td in tr.find_all("td"):
        print(td.text)
        link = td.find("a", href=True)
        if link is None:
            continue
        print(link['href'])
Output:
Interfine 629
Chinese (China)
PDF
https://international.brand.akzonobel.com/m/6980eb615ebe99f0/original/Interfine_629_chi_s_A4_20150205.pdf
Interfine 629
Korean (Korea, Republic of)
PDF
https://international.brand.akzonobel.com/m/664b77540ff01960/original/Interfine_629_kor_A4_19000101.pdf
Interfine 629
English (United Kingdom)
PDF
https://international.brand.akzonobel.com/m/1ff7b0196600886b/original/Interfine_629_eng_A4_20151012.pdf
where '/usr/bin/chromedriver' is the path to the ChromeDriver executable used by Selenium.
Download ChromeDriver for the Chrome browser:
http://chromedriver.chromium.org/downloads
Install ChromeDriver (Ubuntu guide):
https://christopher.su/2015/selenium-chromedriver-ubuntu/
Selenium tutorial:
https://selenium-python.readthedocs.io/
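If the table sometimes has not finished rendering by the time you read the page source, you can also add an explicit wait before parsing. A minimal sketch under that assumption (it reuses the same documentTable-1 id; the headless option is optional and only avoids opening a browser window):

from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

options = webdriver.ChromeOptions()
options.add_argument("--headless")  # optional: run without opening a browser window

driver = webdriver.Chrome("/usr/bin/chromedriver", options=options)
driver.get('https://www.international-pc.com/product/interfine-629')

# Wait (up to 10 seconds) until the dynamically rendered table is in the DOM.
WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.ID, "documentTable-1"))
)

soup = BeautifulSoup(driver.page_source, 'lxml')
driver.quit()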
Upvotes: 2
Reputation: 28650
You could also go straight to the data source (the AJAX endpoint the page calls for the table) without using Selenium:
import requests
from pandas.io.json import json_normalize
url = 'https://www.international-pc.com/get/ajax/2305/TDS'
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.131 Safari/537.36'}
params = {
    'draw': '1',
    'columns[0][data]': 'PRODUCT DATASHEET',
    'columns[0][name]': '',
    'columns[0][searchable]': 'true',
    'columns[0][orderable]': 'true',
    'columns[0][search][value]': '',
    'columns[0][search][regex]': 'false',
    'columns[1][data]': 'LANGUAGE',
    'columns[1][name]': '',
    'columns[1][searchable]': 'true',
    'columns[1][orderable]': 'true',
    'columns[1][search][value]': '',
    'columns[1][search][regex]': 'false',
    'columns[2][data]': 'DOWNLOAD',
    'columns[2][name]': '',
    'columns[2][searchable]': 'true',
    'columns[2][orderable]': 'true',
    'columns[2][search][value]': '',
    'columns[2][search][regex]': 'false',
    'order[0][column]': '0',
    'order[0][dir]': 'asc',
    'start': '0',
    'length': '10',
    'search[value]': '',
    'search[regex]': 'false'}
data = requests.post(url, headers=headers, data=params).json()
df = json_normalize(data['data'])
Output:
print (df)
DOWNLOAD ... PRODUCT DATASHEET
0 <a href="https://international.brand.akzonobel... ... Interfine 629
1 <a href="https://international.brand.akzonobel... ... Interfine 629
2 <a href="https://international.brand.akzonobel... ... Interfine 629
[3 rows x 3 columns]
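The DOWNLOAD column still contains the raw <a> markup, so if you only want the PDF links you can strip them out afterwards. A small sketch, assuming every DOWNLOAD cell holds a single anchor tag like the ones shown above (column names come from the request parameters):

from bs4 import BeautifulSoup

# Replace the raw <a ...>PDF</a> markup in each DOWNLOAD cell with just its href.
df['DOWNLOAD'] = df['DOWNLOAD'].apply(
    lambda cell: BeautifulSoup(cell, 'html.parser').a['href'])
print(df[['PRODUCT DATASHEET', 'LANGUAGE', 'DOWNLOAD']])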
Upvotes: 0