user11669928

How to extract data from two tables in a page with same class?

I want to get or select data from two different tables with the same class.

I tried getting it with soup.find_all, but formatting the data is getting tough.

There are many tables with the same class. I need to get only the values (no labels) from the tables.

URL: https://www.redbook.com.au/cars/details/2019-honda-civic-50-years-edition-auto-my19/SPOT-ITM-524208/

TABLE 1:

<div class="bh_collapsible-body" style="display: none;">
    <table border="0" cellpadding="2" cellspacing="2" class="prop-list">
        <tbody>
            <tr>
                <td class="item">
                    <table>
                        <tbody>
                            <tr>
                                <td class="label">Rim Material</td>
                                <td class="value">Alloy</td>
                            </tr>
                        </tbody>
                    </table>
                </td>
                <td class="item">
                    <table>
                        <tbody>
                            <tr>
                                <td class="label">Front Tyre Description</td>
                                <td class="value">215/55 R16</td>
                            </tr>
                        </tbody>
                    </table>
                </td>
            </tr>

            <tr>
                <td class="item">
                    <table>
                        <tbody>
                            <tr>
                                <td class="label">Front Rim Description</td>
                                <td class="value">16x7.0</td>
                            </tr>
                        </tbody>
                    </table>
                </td>
                <td class="item">
                    <table>
                        <tbody>
                            <tr>
                                <td class="label">Rear Tyre Description</td>
                                <td class="value">215/55 R16</td>
                            </tr>
                        </tbody>
                    </table>
                </td>
            </tr>

            <tr>
                <td class="item">
                    <table>
                        <tbody>
                            <tr>
                                <td class="label">Rear Rim Description</td>
                                <td class="value">16x7.0</td>
                            </tr>
                        </tbody>
                    </table>
                </td>
                <td></td>
            </tr>
        </tbody>
    </table>
</div>
</div> // I think this is an extra closing </div>

TABLE 2:

<div class="bh_collapsible-body" style="display: none;">
    <table border="0" cellpadding="2" cellspacing="2" class="prop-list">
        <tbody>
            <tr>
                <td class="item">
                    <table>
                        <tbody>
                            <tr>
                                <td class="label">Steering</td>
                                <td class="value">Rack and Pinion</td>
                            </tr>
                        </tbody>
                    </table>
                </td>
                <td></td>
            </tr>
        </tbody>
    </table>
</div>
</div> // I think this is an extra closing </div>

What I have tried:

I tried getting the first table's contents with XPath, but it gives both the values and the labels.

table1 = driver.find_element_by_xpath("//*[@id='features']/div/div[5]/div[2]/div[1]/div[1]/div/div[2]/table/tbody/tr[1]/td[1]/table/tbody/tr/td[2]")

I tried to split the data, but it didn't work. I've provided the URL of the page in case you want to check it.

Upvotes: 1

Views: 781

Answers (3)

furas

Reputation: 143017

You don't have to do it in one xpath. You can use one xpath to get all <table class="prop-list"> elements, then use an index to select a table from the list, and use another xpath to get the values from that one table.

I use BeautifulSoup for this, but with xpath it should be similar.

import requests
from bs4 import BeautifulSoup as BS

url = 'https://www.redbook.com.au/cars/details/2019-honda-civic-50-years-edition-auto-my19/SPOT-ITM-524208/'

text = requests.get(url, headers={'User-Agent': 'Mozilla/5.0'}).text

soup = BS(text, 'html.parser')

all_tables = soup.find_all('table', {'class': 'prop-list'}) # xpath('//table[@class="prop-list"]')
#print(len(all_tables))

print("\n--- Engine ---\n")
all_labels = all_tables[3].find_all('td', {'class': 'label'}) # xpath('.//td[@class="label"]')
all_values = all_tables[3].find_all('td', {'class': 'value'}) # xpath('.//td[@class="value"]')
for label, value in zip(all_labels, all_values):
    print('{}: {}'.format(label.text, value.text))

print("\n--- Fuel ---\n")
all_labels = all_tables[4].find_all('td', {'class': 'label'})
all_values = all_tables[4].find_all('td', {'class': 'value'})
for label, value in zip(all_labels, all_values):
    print('{}: {}'.format(label.text, value.text))

print("\n--- Stearing ---\n")
all_labels = all_tables[7].find_all('td', {'class': 'label'})
all_values = all_tables[7].find_all('td', {'class': 'value'})
for label, value in zip(all_labels, all_values):
    print('{}: {}'.format(label.text, value.text))

print("\n--- Wheels ---\n")
all_labels = all_tables[8].find_all('td', {'class': 'label'})
all_values = all_tables[8].find_all('td', {'class': 'value'})
for label, value in zip(all_labels, all_values):
    print('{}: {}'.format(label.text, value.text))

Result:

--- Engine ---

Engine Type: Piston
Valves/Ports per Cylinder: 4
Engine Location: Front
Compression ratio: 10.6
Engine Size (cc) (cc): 1799
Engine Code: R18Z1
Induction: Aspirated
Power: 104kW @ 6500rpm
Engine Configuration: In-line
Torque: 174Nm @ 4300rpm
Cylinders: 4
Power to Weight Ratio (W/kg): 82.6
Camshaft: OHC with VVT & Lift

--- Fuel ---

Fuel Type: Petrol - Unleaded ULP
Fuel Average Distance (km): 734
Fuel Capacity (L): 47
Fuel Maximum Distance (km): 940
RON Rating: 91
Fuel Minimum Distance (km): 540
Fuel Delivery: Multi-Point Injection
CO2 Emission Combined (g/km): 148
Method of Delivery: Electronic Sequential
CO2 Extra Urban (g/km): 117
Fuel Consumption Combined (L/100km): 6.4
CO2 Urban (g/km): 202
Fuel Consumption Extra Urban (L/100km): 5
Emission Standard: Euro 5
Fuel Consumption Urban (L/100km): 8.7

--- Steering ---

Steering: Rack and Pinion

--- Wheels ---

Rim Material: Alloy
Front Tyre Description: 215/55 R16
Front Rim Description: 16x7.0
Rear Tyre Description: 215/55 R16
Rear Rim Description: 16x7.0

This assumes that all pages have the same tables in the same order, so the indexes stay the same.
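
For completeness, the same index-based approach can be sketched with raw xpath via lxml (a rough, untested equivalent of the BeautifulSoup code above; the table index is assumed to match):

import requests
from lxml import html

url = 'https://www.redbook.com.au/cars/details/2019-honda-civic-50-years-edition-auto-my19/SPOT-ITM-524208/'

text = requests.get(url, headers={'User-Agent': 'Mozilla/5.0'}).text
tree = html.fromstring(text)

# first xpath: all tables with class "prop-list"
all_tables = tree.xpath('//table[@class="prop-list"]')

# select one table by index (8 = Wheels in the code above, assumed here),
# then run relative xpaths against that single table
wheels = all_tables[8]
labels = wheels.xpath('.//td[@class="label"]/text()')
values = wheels.xpath('.//td[@class="value"]/text()')

for label, value in zip(labels, values):
    print('{}: {}'.format(label.strip(), value.strip()))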

Upvotes: 0

Andrej Kesely

Reputation: 195573

Targeting these two tables is a little bit "tricky", because they contain other tables. I used the CSS selector table:has(td:contains("Rim Material")):has(table) tr:not(:has(tr)) to target the first table, and the same selector with the string "Steering" to target the second table:

from bs4 import BeautifulSoup
import requests

url = 'https://www.redbook.com.au/cars/details/2019-honda-civic-50-years-edition-auto-my19/SPOT-ITM-524208/'

headers = {'User-Agent':'Mozilla/5.0'}
soup = BeautifulSoup(requests.get(url, headers=headers).text, 'lxml')

rows = []
for tr in soup.select('table:has(td:contains("Rim Material")):has(table) tr:not(:has(tr)), table:has(td:contains("Steering")):has(table) tr:not(:has(tr))'):
    rows.append([td.get_text(strip=True) for td in tr.select('td')])

for label, text in rows:
    print('{: <30}: {}'.format(label, text))

Prints:

Steering                      : Rack and Pinion
Rim Material                  : Alloy
Front Tyre Description        : 215/55 R16
Front Rim Description         : 16x7.0
Rear Tyre Description         : 215/55 R16
Rear Rim Description          : 16x7.0

Edit: For getting data from multiple URLs:

from bs4 import BeautifulSoup
import requests

headers = {'User-Agent':'Mozilla/5.0'}

urls = ['https://www.redbook.com.au/cars/details/2019-honda-civic-50-years-edition-auto-my19/SPOT-ITM-524208/',
        'https://www.redbook.com.au/cars/details/2019-genesis-g80-38-ultimate-auto-my19/SPOT-ITM-520697/']

for url in urls:
    soup = BeautifulSoup(requests.get(url, headers=headers).text, 'lxml')

    rows = []
    for tr in soup.select('table:has(td:contains("Rim Material")):has(table) tr:not(:has(tr)), table:has(td:contains("Steering")):has(table) tr:not(:has(tr))'):
        rows.append([td.get_text(strip=True) for td in tr.select('td')])

    print('{: <30}: {}'.format('Title', soup.h1.text))
    print('-' * (len(soup.h1.text.strip())+32))
    for label, text in rows:
        print('{: <30}: {}'.format(label, text))

    print('*' * 80)

Prints:

Title                         : 2019 Honda Civic 50 Years Edition Auto MY19
---------------------------------------------------------------------------
Steering                      : Rack and Pinion
Rim Material                  : Alloy
Front Tyre Description        : 215/55 R16
Front Rim Description         : 16x7.0
Rear Tyre Description         : 215/55 R16
Rear Rim Description          : 16x7.0
********************************************************************************
Title                         : 2019 Genesis G80 3.8 Ultimate Auto MY19
-----------------------------------------------------------------------
Steering                      : Rack and Pinion
Rim Material                  : Alloy
Front Tyre Description        : 245/40 R19
Front Rim Description         : 19x8.5
Rear Tyre Description         : 275/35 R19
Rear Rim Description          : 19x9.0
********************************************************************************
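
Since the question asks for the values only (no labels), the rows list built in the code above can also be turned into a plain dict (a small sketch building on that list):

specs = dict(rows)             # label -> value
print(list(specs.values()))    # only the values, no labels
print(specs['Rim Material'])   # or look up a single value by its label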

Upvotes: 2

Aayush Mahajan

Reputation: 4033

Not a perfect solution, but if you are willing to rummage through the data a little, I'd suggest using pandas' read_html function for this.

pandas' read_html extracts all HTML tables in a webpage and converts them into a list of pandas DataFrames.

This code seems to get all 82 table elements in the page you linked:

import pandas as pd
import requests

url = "https://www.redbook.com.au/cars/details/2019-honda-civic-50-years-edition-auto-my19/SPOT-ITM-524208/"

# Need to add a fake header to avoid a 403 Forbidden error
header = {
        "User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.75 Safari/537.36",
        "X-Requested-With": "XMLHttpRequest"
        }

resp = requests.get(url, headers=header)

table_dataframes = pd.read_html(resp.text)


for i, df in enumerate(table_dataframes):
    print(f"================Table {i}=================\n")
    print(df)

This will print out all 82 tables present in the webpage. The limitation is that you will have to look for the tables you are interested in manually and manipulate them accordingly. It seems that tables 71 and 74 are the ones you wanted.

This method would need added intelligence to make it feasible for automation.
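
For example, one rough way to add that intelligence would be to filter the DataFrames by their content instead of hard-coding the indexes (a sketch that builds on table_dataframes from the code above; the labels in wanted are just examples):

# Keep only the DataFrames that mention one of the labels we care about
wanted = ("Rim Material", "Steering")

for i, df in enumerate(table_dataframes):
    cells = df.astype(str).values.ravel()
    if any(label in cell for cell in cells for label in wanted):
        print(f"================Table {i}=================\n")
        print(df)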

Upvotes: 2
