WIT

Reputation: 1083

Trying to extract a dynamic table (URL doesn't change) with Selenium/BeautifulSoup

I've been trying to extract a table that I generate by using ChromeDriver to automate the input and an anti-captcha service to get past the reCAPTCHA. I saw an example where someone used BeautifulSoup after the table was generated, so that's the approach I tried.

It's a multi-page table, but I was just trying to get the first page before figuring out how to click through the other pages. I'm not sure if I can use BeautifulSoup here, because when I run the code below I only get the first row, "No properties to display." That's what would appear if there were no search results, but there are.
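Since the pager markup isn't shown anywhere in this question, the following sketch for clicking through the pages is only a rough outline, and the next-button locator in it is hypothetical:

from selenium.common.exceptions import NoSuchElementException
import time

def scrape_all_pages(driver, parse_page):
    # Parse the current page, then keep clicking "next" until the
    # button disappears. The locator is HYPOTHETICAL; adjust it to
    # the site's actual pager markup. Note that an implicit wait on
    # the driver will delay the final "no more pages" check.
    while True:
        parse_page(driver.page_source)
        try:
            next_button = driver.find_element_by_xpath('//a[contains(text(), "Next")]')
        except NoSuchElementException:
            break  # no more pages
        next_button.click()
        time.sleep(1)  # or replace with an explicit wait for the new rows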

I can't embed an image here because my rank isn't high enough (sorry, I'm new to this; I spent hours trying to figure this out before posting), but if you visit the website and search "Al" (or any input) you can see the table HTML: https://claimittexas.org/app/claim-search

Here is my code:

from selenium import webdriver
from bs4 import BeautifulSoup
from python_anticaptcha import AnticaptchaClient, NoCaptchaTaskProxylessTask
import time

parsed_table_data = []
url = "https://claimittexas.org/app/claim-search"
driver = webdriver.Chrome()
driver.implicitly_wait(15)
driver.get(url)

# Fill in the last-name search field
lastNameField = driver.find_element_by_xpath('//input[@id="lastName"]')
lastNameField.send_keys('Al')

# Solve the reCAPTCHA through the Anti-Captcha service
api_key = 'MY_API_KEY'  # my Anti-Captcha API key
site_key = '6LeQLyEUAAAAAKTwLC-xVC0wGDFIqPg1q3Ofam5M'  # grab from site
client = AnticaptchaClient(api_key)
task = NoCaptchaTaskProxylessTask(url, site_key)
job = client.createTask(task)
print("Waiting for solution by Anticaptcha workers")
job.join()

# Receive the response and inject it into the webpage
response = job.get_solution_response()
print("Received solution", response)
driver.execute_script('document.getElementById("g-recaptcha-response").innerHTML = "%s"' % response)
time.sleep(1)  # wait a moment for the script to execute (just in case)

# Press the submit button, then parse the results table
driver.find_element_by_xpath('//button[@type="submit" and @class="btn-std"]').click()
time.sleep(1)
html = driver.page_source
soup = BeautifulSoup(html, "lxml")
table = soup.find("table", {"class": "claim-property-list"})
table_body = table.find('tbody')
for row in table_body.find_all('tr'):
    print(row)
    for col in row.find_all('td'):
        print(col.text.strip())
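Note that the fixed one-second sleep after clicking submit may grab the page source before the results have rendered. A minimal sketch of an explicit wait instead, assuming the results land in the claim-property-list table and that the hidden placeholder row is always present (so more than one row means real results):

from selenium.webdriver.support.ui import WebDriverWait

# Wait up to 30 seconds until the results table holds more than just
# the hidden "No properties to display." placeholder row, then parse.
WebDriverWait(driver, 30).until(
    lambda d: len(d.find_elements_by_css_selector(
        "table.claim-property-list tbody tr")) > 1
)
html = driver.page_source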

Upvotes: 3

Views: 755

Answers (1)

Andrei

Reputation: 5637

You are getting "No properties to display." because the first tr in the table body is a hidden placeholder row containing exactly that text:

[screenshot: the hidden first table row with the "No properties to display." message]

Instead, you have to start iterating from the second element:

//tbody/tr[2]/td[2]
//tbody/tr[2]/td[3]
//tbody/tr[2]/td[4]
...
//tbody/tr[3]/td[2]
//tbody/tr[3]/td[3]
//tbody/tr[3]/td[4]
...

So you have to specify the start index for your iteration, like this:

rows = driver.find_elements_by_xpath("//tbody/tr")
for row in rows[1:]:  # skip the hidden "No properties to display." row
    print(row.text)  # prints the whole row
    for col in row.find_elements_by_xpath('td')[1:]:  # skip the first cell, which holds the CLAIM button
        print(col.text.strip())

The code above has the following output:

CLAIM  # this is the button's text
37769557 1ST TEXAS LANDSCAPIN 6522 JASMINE ARBOR LANE HOUSTON TX 77088 MOTEL 6 OPERATING LP ACCOUNTS PAYABLE $351.00 2010
37769557
1ST TEXAS LANDSCAPIN
6522 JASMINE ARBOR LANE
HOUSTON
TX
77088
MOTEL 6 OPERATING LP
ACCOUNTS PAYABLE
$351.00
2010
CLAIM  # this is the button's text
38255919 24X7 APARTMENT FIND OF TEXAS 1818 MOSTON DR SPRING TX 77386 NOT DISCLOSED NOT DISCLOSED $88.76 2017
38255919
24X7 APARTMENT FIND OF TEXAS
1818 MOSTON DR
SPRING
...
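The same fix carries over to the BeautifulSoup version from the question: slice off the first row (and the button cell) there too. A minimal sketch, assuming the page source has already been parsed into soup as in the question's code:

rows = soup.find("table", {"class": "claim-property-list"}).find("tbody").find_all("tr")
for row in rows[1:]:  # skip the hidden "No properties to display." row
    # skip the first cell, which holds the CLAIM button
    print([col.text.strip() for col in row.find_all("td")[1:]])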

Upvotes: 1
