Aiden

Reputation: 319

BeautifulSoup 4 Filtering Python 3 Issue

Well, I have been looking at this for 6 hours and can't figure it out. I want to use BeautifulSoup to filter data from a webpage, but I can't get .contents or get_text() to work, and I have no clue where I am going wrong or how to do another filter on the first pass. I can get to the fieldset tag but can't narrow down to the <p> tags to get the data. Sorry if this is a simple issue; I only started Python yesterday and started (trying, at least) web scraping this morning.

Entire Code:

from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from openpyxl import Workbook
import bs4 as bs
import math


book = Workbook()
sheet = book.active

i=0

#Change this value to your starting tracking number
StartingTrackingNumber=231029883

#Change this value to increase or decrease the number of tracking numbers you want to search overall
TrackingNumberCount = 4

#Number of Tracking Numbers Searched at One Time
QtySearch = 4


#TrackingNumbers=["Test","Test 2"]


for i in range(0,TrackingNumberCount):
    g=i+StartingTrackingNumber
    sheet.cell(row=i+1,column=1).value = 'RN' + str(g) + 'CA,'


TrackingNumbers = []
for col in sheet['A']:
    TrackingNumbers.append(col.value)

MaxRow = sheet.max_row
MaxIterations = math.ceil(MaxRow / QtySearch)
#print(MaxIterations)

RowCount = 0
LastTrackingThisPass = QtySearch

for RowCount in range(0, MaxIterations):  # range(1,MaxRow):
    FirstTrackingThisPass = (RowCount)*QtySearch
    x = TrackingNumbers[FirstTrackingThisPass:LastTrackingThisPass]
    LastTrackingThisPass+=QtySearch
    driver = webdriver.Safari()
    driver.set_page_load_timeout(20)
    driver.get("https://www.canadapost.ca/cpotools/apps/track/personal/findByTrackNumber?execution=e1s1")

    driver.find_element_by_xpath('//*[contains(@id, "trackNumbers")]').send_keys(x)
    driver.find_element_by_xpath('//*[contains(@id, "submit_button")]').send_keys(chr(13))
    driver.set_page_load_timeout(3000)
    WebDriverWait(driver,30).until(EC.presence_of_element_located((By.ID, "noResults_modal")))
    SourceCodeTest = driver.page_source

#print(SourceCodeTest)

Soup = bs.BeautifulSoup(SourceCodeTest, "lxml")  # or "html.parser"


z = 3

#for z in range (1,5):
#    t = str(z)
#    NameCheck = "trackingNumber" + t
##FindTrackingNumbers = Soup.find_all("div", {"id": "trackingNumber3"})
#    FindTrackingNumbers = Soup.find_all("div", {"id": NameCheck})
#    print(FindTrackingNumbers)

Info = Soup.find_all("fieldset", {"class": "trackhistoryitem"}, "strong")
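# (find_all() returns a ResultSet, i.e. a list of Tags, not a single Tag)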

print(Info.get_text())

Desired Output:

RN231029885CA N/A

RN231029884CA N/A

RN231029883CA 2017/04/04

Sample of the HTML I am trying to parse:

<fieldset class="trackhistoryitem">

                    <p><strong>Tracking No. </strong><br><input type="hidden" name="ID_RN231029885CA" value="false">RN231029885CA
                </p>




                   <p><strong>Date / Time   </strong><br>


                            <!--h:outputText value="N/A" rendered="true"/>
                            <h:outputText value="N/A - N/A" rendered="false"/>

                            <h:outputText value="N/A" rendered="false"/-->N/A
                    </p>



                <p><strong>Description  </strong><br><span id="tapListResultForm:tapResultsItems:1:trk_rl_div_1">

Upvotes: 0

Views: 56

Answers (1)

Tony

Reputation: 1290

Using .get_text() I got back this long ugly string:

'\nTracking No. RN231029885CA\n                \nDate / Time   \nN/A\n                    \nDescription  '

So with some of Python's string functions:

objects = []
for each in soup.find_all("fieldset"):
    each = each.get_text().split("\n")  # split the ugly string up
    each = [each[1][-13:], each[4]]  # grab the parts you want, remove the extra words
    objects.append(each)
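
Then printing those pairs (continuing from the loop above) gives output in the shape you asked for:

for number, date in objects:
    print(number, date)  # e.g. RN231029885CA N/A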

Note: this assumes all tracking numbers are 13 characters long; if not, you'll need a regex or some other creative method to extract them.
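
For example, a minimal sketch of the regex route (the two-letters-digits-"CA" pattern is just a guess from your sample numbers, so adjust it to the real format):

import re

# Assumed pattern: two letters, a run of digits, then "CA" (matches the samples above)
pattern = re.compile(r"[A-Z]{2}\d+CA")

for each in soup.find_all("fieldset"):
    match = pattern.search(each.get_text())
    if match:
        print(match.group(0))  # e.g. RN231029885CA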

Upvotes: 1
