vesuvius

Reputation: 435

How to save all the scraped data from a website in a pandas dataframe?

I've written code that scrapes contact information from a webpage using BeautifulSoup and a pre-built library, CommonRegex, which is essentially a collection of regular expressions for extracting US address information. While I'm able to extract the information as a list and convert it into a pandas DataFrame, I'm not able to save all the values present in the list. This is the code I've written:

import pandas as pd
from commonregex import CommonRegex
from urllib.request import urlopen
from bs4 import BeautifulSoup

url = 'https://www.thetaxshopinc.com/pages/contact-tax-accountant-brampton'
html = urlopen(url)
soup = BeautifulSoup(html, 'lxml')

for link in soup.find_all('p'):
    df = CommonRegex()
    df1 = df.street_addresses(link.get_text())
    df2 = df.phones(link.get_text())
    df3 = df.emails(link.get_text())
    for i in df1:
        dfr = pd.DataFrame([i], columns = ['Address'])
    for j in df2:
        dfr1 = pd.DataFrame([j], columns = ['Phone_no'])
        dfr1['Phone_no'] = dfr1['Phone_no'].str.cat(sep=', ')
        dfr1.drop_duplicates(inplace = True)
    for k in df3:
        dfr2 = pd.DataFrame([k], columns = ['Email'])

dfc = pd.concat([dfr, dfr1, dfr2], axis = 1)

This is the result I'm getting:

(screenshot of the resulting DataFrame)

But since the regular expression extracts three values for the phone number,

(screenshot of the three extracted phone numbers)

the result should instead look like this:

(screenshot of the expected DataFrame with all three phone numbers)

I have no clue how to solve this issue; it would be great if you could help.

Upvotes: 2

Views: 346

Answers (1)

quest

Reputation: 3926

This should do it. The problem with your code is that each pass through the inner for loops creates a brand-new single-row DataFrame, so only the last match survives. Collect all the matches into lists first, then build the DataFrame once at the end:

import pandas as pd
from commonregex import CommonRegex
from urllib.request import urlopen
from bs4 import BeautifulSoup

url = 'https://www.thetaxshopinc.com/pages/contact-tax-accountant-brampton'
html = urlopen(url)
soup = BeautifulSoup(html, 'lxml')

dict_data = {'address': [], 'phone_no': [], 'email': []}

crex = CommonRegex()

for link in soup.find_all('p'):

    # Extract every match from this paragraph's text
    str_add = crex.street_addresses(link.get_text())
    phone = crex.phones(link.get_text())
    email = crex.emails(link.get_text())

    if str_add:
        dict_data['address'].append(str_add[0])
    if phone:
        # Join all phone numbers found into one comma-separated string
        dict_data['phone_no'].append(', '.join(phone))
    if email:
        dict_data['email'].append(email[0])

df = pd.DataFrame(dict_data)
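
One caveat: pd.DataFrame(dict_data) requires all three lists to end up the same length, and on a page where some p tags yield, say, a phone number but no email, they can diverge and pandas will raise a ValueError. A minimal, more defensive sketch (the contacts.csv filename is just an example) that pads shorter columns with NaN and then saves the result to disk, since saving is what your title asks about:

import pandas as pd

# Wrapping each list in a Series lets pandas align them by index and
# pad any shorter column with NaN instead of raising ValueError
df = pd.DataFrame({key: pd.Series(vals) for key, vals in dict_data.items()})

# Write the scraped contacts to a CSV file (filename is an example)
df.to_csv('contacts.csv', index=False)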

Upvotes: 2
