Santi

Reputation: 13

How to scrape Table in several webpages using Beautiful Soup and Pandas?

I'm new to Python and bs4. I've spent hours trying to scrape a table that spans several webpages using Beautiful Soup and pandas. Scraping 2 pages worked, but when I attempted all 13 webpages I ran into trouble: after changing the range from 2 to 13, the code produces neither a DataFrame nor a CSV file. What am I doing incorrectly?

import requests
import pandas as pd
from bs4 import BeautifulSoup

dfs = []

for page in range(13):
    http = "http://websitexample/Records?year=2020&page={}".format(page + 1)

    url = requests.get(http)
    soup = BeautifulSoup(url.text, "lxml")
    table = soup.find('table')
    df_list = pd.read_html(url.text)
    df = pd.concat(df_list)

    # collect the href of every <a> found in the table's cells
    links = []
    for tr in table.find_all("tr"):
        trs = tr.find_all("td")
        for each in trs:
            try:
                link = each.find('a')['href']
                links.append(link)
            except TypeError:  # cell has no <a> tag, so skip it
                pass

    df['Link'] = links
    dfs.append(df)

final_df = pd.concat(dfs)
final_df.to_csv("NewFileAll13.csv", index=False, encoding='utf-8-sig')

I get the error message:

ValueError: Length of values does not match length of index.
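If it helps, the same error can be reproduced with a toy DataFrame, which makes me suspect the df['Link'] = links assignment:

import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3]})
# assigning a list shorter than the DataFrame raises the same ValueError
df['Link'] = ['x', 'y']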

I would greatly appreciate any advice. Thank you!

Upvotes: 1

Views: 64

Answers (1)

Andrej Kesely

Reputation: 195438

To download all data + links from all pages, you can use this example:

import requests
import pandas as pd
from bs4 import BeautifulSoup

url = 'http://reactwarn.floridajobs.org/WarnList/Records'
params = {
    'year': 2020,
    'page': 1
}

all_data = []
for params['page'] in range(1, 14):
    print('Page {}..'.format(params['page']))

    soup = BeautifulSoup(requests.get(url, params=params).content, 'lxml')

    # each <tr> is one record; the last <td> holds the attachment link,
    # so drop its text and append the href instead ('' when a row has no link)
    for row in soup.select('tbody tr'):
        tds = [td.get_text(strip=True, separator='\n') for td in row.select('td')][:-1] + [row.a['href'] if row.a else '']
        all_data.append(tds)

df = pd.DataFrame(all_data, columns=['Company Name', 'State Notification Date', 'Layoff Date', 'Employees Affected', 'Industry', 'Attachment'])
print(df)
df.to_csv('data.csv', index=False)

Prints:

                                           Company Name  ...                                         Attachment
0     TrueCore Behavioral Solutions\n5050 N.E. 168th...  ...  /WarnList/Download?file=%5C%5Cdeo-wpdb005%5CRe...
1     Cuba Libre Orlando, LLC t/a Cuba Libre Restaur...  ...  /WarnList/Download?file=%5C%5Cdeo-wpdb005%5CRe...
2     Hyatt Regency Orlando\n9801 International Dr.O...  ...  /WarnList/Download?file=%5C%5Cdeo-wpdb005%5CRe...
3     ABM. Inc.\nNova Southeastern University3301 Co...  ...  /WarnList/Download?file=%5C%5Cdeo-wpdb005%5CRe...
4     Newport Beachside Resort\n16701 Collins Avenue...  ...  /WarnList/Download?file=%5C%5Cdeo-wpdb005%5CRe...
...                                                 ...  ...                                                ...
1251  P.F. Chang's China Bistro\n3597 S.W. 32nd Ct.,...  ...  /WarnList/Download?file=%5C%5Cdeo-wpdb005%5CRe...
1252  P.F. Chang's China Bistro\n11361 N.W. 12th St....  ...  /WarnList/Download?file=%5C%5Cdeo-wpdb005%5CRe...
1253  P.F. Chang's China Bistro\n8888 S.W. 136th St....  ...  /WarnList/Download?file=%5C%5Cdeo-wpdb005%5CRe...
1254  P.F. Chang's China Bistro\n17455 Biscayne Blvd...  ...  /WarnList/Download?file=%5C%5Cdeo-wpdb005%5CRe...
1255  Grand Hyatt Tampa Bay\n2900 Bayport DriveTAMPA...  ...  /WarnList/Download?file=%5C%5Cdeo-wpdb005%5CRe...

[1256 rows x 6 columns]

and saves data.csv (screenshot from LibreOffice):

[screenshot: data.csv opened in LibreOffice]
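As for your original version, it most likely raises the ValueError because links only gets an entry when a cell actually contains an <a> tag, so on pages where some rows have no attachment, the list ends up shorter than the DataFrame and df['Link'] = links fails. If you'd rather keep your pd.read_html approach, append a placeholder for link-less rows so the lengths stay in sync. A minimal sketch (it assumes at most one link per data row):

links = []
for tr in table.find_all('tr'):
    if not tr.find_all('td'):             # skip the header row (<th> cells only)
        continue
    a = tr.find('a')
    links.append(a['href'] if a else '')  # placeholder when the row has no link

df['Link'] = links                        # lengths now match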

Upvotes: 1
