user3290972

Reputation: 11

Python + BeautifulSoup Exporting to CSV

I'm having a bit of trouble automatically scraping data in a table from a Wikipedia article. First I was getting an encoding error. I specified UTF-8 and the error went away, but the scraped data doesn't display a lot of the characters correctly. You will be able to tell from the code that I am a complete newbie:

from bs4 import BeautifulSoup
import urllib2

wiki = "http://en.wikipedia.org/wiki/Anderson_Silva"
header = {'User-Agent': 'Mozilla/5.0'} #Needed to prevent 403 error on Wikipedia
req = urllib2.Request(wiki,headers=header)
page = urllib2.urlopen(req)
soup = BeautifulSoup(page)

Result = ""
Record = ""
Opponent = ""
Method = ""
Event = ""
Date = ""
Round = ""
Time = ""
Location = ""
Notes = ""

table = soup.find("table", { "class" : "wikitable sortable" })

f = open('output.csv', 'w')

for row in table.findAll("tr"):
    cells = row.findAll("td")
    #For each "tr", assign each "td" to a variable.
    if len(cells) == 10:
        Result = cells[0].find(text=True)
        Record = cells[1].find(text=True)
        Opponent = cells[2].find(text=True)
        Method = cells[3].find(text=True)
        Event = cells[4].find(text=True)
        Date = cells[5].find(text=True)
        Round = cells[6].find(text=True)
        Time = cells[7].find(text=True)
        Location = cells[8].find(text=True)
        Notes = cells[9].find(text=True)

        write_to_file = Result + "," + Record + "," + Opponent + "," + Method + "," + Event + "," + Date + "," + Round + "," + Time + "," + Location + "," + Notes + "\n"
        write_to_unicode = write_to_file.encode('utf-8')
        print write_to_unicode
        f.write(write_to_unicode)

f.close()

Upvotes: 1

Views: 4965

Answers (1)

Hai Vu

Reputation: 40688

As pswaminathan pointed out, using the csv module will help greatly. Here is how I do it:

import csv

table = soup.find('table', {'class': 'wikitable sortable'})
with open('out2.csv', 'w') as f:
    csvwriter = csv.writer(f)
    for row in table.findAll('tr'):
        cells = [c.text.encode('utf-8') for c in row.findAll('td')]
        if len(cells) == 10: 
            csvwriter.writerow(cells)
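If you are on Python 3, urllib2 and the print statement are gone and csv.writer expects str rather than bytes, so the manual .encode('utf-8') calls go away. Here is a minimal sketch of the same CSV-writing step, with hypothetical sample rows standing in for the scraped cells (the file name out3.csv is also just an example):

```python
import csv
import os
import tempfile

# Hypothetical rows standing in for the cells scraped from the wikitable
rows = [
    ["Win", "Rich Franklin", "UFC 64", "São Paulo, Brazil"],
    ["Win", "Travis Lutter", "UFC 67", "Las Vegas, Nevada"],
]

path = os.path.join(tempfile.gettempdir(), "out3.csv")

# newline='' stops the csv module from doubling line endings on Windows;
# encoding='utf-8' replaces the manual .encode('utf-8') calls from Python 2
with open(path, "w", newline="", encoding="utf-8") as f:
    csv.writer(f).writerows(rows)

# Read the file back to confirm the non-ASCII text survived the round trip
with open(path, newline="", encoding="utf-8") as f:
    print(list(csv.reader(f)))
```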

Discussion

  • Using the csv module, I created a csvwriter object connected to my output file.
  • By using the with statement, I don't need to worry about closing the output file when I'm done: it is closed automatically at the end of the with block.
  • In my code, cells is a list of UTF-8-encoded text extracted from the td tags within a tr tag.
  • I used the construct c.text, which is more concise than c.find(text=True) and, unlike it, collects all of the text in the cell rather than just the first string.
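The csv module also handles quoting for you, which the hand-rolled comma joining in the question does not: a cell such as a location that itself contains a comma silently splits into extra columns. A small illustration (the sample row is made up):

```python
import csv
import io

row = ["Win", "Submission (rear-naked choke)", "London, England"]

# Naive joining loses the column structure: the comma inside the
# location field is indistinguishable from a field separator
naive = ",".join(row)
print(naive.split(","))  # 4 pieces, not 3

# csv.writer quotes the offending field, so a csv.reader recovers 3 columns
buf = io.StringIO()
csv.writer(buf).writerow(row)
recovered = next(csv.reader(io.StringIO(buf.getvalue())))
print(recovered)
```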

Upvotes: 1
