tardos93

Reputation: 233

Python web-scraping into csv

I did some web-scraping and got a table that I want to write to a CSV file.

When I try it, I get this message:

Traceback (most recent call last):
  File "C:/Python27/megoldas3.py", line 27, in <module>
    file.write(bytes(header,encoding="ascii",errors="ignore"))
TypeError: str() takes at most 1 argument (3 given)

What's wrong with this code? I'm using Python 2.7.13.

import urllib2
from bs4 import BeautifulSoup
import csv
import os

out=open("proba.csv","rb")
data=csv.reader(out)

def make_soup(url):
    thepage = urllib2.urlopen(url)
    soupdata = BeautifulSoup(thepage, "html.parser")
    return soupdata

maindatatable=""
soup = make_soup("https://www.mnb.hu/arfolyamok")

for record in soup.findAll('tr'):
    datatable=""
    for data in record.findAll('td'):
        datatable=datatable+","+data.text
    maindatatable = maindatatable + "\n" + datatable[1:]

header = "Penznem,Devizanev,Egyseg,Penznemforintban"
print maindatatable

file = open(os.path.expanduser("proba.csv"),"wb")
file.write(bytes(header,encoding="ascii",errors="ignore"))
file.write(bytes(maindatatable,encoding="ascii",errors="ignore"))
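As an aside, the script imports csv but never uses it for writing; collecting each row as a list and handing the rows to csv.writer avoids both the manual comma-joining and most of the encoding juggling. A minimal sketch of just the writing step, with placeholder rows standing in for the scraped td values (shown with Python 3's io.StringIO; under Python 2 you would write to a file opened in "wb" mode instead):

```python
import csv
import io

# Placeholder rows standing in for the values scraped from each <tr>.
header = ["Penznem", "Devizanev", "Egyseg", "Penznemforintban"]
rows = [
    ["EUR", "euro", "1", "310,30"],
    ["USD", "amerikai dollar", "1", "262,04"],
]

# csv.writer takes care of delimiters, quoting (note the comma inside
# "310,30") and line endings, so no manual string assembly is needed.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(header)
writer.writerows(rows)
print(buf.getvalue())
```

Fields that themselves contain a comma, like the Hungarian decimal "310,30", are quoted automatically, which the hand-rolled string concatenation above does not do.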

Upvotes: 0

Views: 968

Answers (4)

Arun

Reputation: 1179

I think this will work for you. Just remove encoding="ascii", errors="ignore" from the bytes() call and use .encode('utf-8') on the strings instead:


import urllib2
from bs4 import BeautifulSoup
import csv
import os

out=open("proba.csv","rb")
data=csv.reader(out)

def make_soup(url):
    thepage = urllib2.urlopen(url)
    soupdata = BeautifulSoup(thepage, "html.parser")
    return soupdata

maindatatable=""
soup = make_soup("https://www.mnb.hu/arfolyamok")

for record in soup.findAll('tr'):
    datatable=""
    for data in record.findAll('td'):
        datatable=datatable+","+data.text
    maindatatable = maindatatable + "\n" + datatable[1:]

header = "Penznem,Devizanev,Egyseg,Penznemforintban"
print maindatatable

file = open(os.path.expanduser("proba.csv"),"wb")
file.write(header.encode('utf-8').strip())
file.write(maindatatable.encode('utf-8').strip())

Upvotes: 0

stx101

Reputation: 271

This should work. In Python 2 the encode() call already returns a byte string (str), so the outer bytes() wrapper is redundant and you can write the result directly:

file.write(header.encode('ascii','ignore'))
file.write(maindatatable.encode('ascii','ignore'))
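One caveat worth knowing: errors='ignore' silently drops every character that has no ASCII equivalent, so the accented Hungarian names in this table lose letters, while UTF-8 keeps them. A quick illustration using a u'' literal, which behaves the same in 2.7 and 3:

```python
# u'' literals work identically in Python 2.7 and Python 3.
name = u"Devizanév"

ascii_bytes = name.encode("ascii", "ignore")  # the accented é is dropped
utf8_bytes = name.encode("utf-8")             # é survives as a two-byte sequence

print(ascii_bytes)
print(utf8_bytes)
```

So ascii/ignore is fine only if mangled currency names are acceptable; otherwise UTF-8 is the safer choice.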

Upvotes: 0

BoboDarph

Reputation: 2891

How about encoding your strings before trying to write them?

utf8_str = maindatatable.encode('utf8')
file.write(utf8_str)

Also, don't forget to call file.close() when you're done, or the data may never be flushed to disk.
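Or let a with block do the closing for you; the file is closed automatically even if a write raises. A minimal sketch of the writing step under that pattern (using a temp path here; in the question it would be "proba.csv"):

```python
import os
import tempfile

# Placeholder standing in for the scraped table text.
maindatatable = u"EUR,euro,1,310,30"

path = os.path.join(tempfile.mkdtemp(), "proba.csv")

# The with block closes (and flushes) the file as soon as it exits,
# even if one of the writes raises an exception.
with open(path, "wb") as f:
    f.write(maindatatable.encode("utf8"))
```

After the block, the data is guaranteed to be on disk with no explicit close() call.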

Upvotes: 1

BoarGules

Reputation: 16942

The keyword arguments are the problem, but they don't belong to file.write() either: in Python 2, bytes is simply an alias for str, which takes at most one positional argument. That is exactly what the TypeError says. The bytes(s, encoding=..., errors=...) form only exists in Python 3. In Python 2, encode the string instead:

file.write(header.encode("ascii", "ignore"))

Upvotes: 1
