Reputation: 2293
So I'm new to Python and my goal is to convert several large dbf files into csv files. I have looked at different code samples and don't understand a lot of the parts. The code below runs for data1.dbf but not for data2.dbf, where I get an error stating:
UnicodeDecodeError: 'ascii' codec can't decode byte...
I did look into encoding for dbfread, but its docs say an encoding is usually not needed... The other part is that I need to get these large dbfs into csv: if I use dbfread, I don't know enough to write the records out to a csv file.
import sys
import csv
from dbfread import DBF

file = "C:/Users/.../Documents/.../data2.dbf"
table = DBF(file)
writer = csv.writer(sys.stdout)
writer.writerow(table.field_names)
for record in table:
    writer.writerow(list(record.values()))
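For what it's worth, dbfread's DBF constructor does accept an encoding override even though its docs say one is usually detected from the file header. A minimal sketch, with cp1252 as a pure guess at the legacy Windows code page:

from dbfread import DBF

# cp1252 is only a guess at the source code page;
# char_decode_errors="replace" keeps one bad byte from aborting the run
table = DBF(file, encoding="cp1252", char_decode_errors="replace")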
Here is another try using the dbf library.
import sys
import dbf  # instead of dbfpy

for table in sys.argv[1:]:
    dbf.export(table, header=True)
This one is run from the command prompt with "python dbf2csv_EF.py data1.dbf" and produces a different error with both of my dbf files:
...AttributeError: 'str' object has no attribute '_meta'
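From the traceback I'm guessing that dbf.export() wants an opened Table object rather than the plain filename string it's getting from sys.argv. A minimal, untested sketch of what I think that would look like with the dbf package:

import sys
import dbf

for filename in sys.argv[1:]:
    table = dbf.Table(filename)   # wrap the path in a Table object first
    table.open()                  # some versions require a mode here, e.g. dbf.READ_ONLY
    dbf.export(table, header=True)
    table.close()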
Upvotes: 2
Views: 2161
Reputation: 168903
Since you're on Windows and are attempting to write to sys.stdout, I think (part of) your first problem is that the Windows console is not very Unicode savvy, and you should write to files instead.
Assuming that's the case, something like
import sys
import csv
from dbfread import DBF

for file in sys.argv[1:]:
    csv_file = file + ".csv"
    table = DBF(file, encoding="UTF-8")
    # newline="" stops the csv module from writing blank lines on Windows;
    # an explicit output encoding avoids depending on the locale default
    with open(csv_file, "w", newline="", encoding="utf-8") as outf:
        writer = csv.writer(outf)
        writer.writerow(table.field_names)
        for record in table:
            writer.writerow(list(record.values()))
might do the trick; running the script with e.g. "python thatscript.py foo.dbf" should have you end up with "foo.dbf.csv".
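Since your files are large, note that the loop above writes one record at a time instead of loading the whole table into memory, so memory use should stay roughly flat no matter how big the DBF is. And if UTF-8 turns out to be the wrong guess for the source data, a legacy Windows code page such as cp1252 is often closer to what old dbf files actually contain.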
Upvotes: 3