Reputation: 35
import pandas as pd
from csv import DictReader
import glob

with open(path) as f:  # path points at a single CSV file
    DictReader_obj = DictReader(f)
    for item in DictReader_obj:
        reader = dict(item)
        print(reader)
That code works fine for a single CSV; now I am trying to loop through several related CSVs. I initialized list_df = [] (and also tried list_df = {} and even df = [{}]):
for csvfile in csvfiles:
    with open(csvfile, 'r') as f:
        DictReader_obj = DictReader(f, fieldnames=['Symbols', 'Date', 'Open', 'High', 'Low', 'Close', 'Volumn'])
        for item in DictReader_obj:
            reader = dict(item)
            list_df.append(reader)
This is only giving me the contents of one CSV file. My type(reader) is dict and my type(list_df) is list.
What am I missing? Any suggestions? I have done my due diligence researching and reading, and consider myself still learning this art.
I was expecting the contents of all my CSVs in one dictionary. I understand I could use yFinance to grab the data, but I already have the CSVs and prefer to keep them local to avoid any yFinance rate-limit bans.
Example of the expected output:
Symbols  Date        Open     High     Close    Volumn
A        xx/xx/xxxx  xxx.xx   xxx.xx   xxx.xx   xxxxxxxxxx
         xx/xx/xxxx  xxxx.xx  xxxx.xx  xxxx.xx  xxxxxxxxxx
AA       xx/xx/xxxx  xxxx.xx  xxxx.xx  xxxx.xx  xxxxxxxxxx
         xx/xx/xxxx  xxx.xx   xxx.xx   xxx.xx   xxxxxxxxxx
         xx/xx/xxxx  xxxx.xx  xxxx.xx  xxxx.xx  xxxxxxxxxx
AAPL     xx/xx/xxxx  xxxx.xx  xxxx.xx  xxxx.xx  xxxxxxxxxx
         xx/xx/xxxx  xxxx.xx  xxxx.xx  xxxx.xx  xxxxxxxxxx
         xx/xx/xxxx  xxxx.xx  xxxx.xx  xxxx.xx  xxxxxxxxxx
....
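For reference, the accumulation pattern described above does collect rows from every file as long as list_df is created once, before the file loop (not inside it). A minimal self-contained sketch, using throwaway sample files since the original CSVs aren't available:

```python
import csv
import glob
import os
import tempfile

# Hypothetical sample data standing in for the real CSV files.
tmpdir = tempfile.mkdtemp()
for name, rows in [("A.csv", [["A", "01/02/2020", "100.10"]]),
                   ("AA.csv", [["AA", "01/02/2020", "50.50"]])]:
    with open(os.path.join(tmpdir, name), "w", newline="") as f:
        csv.writer(f).writerows(rows)

list_df = []  # initialized ONCE, outside the loop over files
for csvfile in sorted(glob.glob(os.path.join(tmpdir, "*.csv"))):
    with open(csvfile, newline="") as f:
        for item in csv.DictReader(f, fieldnames=["Symbols", "Date", "Close"]):
            list_df.append(dict(item))

print(len(list_df))  # rows from both files, not just the last one
```

If list_df were re-assigned inside the loop, each file would overwrite the previous one, which matches the symptom described.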
Upvotes: 1
Views: 3167
Reputation: 65
You could use pandas.concat to aggregate all your CSVs into a single DataFrame. In the following, path_files is a list of paths to your CSV files. This has the advantage of keeping your data sources in the index if needed.
import pandas as pd

reader = pd.concat(
    (pd.read_csv(path_file_i) for path_file_i in path_files),
    axis=0,
    keys=path_files
)
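To see what keys= buys you, here is a runnable sketch with two tiny generated CSVs (hypothetical stand-ins for your files): the originating file path ends up as the first level of the resulting MultiIndex.

```python
import os
import tempfile

import pandas as pd

# Build two small CSVs standing in for the real data files.
tmpdir = tempfile.mkdtemp()
path_files = []
for name, close in [("A.csv", 100.1), ("AA.csv", 50.5)]:
    p = os.path.join(tmpdir, name)
    pd.DataFrame({"Date": ["01/02/2020"], "Close": [close]}).to_csv(p, index=False)
    path_files.append(p)

# Concatenate all files; keys= records which file each row came from.
reader = pd.concat(
    (pd.read_csv(path_file_i) for path_file_i in path_files),
    axis=0,
    keys=path_files,
)

# reader.index.get_level_values(0) now holds the source file path per row.
```

From there, reader.loc[path_files[0]] gives back just the rows of the first file.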
If you want to keep it as a dictionary, you need to initialize reader as a dict and then assign each CSV to an item in your dictionary.
reader = {}
for item in DictReader_obj: 
    reader[item] = ...
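One way to fill in that sketch (an assumption on my part, since the answer leaves the value elided): key the dictionary by file name, with each value holding that file's rows as a list of dicts.

```python
import csv
import os
import tempfile
from csv import DictReader

# Hypothetical sample files standing in for the real CSVs.
tmpdir = tempfile.mkdtemp()
for name in ("A.csv", "AA.csv"):
    with open(os.path.join(tmpdir, name), "w", newline="") as f:
        csv.writer(f).writerows([["Date", "Close"], ["01/02/2020", "1.0"]])

# One dictionary entry per file: file name -> list of row dicts.
reader = {}
for csvfile in sorted(os.listdir(tmpdir)):
    with open(os.path.join(tmpdir, csvfile), newline="") as f:
        reader[csvfile] = [dict(row) for row in DictReader(f)]
```

Here DictReader takes the column names from each file's header row, so no fieldnames argument is needed.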
Upvotes: 3