Reputation: 6255
I am trying to create a pandas DataFrame, and it works fine for a single file. Now I need to build it from multiple files which have the same data structure, so instead of a single file name I have a list of file names from which I would like to create the DataFrame.
I'm not sure of the way to append to the current DataFrame in pandas, or whether there is a way for pandas to read a list of files into one DataFrame.
Upvotes: 20
Views: 31784
Reputation: 1850
Here is a simple solution that avoids using a list to hold all the data frames, in case you don't need them in a list. It creates a separate dataframe for each file; you can then pd.concat them.
import os
import fnmatch

# get the CSV files only
files = fnmatch.filter(os.listdir('.'), '*.csv')
files
The output, which is now a list of the file names:
['Feedback Form Submissions 1.21-1.25.22.csv',
'Feedback Form Submissions 1.21.22.csv',
'Feedback Form Submissions 1.25-1.31.22.csv']
Now create a simple list of new names to make working with them easier:
# use a simple naming format
names = []
for i in range(len(files)):
    names.append('data' + str(i))
names
['data0', 'data1', 'data2']
You can use any list of names that you want. The next step takes the file names and the list of names and assigns each loaded dataframe to one of those names.
import pandas as pd

# i is the index into the list of names
i = 0
# iterate through the file names
for file in files:
    # read the current file into a dataframe
    df = pd.read_csv(file, low_memory=False)
    # get the matching name from the list; this is a string
    new_name = names[i]
    # create a variable with that name and bind it to a copy of the dataframe
    locals()[new_name] = df.copy()
    # move on to the next name
    i = i + 1
You now have 3 separate dataframes named data0, data1, and data2, and can run commands like
data2.info()
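If you later want them in a single DataFrame after all, a minimal sketch (assuming the data0, data1, and data2 frames created above have matching columns):

import pandas as pd

# stack the individual dataframes; ignore_index rebuilds the row index
combined = pd.concat([data0, data1, data2], ignore_index=True)
combined.info()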
Upvotes: 0
Reputation: 1086
import os
import pandas as pd

data = []
thisdir = os.getcwd()
# walk the current directory tree and collect the matching file names
for r, d, f in os.walk(thisdir):
    for file in f:
        if ".docx" in file:
            data.append(file)
df = pd.DataFrame(data)
Upvotes: 0
Reputation: 105
I might try to concatenate the files before feeding them to pandas. If you're on Linux or macOS you could use cat; otherwise a very simple Python function, like the sketch below, could do the job for you.
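A minimal sketch of such a function, assuming plain CSV files that all share a header row (the file names and the concat_files name are just illustrative):

def concat_files(paths, out_path):
    """Concatenate text files, keeping the header row from the first file only."""
    with open(out_path, 'w') as out:
        for i, path in enumerate(paths):
            with open(path) as f:
                lines = f.readlines()
            # keep every line of the first file, skip the header on the rest
            out.writelines(lines if i == 0 else lines[1:])

concat_files(['part1.csv', 'part2.csv'], 'combined.csv')

You can then feed the combined file to a single pd.read_csv call.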
Upvotes: 1
Reputation: 123
Potentially horribly inefficient but...
Why not use read_csv to build two (or more) dataframes, then use join to put them together?
That said, it would be easier to answer your question if you provided some data or some of the code you've used thus far.
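A minimal sketch of that approach, assuming two hypothetical files left.csv and right.csv whose rows share an id column and whose other columns differ:

import pandas as pd

# read each file, using the shared "id" column as the index
left = pd.read_csv('left.csv', index_col='id')
right = pd.read_csv('right.csv', index_col='id')

# join lines the two frames up side by side on that index
combined = left.join(right, how='outer')

Note that join combines columns side by side; for stacking files with identical columns on top of each other, concat (shown in another answer) is the usual tool.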
Upvotes: 3
Reputation: 31073
The pandas concat command is your friend here. Let's say you have all your files in a directory, targetdir. You can:
import os
import pandas as pd

# list the files
filelist = os.listdir(targetdir)
# read them into pandas (join with the directory so the paths resolve)
df_list = [pd.read_table(os.path.join(targetdir, file)) for file in filelist]
# concatenate them together
big_df = pd.concat(df_list)
Upvotes: 41
Reputation: 7358
Are these files in a csv format? If so, you could use read_csv: http://pandas.sourceforge.net/io.html
Once you have read the files and saved them in two dataframes, you could merge the two dataframes or add additional columns to one of them (assuming a common index). Pandas should be able to fill in missing rows.
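A minimal sketch of the merge idea, assuming two hypothetical files a.csv and b.csv that share a key column (names are illustrative only):

import pandas as pd

a = pd.read_csv('a.csv')
b = pd.read_csv('b.csv')

# outer merge on the shared "key" column; rows missing on one side are filled with NaN
merged = pd.merge(a, b, on='key', how='outer')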
Upvotes: 0