Frank

Reputation: 745

In Python, reading multiple CSVs with different headers into one dataframe

I have dozens of csv files with similar (but not always exactly the same) headers. For instance, one has:

Year Month Day Hour Minute Direct Diffuse D_Global D_IR Zenith Test_Site

One has:

Year Month Day Hour Minute Direct Diffuse2 D_Global D_IR U_Global U_IR Zenith Test_Site

(Notice one lacks "U_Global" and "U_IR", the other has "Diffuse2" instead of "Diffuse")

I know how to pass multiple CSVs into my script, but how do I have each CSV fill only the columns for which it has values, and write NaN to all other columns in that row?

Ideally I'd have something like:

'Year','Month','Day','Hour','Minute','Direct','Diffuse','Diffuse2','D_Global','D_IR','U_Global','U_IR','Zenith','Test_Site'
1992,1,1,0,3,-999.00,-999.00,"NaN",-999.00,-999.00,"NaN","NaN",122.517,"BER"
2013,5,30,15,55,812.84,270.62,"NaN",1078.06,-999.00,"NaN","NaN",11.542,"BER"
2004,9,1,0,1,1.04,79.40,"NaN",78.67,303.58,61.06,310.95,85.142,"ALT"
2014,12,1,0,1,0.00,0.00,"NaN",-999.00,226.95,0.00,230.16,115.410,"ALT"

The other caveat is that this dataframe needs to be appended to: it needs to persist as multiple csv files are passed into it. I think I'll probably have it write out to its own csv at the end (it's eventually going to NETCDF4).

Upvotes: 4

Views: 4277

Answers (3)

MaxU - stand with Ukraine

Reputation: 210852

Assuming you have the following CSV files:

test1.csv:

year,month,day,Direct 
1992,1,1,11
2013,5,30,11
2004,9,1,11

test2.csv:

year,month,day,Direct,Direct2
1992,1,1,21,201
2013,5,30,21,202
2004,9,1,21,203

test3.csv:

year,month,day,File3
1992,1,1,text1
2013,5,30,text2
2004,9,1,text3
2016,1,1,unmatching_date

Solution:

import glob
import pandas as pd

files = glob.glob(r'd:/temp/test*.csv')

def get_merged(files, **kwargs):
    # start from the first file, then outer-merge each subsequent one
    # on all columns the two frames share
    df = pd.read_csv(files[0], **kwargs)
    for f in files[1:]:
        df = df.merge(pd.read_csv(f, **kwargs), how='outer')
    return df

print(get_merged(files))

Output:

   year  month  day  Direct   Direct  Direct2            File3
0  1992      1    1     11.0    21.0    201.0            text1
1  2013      5   30     11.0    21.0    202.0            text2
2  2004      9    1     11.0    21.0    203.0            text3
3  2016      1    1      NaN     NaN      NaN  unmatching_date

UPDATE: the usual idiomatic pd.concat(list_of_dfs) solution wouldn't work here, because it aligns by index instead of merging on the shared key columns (file_mask below is the same r'd:/temp/test*.csv' pattern):

In [192]: pd.concat([pd.read_csv(f) for f in glob.glob(file_mask)], axis=0, ignore_index=True)
Out[192]:
   Direct  Direct   Direct2            File3  day  month  year
0     NaN     11.0      NaN              NaN    1      1  1992
1     NaN     11.0      NaN              NaN   30      5  2013
2     NaN     11.0      NaN              NaN    1      9  2004
3    21.0      NaN    201.0              NaN    1      1  1992
4    21.0      NaN    202.0              NaN   30      5  2013
5    21.0      NaN    203.0              NaN    1      9  2004
6     NaN      NaN      NaN            text1    1      1  1992
7     NaN      NaN      NaN            text2   30      5  2013
8     NaN      NaN      NaN            text3    1      9  2004
9     NaN      NaN      NaN  unmatching_date    1      1  2016

In [193]: pd.concat([pd.read_csv(f) for f in glob.glob(file_mask)], axis=1, ignore_index=True)
Out[193]:
       0    1     2     3       4    5     6     7      8     9   10  11               12
0  1992.0  1.0   1.0  11.0  1992.0  1.0   1.0  21.0  201.0  1992   1   1            text1
1  2013.0  5.0  30.0  11.0  2013.0  5.0  30.0  21.0  202.0  2013   5  30            text2
2  2004.0  9.0   1.0  11.0  2004.0  9.0   1.0  21.0  203.0  2004   9   1            text3
3     NaN  NaN   NaN   NaN     NaN  NaN   NaN   NaN    NaN  2016   1   1  unmatching_date

or using index_col=None explicitly:

In [194]: pd.concat([pd.read_csv(f, index_col=None) for f in glob.glob(file_mask)], axis=0, ignore_index=True)
Out[194]:
   Direct  Direct   Direct2            File3  day  month  year
0     NaN     11.0      NaN              NaN    1      1  1992
1     NaN     11.0      NaN              NaN   30      5  2013
2     NaN     11.0      NaN              NaN    1      9  2004
3    21.0      NaN    201.0              NaN    1      1  1992
4    21.0      NaN    202.0              NaN   30      5  2013
5    21.0      NaN    203.0              NaN    1      9  2004
6     NaN      NaN      NaN            text1    1      1  1992
7     NaN      NaN      NaN            text2   30      5  2013
8     NaN      NaN      NaN            text3    1      9  2004
9     NaN      NaN      NaN  unmatching_date    1      1  2016

In [195]: pd.concat([pd.read_csv(f, index_col=None) for f in glob.glob(file_mask)], axis=1, ignore_index=True)
Out[195]:
       0    1     2     3       4    5     6     7      8     9   10  11               12
0  1992.0  1.0   1.0  11.0  1992.0  1.0   1.0  21.0  201.0  1992   1   1            text1
1  2013.0  5.0  30.0  11.0  2013.0  5.0  30.0  21.0  202.0  2013   5  30            text2
2  2004.0  9.0   1.0  11.0  2004.0  9.0   1.0  21.0  203.0  2004   9   1            text3
3     NaN  NaN   NaN   NaN     NaN  NaN   NaN   NaN    NaN  2016   1   1  unmatching_date

The following more idiomatic solution works, but it changes the original order of the columns and rows:

In [224]: dfs = [pd.read_csv(f, index_col=None) for f in glob.glob(r'd:/temp/test*.csv')]
     ...:
     ...: common_cols = list(set.intersection(*[set(x.columns.tolist()) for x in dfs]))
     ...:
     ...: pd.concat((df.set_index(common_cols) for df in dfs), axis=1).reset_index()
     ...:
Out[224]:
   month  day  year  Direct   Direct  Direct2            File3
0      1    1  1992     11.0    21.0    201.0            text1
1      1    1  2016      NaN     NaN      NaN  unmatching_date
2      5   30  2013     11.0    21.0    202.0            text2
3      9    1  2004     11.0    21.0    203.0            text3

Upvotes: 6

Maarten Fabré

Reputation: 7058

Can't pandas take care of this automagically?

http://pandas.pydata.org/pandas-docs/stable/merging.html#concatenating-using-append

If your indices overlap, don't forget to add ignore_index=True.
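For instance, concatenating frames with different headers fills the missing columns with NaN automatically; a minimal sketch, with in-memory frames standing in for the OP's CSV files:

```python
import pandas as pd

# hypothetical frames standing in for two of the OP's CSV files
df1 = pd.DataFrame({'Year': [1992], 'Direct': [-999.0], 'Diffuse': [-999.0]})
df2 = pd.DataFrame({'Year': [2004], 'Direct': [1.04], 'Diffuse2': [79.4]})

# rows are stacked; columns absent from either frame come back as NaN
merged = pd.concat([df1, df2], ignore_index=True)
print(merged)
```

Because the rows are stacked rather than merged on a key, each source row keeps its own values and only the columns its file lacked end up as NaN, which matches the appending behaviour asked for in the question.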

Upvotes: 4

Loïc

Reputation: 11943

First, run through all the files to build the full set of headers:

import os

csv_path = './csv_files'
csv_separator = ','

full_headers = []
for fn in os.listdir(csv_path):
    with open(os.path.join(csv_path, fn), 'r') as f:
        headers = f.readline().strip().split(csv_separator)
        # keep only the headers we haven't seen yet, preserving order
        full_headers += [h for h in headers if h not in full_headers]

Then write the full header line into your output file, and run through all the files again to fill it in.

You can use csv.DictReader(open('myfile.csv')) to match each value to its designated column simply.
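A hedged sketch of that second pass, assuming full_headers holds the header union from the first loop (the sample files and values here are made up for illustration); csv.DictWriter's restval fills any column a given file lacks:

```python
import csv
import os
import tempfile

# hypothetical sample files standing in for the OP's dozens of CSVs
csv_path = tempfile.mkdtemp()
with open(os.path.join(csv_path, 'a.csv'), 'w', newline='') as f:
    f.write('Year,Direct\n1992,-999.00\n')
with open(os.path.join(csv_path, 'b.csv'), 'w', newline='') as f:
    f.write('Year,Diffuse2\n2004,79.40\n')

full_headers = ['Year', 'Direct', 'Diffuse2']  # union built in the first pass

with open('combined.csv', 'w', newline='') as out:
    # restval supplies 'NaN' for any column a given file lacks
    writer = csv.DictWriter(out, fieldnames=full_headers, restval='NaN')
    writer.writeheader()
    for fn in sorted(os.listdir(csv_path)):
        with open(os.path.join(csv_path, fn), newline='') as f:
            # DictReader maps each value to its header, so rows from files
            # with different (subsets of) columns all line up in the output
            writer.writerows(csv.DictReader(f))

print(open('combined.csv').read())
```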

Upvotes: 1
