Reputation: 3747
So I have a few CSV files I'm trying to work with, but some of them have multiple columns with the same name.
For example I could have a csv like this:
ID Name a a a b b
1 test1 1 NaN NaN "a" NaN
2 test2 NaN 2 NaN "a" NaN
3 test3 2 3 NaN NaN "b"
4 test4 NaN NaN 4 NaN "b"
Loading it into pandas is giving me this:
ID Name a a.1 a.2 b b.1
1 test1 1 NaN NaN "a" NaN
2 test2 NaN 2 NaN "a" NaN
3 test3 2 3 NaN NaN "b"
4 test4 NaN NaN 4 NaN "b"
What I would like to do is merge those same-name columns into one column (if there are multiple values, keeping those values separated), and my ideal output would be this:
ID Name a b
1 test1 "1" "a"
2 test2 "2" "a"
3 test3 "2;3" "b"
4 test4 "4" "b"
So I'm wondering, is this possible?
Upvotes: 10
Views: 42233
Reputation: 31
Expanding on one of the previous answers: the columns come in from read_csv with suffixes added to make them unique, as you've noticed: a, a.1, a.2, etc.
You may need to pass a function to groupby in order to cater for this, e.g.:
import pandas as pd

df = pd.read_csv('data.csv')  # csv file with multiple columns of the same name

# function to join a row's non-null values with ';'
def sjoin(x):
    return ';'.join(x[x.notnull()].astype(str))

# function to ignore the suffix on the column, e.g. a.1 and a.2 will be grouped together as 'a'
def groupby_field(col):
    return col.split('.')[0]

df = df.groupby(groupby_field, axis=1).apply(lambda x: x.apply(sjoin, axis=1))
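The suffixing behavior itself is easy to verify in isolation. A minimal sketch (the data here is made up, not the asker's file):

```python
import io
import pandas as pd

# read_csv de-duplicates repeated header names by appending .1, .2, ...
data = io.StringIO("ID,a,a,b,b\n1,1,,x,\n2,,2,,y\n")
df = pd.read_csv(data)
print(list(df.columns))  # ['ID', 'a', 'a.1', 'b', 'b.1']
```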
Upvotes: 3
Reputation: 306
If you wanted to patch the DataFrame, you could do:
# consolidate the columns, taking the first non-null value instead of joining with ';'
s_fixed_a = df['a'].fillna(df['a.1']).fillna(df['a.2'])
s_fixed_b = df['b'].fillna(df['b.1'])
# create the new df
df_resulting = df[['ID', 'Name']].merge(s_fixed_a, left_index=True, right_index=True).merge(s_fixed_b, left_index=True, right_index=True)
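If there are many suffixed columns, the same first-non-null consolidation can be written without chaining fillna calls, by backfilling along the row axis. A sketch with made-up data:

```python
import numpy as np
import pandas as pd

# the suffixed columns read_csv would produce for three duplicate 'a' columns
df = pd.DataFrame({'a':   [1.0, np.nan, np.nan],
                   'a.1': [np.nan, 2.0, np.nan],
                   'a.2': [np.nan, np.nan, 4.0]})

# backfill across each row, then keep the first column: the first non-null value wins
s_fixed_a = df.bfill(axis=1).iloc[:, 0]
print(s_fixed_a.tolist())  # [1.0, 2.0, 4.0]
```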
Upvotes: 1
Reputation: 68256
Of course DSM and CT Zhu have marvelously concise answers that utilize a lot of built-in features of Python in general and DataFrames in particular. Here's something a little -- [cough] -- verbose.
import pandas
from io import StringIO  # Python 3.x
# from StringIO import StringIO  # Python 2.x

def myJoiner(row):
    # collect the row's non-null values as strings and join them with ';'
    newrow = [str(r) for r in row if not pandas.isnull(r)]
    return ';'.join(newrow)

def groupCols(df, key):
    # select every column whose name contains `key`
    # (df.select is deprecated in modern pandas, so use .loc with a list comprehension)
    columns = df.loc[:, [col for col in df.columns if key in col]]
    joined = columns.apply(myJoiner, axis=1)
    joined.name = key
    return pandas.DataFrame(joined)

data = StringIO("""\
ID Name a a a b b
1 test1 1 NaN NaN "a" NaN
2 test2 NaN 2 NaN "a" NaN
3 test3 2 3 NaN NaN "b"
4 test4 NaN NaN 4 NaN "b"
""")
df = pandas.read_table(data, sep=r'\s+')
df.set_index(['ID', 'Name'], inplace=True)
AB = groupCols(df, 'a').join(groupCols(df, 'b'))
print(AB)
Which gives me:
a b
ID Name
1 test1 1.0 a
2 test2 2.0 a
3 test3 2.0;3.0 b
4 test4 4.0 b
Upvotes: 5
Reputation: 54400
Probably it is not a good idea to have duplicated column names, but this will work:
In [72]:
df2 = df[['ID', 'Name']].copy()
df2['a'] = '"' + df.T[df.columns.values=='a'].apply(lambda x: ';'.join(["%i" % item for item in x[x.notnull()]])) + '"' # these columns are of float dtype
df2['b'] = df.T[df.columns.values=='b'].apply(lambda x: ';'.join([item for item in x[x.notnull()]])) # these columns are of object dtype
print(df2)
ID Name a b
0 1 test1 "1" "a"
1 2 test2 "2" "a"
2 3 test3 "2;3" "b"
3 4 test4 "4" "b"
[4 rows x 4 columns]
Upvotes: 5
Reputation: 353604
You could use groupby on axis=1, and experiment with something like
>>> def sjoin(x): return ';'.join(x[x.notnull()].astype(str))
>>> df.groupby(level=0, axis=1).apply(lambda x: x.apply(sjoin, axis=1))
ID Name a b
0 1 test1 1.0 a
1 2 test2 2.0 a
2 3 test3 2.0;3.0 b
3 4 test4 4.0 b
where instead of using .astype(str), you could use whatever formatting operation you wanted.
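Note that groupby(..., axis=1) has since been deprecated and removed in pandas 2.x. One way to get the same result on recent versions is to group the transpose by its index level and transpose back. A sketch, assuming genuinely duplicated column labels as in the question (the data here is made up):

```python
import numpy as np
import pandas as pd

def sjoin(x):
    return ';'.join(x[x.notnull()].astype(str))

# a frame with truly duplicated column labels
df = pd.DataFrame([[1.0, np.nan], [np.nan, 3.0], [2.0, 3.0]],
                  columns=['a', 'a'])

# the transposed rows are the original columns; group them by label,
# join each original row's non-null values, then transpose back
out = df.T.groupby(level=0).apply(lambda g: g.apply(sjoin, axis=0)).T
print(out)
```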
Upvotes: 16