Reputation: 484
I have a dataframe which looks like the following:
index  col1  col2
1      'A'   'B'
300    'A'   'B'
301    'A'   'B'
400    'A'   'B'
510    'A'   'B'
511    'C'   'D'
512    'E'   'F'
1000   'Q'   'P'
1001   'Q'   'R'
which was sliced from another dataframe. I need to group all the rows that have consecutive indices (like 300 and 301, for example) and combine their values into a list where they differ, like the following:
index          col1             col2
1              'A'              'B'
300, 301       'A'              'B'
400            'A'              'B'
510, 511, 512  ['A', 'C', 'E']  ['B', 'D', 'F']
1000, 1001     'Q'              ['P', 'R']
So in the case of 300 and 301 the values are the same, so I just keep them. In the case of 510, 511 and 512 the values are different, so I have to make a list of them. For 1000 and 1001 the values for col1 are the same, so I keep them, but the values for col2 are different, so I make a list of them.
Any help is much appreciated, thank you!!
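For reference, here is a minimal reproduction of the input frame (assuming the quotes in the tables above are just notation and the cells hold plain strings):

import pandas as pd

df = pd.DataFrame(
    {'col1': ['A', 'A', 'A', 'A', 'A', 'C', 'E', 'Q', 'Q'],
     'col2': ['B', 'B', 'B', 'B', 'B', 'D', 'F', 'P', 'R']},
    index=[1, 300, 301, 400, 510, 511, 512, 1000, 1001])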
Upvotes: 3
Views: 54
Reputation: 862641
Use:
#convert index to column if necessary
df = df.reset_index()
#remove duplicates with set; if only one unique value remains, keep the scalar
f = lambda x: list(set(x)) if len(set(x)) > 1 else x.iat[0]
#for the index column join the values after casting them to strings
d = {'index': lambda x: ', '.join(x.astype(str)), 'col1': f, 'col2': f}
#create group labels from consecutive runs that share the first digit of the index
g = df['index'].astype(str).str[0]
s = g.ne(g.shift()).cumsum()
#aggregate each group with the dictionary of functions
df = df.groupby(s).agg(d).reset_index(drop=True)
print(df)
           index             col1             col2
0              1              'A'              'B'
1       300, 301              'A'              'B'
2            400              'A'              'B'
3  510, 511, 512  ['C', 'A', 'E']  ['D', 'B', 'F']
4     1000, 1001              'Q'       ['R', 'P']
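The group key above is built from the first digit of the index, which works for this sample but is not the same as detecting strictly consecutive indices (e.g. 19 followed by 190 would land in one group). If that matters, a sketch of a stricter key, assuming the index column holds plain integers, is to compare each index with the previous one and replace the two lines that build g and s:

#start a new group whenever the index does not increase by exactly 1
s = df['index'].diff().ne(1).cumsum()
df = df.groupby(s).agg(d).reset_index(drop=True)

With the sample data both keys produce the same five groups.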
Upvotes: 2