user7101631

Reputation: 51

Filter Pandas DataFrame using GroupBy with Count of Certain Value

I would like to filter a pandas DataFrame to rows where that particular row's group has a minimum count of a specific column value.

For example, return only the rows/groups of df where the ['c2','c3'] group has at least 2 rows with 'c1' value of 1:

df = pd.DataFrame({'c1':[0,1,0,1,1,0], 'c2':[0,0,0,1,1,1], 'c3':[0,0,0,1,1,1]})

The result should return only the rows with indices 3,4,5 since only the [c2,c3] = [1,1] group has at least 2 rows with 'c1' value of 1.

df.groupby(['c2','c3']).filter(lambda x: x['c1'].count() >= 2)

does not return the required result. I need the count to apply specifically to the count of 1s, not just any value of 'c1'.
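To see why plain count() is not enough (a minimal check on the toy frame above): count() tallies every non-null value per group regardless of what that value is, so both (c2, c3) groups reach the threshold and filter keeps all six rows.

```python
import pandas as pd

df = pd.DataFrame({'c1':[0,1,0,1,1,0], 'c2':[0,0,0,1,1,1], 'c3':[0,0,0,1,1,1]})

# count() counts non-null entries, not entries equal to 1,
# so both groups report 3 and nothing is filtered out
print(df.groupby(['c2','c3'])['c1'].count())   # 3 for each group
print(len(df.groupby(['c2','c3']).filter(lambda x: x['c1'].count() >= 2)))   # 6
```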

The following works but I am not sure how to make it more pythonic:

s = df.groupby(['c2','c3']).apply(lambda x: x[x['c1']==1].count() >= 2).all(axis=1)
df = df.reset_index().set_index(['c2','c3']).loc[s[s].index].reset_index().set_index(['index'])    

Upvotes: 4

Views: 3181

Answers (2)

BENY

Reputation: 323226

Using groupby + merge:

s = df.groupby(['c2','c3']).c1.sum().ge(2)   # sum of a 0/1 column == count of 1s
s[s].index.to_frame().reset_index(drop=True).merge(df, how='left')
   c2  c3  c1
0   1   1   1
1   1   1   1
2   1   1   0
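Note that the merge rebuilds the frame, so the original row labels (3, 4, 5) are lost and the group keys come first in the column order. A small variant (a sketch, same s as above) that keeps the original index applies a Boolean mask instead of merging:

```python
import pandas as pd

df = pd.DataFrame({'c1':[0,1,0,1,1,0], 'c2':[0,0,0,1,1,1], 'c3':[0,0,0,1,1,1]})
s = df.groupby(['c2','c3']).c1.sum().ge(2)

# test each row's (c2, c3) key against the qualifying group keys
mask = df.set_index(['c2','c3']).index.isin(s[s].index)
print(df[mask])   # rows 3, 4, 5 with original index and column order intact
```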

Upvotes: 1

ALollz

Reputation: 59519

Use groupby + transform to sum a Boolean Series per group, then use the result to mask the original DataFrame.

m = df['c1'].eq(1).groupby([df['c2'], df['c3']]).transform('sum').ge(2)

# Alternatively, assign the Boolean column first
#m = df.assign(to_sum = df.c1.eq(1)).groupby(['c2', 'c3']).to_sum.transform('sum').ge(2) 

df.loc[m]
#   c1  c2  c3
#3   1   1   1
#4   1   1   1
#5   0   1   1

With filter, count is not the correct logic, because count tallies every non-null value regardless of what it is. Use == (or .eq()) to check where 'c1' equals the specific value, then sum the resulting Boolean Series and require at least 2 such occurrences per group.

df.groupby(['c2','c3']).filter(lambda x: x['c1'].eq(1).sum() >= 2)
#   c1  c2  c3
#3   1   1   1
#4   1   1   1
#5   0   1   1

While not noticeable for a small DataFrame, filter with a lambda is horribly slow as the number of groups grows. transform is fast:

import numpy as np
np.random.seed(123)
df = pd.DataFrame({'c1':np.random.randint(1,100,1000), 'c2':np.random.randint(1,100,1000), 
                   'c3':np.random.choice([1,0], 1000)})

%%timeit
m = df['c1'].eq(1).groupby([df.c2, df.c3]).transform('sum').ge(2)
df.loc[m]
#5.21 ms ± 15.9 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

%timeit df.groupby(['c2','c3']).filter(lambda x: x['c1'].eq(1).sum() >= 2)
#124 ms ± 714 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
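Another fast option (a sketch, not from the answers above; it assumes pandas >= 1.1 for DataFrame.value_counts) is to count the 1s only on the matching rows, then mask by group key:

```python
import pandas as pd

df = pd.DataFrame({'c1':[0,1,0,1,1,0], 'c2':[0,0,0,1,1,1], 'c3':[0,0,0,1,1,1]})

# count rows with c1 == 1 per (c2, c3) group, then keep groups with >= 2
ok = df.loc[df['c1'].eq(1), ['c2', 'c3']].value_counts()
mask = df.set_index(['c2', 'c3']).index.isin(ok[ok >= 2].index)
print(df[mask])   # rows 3, 4, 5
```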

Upvotes: 3
