Yog

Reputation: 825

How do I group by a column and count the values that meet a condition using python pandas?

Input:

import pandas as pd

df = pd.DataFrame({
    'BusId': ['abc1', 'abc2', 'abc3', 'abc1', 'abc2', 'abc4'],
    'Fair': [5, 6, 7, 10, 5, 4]
})

Need to group by BusId and need the following output

Output:

BusId   Count of Fair>=5   Count of Fair>=10
abc1    2                  1
abc2    2                  0
abc3    1                  0
abc4    0                  0

Thanks for the help.

Upvotes: 0

Views: 957

Answers (3)

cs95

Reputation: 403050

You can use pd.cut to avoid hardcoding the ranges. Just cut your data and then apply a cumsum:

import numpy as np

# Bin each fare into [0, 5), [5, 10), [10, inf); right=False makes the
# left edge of each bin inclusive
binned_data = pd.cut(df.Fair,
                     bins=[0, 5, 10, np.inf],
                     labels=['>=0', '>=5', '>=10'],
                     right=False)
# A descending cumsum over the dummy columns makes a fare in a higher bin
# also count toward every lower threshold
df = (pd.get_dummies(binned_data)
        .sort_index(axis=1, ascending=False)
        .cumsum(1)
        .groupby(df.BusId, sort=False)
        .sum()
        .iloc[:, 1::-1])

df
       >=5  >=10
BusId           
abc1     2     1
abc2     2     0
abc3     1     0
abc4     0     0
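If you prefer not to depend on the dummy-column ordering, the same "no hardcoding" idea can be sketched by looping over a list of thresholds (`thresholds` and `res` are names assumed here for illustration):

```python
import pandas as pd

df = pd.DataFrame({
    'BusId': ['abc1', 'abc2', 'abc3', 'abc1', 'abc2', 'abc4'],
    'Fair': [5, 6, 7, 10, 5, 4],
})

thresholds = [5, 10]  # add more thresholds here without touching the logic
# For each threshold, compare the whole column once, then count the True
# values per BusId; booleans sum to integer counts
res = pd.DataFrame({
    f'>={t}': df.Fair.ge(t).groupby(df.BusId).sum()
    for t in thresholds
})
print(res)
```

This stays fully vectorized per threshold and produces one column per entry in `thresholds`.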

Upvotes: 2

user3483203

Reputation: 51175

Use agg on your Series with two helper functions to count the values at or above each of your thresholds.

Note, however, that passing a dict of functions to a Series groupby aggregation, as done here, is deprecated and will be removed in a future version of pandas.

df.groupby('BusId').Fair.agg({
    'gt5': lambda x: (x>=5).sum(),
    'gt10': lambda x: (x>=10).sum()
})

       gt5  gt10
BusId
abc1     2     1
abc2     2     0
abc3     1     0
abc4     0     0
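Since the dict-of-functions form is deprecated, newer pandas (0.25+) offers named aggregation as a drop-in replacement; a minimal sketch (`res`, `gt5`, and `gt10` are names chosen here for illustration):

```python
import pandas as pd

df = pd.DataFrame({
    'BusId': ['abc1', 'abc2', 'abc3', 'abc1', 'abc2', 'abc4'],
    'Fair': [5, 6, 7, 10, 5, 4],
})

# Named aggregation (pandas >= 0.25): each keyword becomes an output
# column, so no deprecated dict-of-funcs is needed
res = df.groupby('BusId').Fair.agg(
    gt5=lambda x: (x >= 5).sum(),
    gt10=lambda x: (x >= 10).sum(),
)
print(res)
```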

You could also remove the use of lambda:

out = df.assign(gt5=df.Fair.ge(5), gt10=df.Fair.ge(10))
out.groupby('BusId').agg({'gt5': 'sum', 'gt10': 'sum'}).astype(int)

       gt5  gt10
BusId
abc1     2     1
abc2     2     0
abc3     1     0
abc4     0     0

The second approach will be slightly faster:

%%timeit
df.groupby('BusId').Fair.agg({
    'gt5': lambda x: (x>=5).sum(),
    'gt10': lambda x: (x>=10).sum()
})

5.05 ms ± 69 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

%%timeit
out = df.assign(gt5=df.Fair.ge(5), gt10=df.Fair.ge(10))
out.groupby('BusId').agg({'gt5': 'sum', 'gt10': 'sum'}).astype(int)

3.76 ms ± 44.2 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

Upvotes: 2

KenHBS

Reputation: 7174

To avoid the deprecated agg call, you could use apply instead. The essence is the same as in the answer by user3483203:

df.groupby('BusId').apply(lambda x: pd.Series(
    dict(five=(x.Fair >= 5).sum(),
         ten=(x.Fair >= 10).sum())))
#        five  ten
# BusId
# abc1      2    1
# abc2      2    0
# abc3      1    0
# abc4      0    0

Upvotes: 1
