Reputation: 789
For a given dataframe with m columns (let's assume m = 10), I am trying to find the top n column values (let's assume n = 2) within each row. After finding these top n values for each row, I would like to set the remaining m - n column values in that row to 0.
For example, starting with the dataframe of values in the first table below, I am trying to produce the result shown in the second table by applying the filtering described above. If more than n columns share the same value, the lower column index is given preference.
| col_A | col_B | col_C | col_D | col_E |
|-------|-------|-------|-------|-------|
| 0.1 | 0.1 | 0.3 | 0.4 | 0.5 |
| 0.06 | 0.1 | 0.1 | 0.1 | 0.01 |
| 0.24 | 0.24 | 0.24 | 0.24 | 0.24 |
| 0.20 | 0.25 | 0.30 | 0.12 | 0.02 |
Desired output:
| col_A | col_B | col_C | col_D | col_E |
|-------|-------|-------|-------|-------|
| 0 | 0 | 0 | 0.4 | 0.5 |
| 0 | 0.1 | 0.1 | 0 | 0 |
| 0.24 | 0.24 | 0 | 0 | 0 |
| 0 | 0.25 | 0.3 | 0 | 0 |
Is there an easier way to implement this? A vectorized approach would dramatically reduce processing time on large dataframes.
Thanks
Upvotes: 3
Views: 713
Reputation: 863801
The first idea is to compare the top N values per row with Series.nlargest
and then set the remaining values to 0 with DataFrame.where
:
N = 2
df = df.where(df.apply(lambda x: x.eq(x.nlargest(N)), axis=1), 0)
print (df)
col_A col_B col_C col_D col_E
0 0.00 0.00 0.0 0.4 0.5
1 0.00 0.10 0.1 0.0 0.0
2 0.24 0.24 0.0 0.0 0.0
3 0.00 0.25 0.3 0.0 0.0
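A rank-based alternative (my own sketch, not from the answer) avoids the per-row `apply` and also encodes the tie rule directly: `method='first'` assigns ranks in order of appearance, so lower column indices win on equal values:

```python
import pandas as pd

df = pd.DataFrame({
    'col_A': [0.1, 0.06, 0.24, 0.20],
    'col_B': [0.1, 0.1, 0.24, 0.25],
    'col_C': [0.3, 0.1, 0.24, 0.30],
    'col_D': [0.4, 0.1, 0.24, 0.12],
    'col_E': [0.5, 0.01, 0.24, 0.02],
})

N = 2
# rank descending; method='first' breaks ties by position,
# so lower column indices are preferred on equal values
out = df.where(df.rank(axis=1, method='first', ascending=False) <= N, 0)
print(out)
```

On the all-ties row (all 0.24) this keeps `col_A` and `col_B`, matching the desired output.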
For better performance, use numpy
; solution adapted from @Divakar:
import numpy as np

N = 2
# https://stackoverflow.com/a/61518029/2901002
# stable mergesort keeps lower column indices first on ties
idx = np.argsort(-df.to_numpy(), kind='mergesort')[:, :N]
mask = np.zeros(df.shape, dtype=bool)
np.put_along_axis(mask, idx, True, axis=-1)
df = df.where(mask, 0)
print (df)
col_A col_B col_C col_D col_E
0 0.00 0.00 0.0 0.4 0.5
1 0.00 0.10 0.1 0.0 0.0
2 0.24 0.24 0.0 0.0 0.0
3 0.00 0.25 0.3 0.0 0.0
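The numpy approach can be wrapped in a reusable, self-contained function (the name `keep_top_n` is my own, not from the answer):

```python
import numpy as np
import pandas as pd

def keep_top_n(df, n):
    """Keep the n largest values in each row; set the rest to 0.

    Sorting the negated values with a stable mergesort means that,
    on ties, lower column indices are preferred.
    """
    idx = np.argsort(-df.to_numpy(), kind='mergesort')[:, :n]
    mask = np.zeros(df.shape, dtype=bool)
    np.put_along_axis(mask, idx, True, axis=-1)
    return df.where(mask, 0)

df = pd.DataFrame({
    'col_A': [0.1, 0.06, 0.24, 0.20],
    'col_B': [0.1, 0.1, 0.24, 0.25],
    'col_C': [0.3, 0.1, 0.24, 0.30],
    'col_D': [0.4, 0.1, 0.24, 0.12],
    'col_E': [0.5, 0.01, 0.24, 0.02],
})
print(keep_top_n(df, 2))
```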
Upvotes: 4