sds

Reputation: 60054

pandas: which threshold applies to each row?

Given a score column, e.g.,

import numpy as np
import pandas as pd

scores = pd.DataFrame({"score": np.random.randn(10)})

and thresholds

thresholds = pd.DataFrame({"threshold":[0.2,0.5,0.8]},index=[7,13,33])

I would like to find the applicable threshold for each score, e.g.,

      score  threshold
0 -1.613293        NaN
1 -1.357980        NaN
2  0.325720          7
3  0.116000        NaN
4  1.423171         33
5  0.282557          7
6 -1.195269        NaN
7  0.395739          7
8  1.072041         33
9  0.197853        NaN

IOW, for each score s I want the threshold t such that

t = max(t : thresholds.threshold[t] < s)

How do I do that?

PS. based on the deleted answer:

pd.cut(scores.score, bins=[-np.inf]+list(thresholds.threshold)+[np.inf],
       labels=["low"]+list(thresholds.index))

Upvotes: 0

Views: 254

Answers (3)

koPytok

Reputation: 3723

You can achieve it using np.digitize:

indices = [None] + thresholds.index.tolist()
scores["score"].apply(
    lambda x: indices[np.digitize(x, thresholds["threshold"])])
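To see why the leading `None` is needed: `np.digitize` returns the bin position of each value, and values below the first threshold get position 0, so the label list must be offset by one. A minimal self-contained check (toy scores, not the question's random data):

```python
import numpy as np

# Thresholds and the question's index labels, offset by one so that
# position 0 (score below all thresholds) maps to None.
thresholds = np.array([0.2, 0.5, 0.8])
labels = [None, 7, 13, 33]

scores = [-1.6, 0.32, 1.42, 0.19]
mapped = [labels[np.digitize(s, thresholds)] for s in scores]
# mapped == [None, 7, 33, None]
```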

Upvotes: 1

user3483203

Reputation: 51165

Using pd.cut

scores['threshold'] = pd.cut(
                         scores.score,
                         bins=thresholds.threshold.values.tolist() + [np.nan],
                         labels=thresholds.index.values
                      )

      score threshold
0 -1.613293       NaN
1 -1.357980       NaN
2  0.325720       7.0
3  0.116000       NaN
4  1.423171      33.0
5  0.282557       7.0
6 -1.195269       NaN
7  0.395739       7.0
8  1.072041      33.0
9  0.197853       NaN

Another answer claims cut is slower than apply with digitize. While cut will raise a warning here, those timings were taken on a very small DataFrame and I believe they are misleading. Here are timings on a larger one:

scores = pd.DataFrame({"score":np.random.randn(10)})
scores = pd.concat([scores]*10000)

%timeit pd.cut(scores.score,thresholds.threshold.values.tolist() + [np.nan],labels=thresholds.index.values)
4.41 ms ± 39.1 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

indices = [None] + thresholds.index.tolist()

%timeit scores["score"].apply(lambda x: indices[np.digitize(x, thresholds["threshold"])])
1.64 s ± 18.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

apply simply hides an explicit for-loop, while cut is vectorized.
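For what it's worth, the same vectorized lookup can be done without cut at all, using np.searchsorted directly (a sketch, not from the answers above; the toy data here is my own):

```python
import numpy as np
import pandas as pd

thresholds = pd.Series([0.2, 0.5, 0.8], index=[7, 13, 33])
scores = pd.Series([-1.61, 0.33, 1.42, 0.12])

# Position of each score among the sorted thresholds; 0 means
# "below all thresholds", which we map to NaN via a leading sentinel.
pos = np.searchsorted(thresholds.values, scores.values)
labels = np.concatenate(([np.nan], thresholds.index.values))
result = pd.Series(labels[pos], index=scores.index)
# result values: NaN, 7.0, 33.0, NaN
```

Like cut, this processes the whole array in one call rather than one Python-level call per row.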

Upvotes: 2

Ben.T

Reputation: 29635

You can use merge_asof with some manipulation to get the exact result.

(pd.merge_asof(scores.reset_index().sort_values('score'),
               thresholds.reset_index(),
               left_on='score', right_on='threshold', suffixes=('', '_'))
   .drop(columns='threshold').rename(columns={'index_': 'threshold'})
   .set_index('index').sort_index())

and with your data, you get:

          score  threshold
index                     
0     -1.613293        NaN
1     -1.357980        NaN
2      0.325720        7.0
3      0.116000        NaN
4      1.423171       33.0
5      0.282557        7.0
6     -1.195269        NaN
7      0.395739        7.0
8      1.072041       33.0
9      0.197853        NaN
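The reason this works is merge_asof's default direction='backward': each left row matches the last right row whose key does not exceed it, which is exactly the "largest threshold below the score" lookup. A tiny check on toy data (my own values, not the question's):

```python
import pandas as pd

# Both frames must be sorted on their join keys for merge_asof.
left = pd.DataFrame({"score": [0.1, 0.3, 1.4]})
right = pd.DataFrame({"threshold": [0.2, 0.5, 0.8], "label": [7, 13, 33]})

out = pd.merge_asof(left, right, left_on="score", right_on="threshold")
# out["label"]: NaN, 7.0, 33.0  (0.1 is below every threshold)
```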

Upvotes: 0
