Reputation: 124
I'm working on a text classification problem that trains well, but my categories are heavily imbalanced, which hurts the results. The two largest categories are over 80x larger than the smallest category, so a disproportionate share of the classifications go to those two categories. I need to select n rows (where n can be arbitrarily large) from each category. My dataset is quite large (10m rows, 1k unique categories).
Let's say the dataframe is:
import pandas as pd

data = {
    'category': ['2', '2', '2', '2', '4', '4', '4', '4', '4', '4', '6', '6', '6'],
    'text': ['t1', 't2', 't3', 't4', 't5', 't6', 't7', 't8', 't9', 't10', 't11', 't12', 't13']
}
df = pd.DataFrame(data)
How could I select n random rows per category?
I have tried to use np.random.choice to select n random rows, but I can't find a way to grab the chosen index so that I can drop the remaining rows by index.
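For reference, the direction I was attempting looks roughly like this (a sketch only; the grouping step is just one way I found to get per-category indices, and I'm not sure it's the right approach):

import numpy as np

n = 3
# pick up to n random index labels per category, then select those labels
keep = np.concatenate([
    np.random.choice(idx, size=min(n, len(idx)), replace=False)
    for idx in df.groupby('category').groups.values()
])
sampled = df.loc[keep]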
The ideal output for n = 3 would be something like:
>>> df.head(9)
  category text
0        2   t3
1        6  t11
2        6  t13
3        4   t6
4        2   t1
5        4   t9
6        4   t8
7        2   t4
8        6  t12
Upvotes: 4
Views: 1652
Reputation: 150745
You can use sample and groupby().head():
# shuffle the whole frame, then keep the first 3 rows seen per category
df.sample(frac=1).groupby('category').head(3)
Output:
   category text
4         4   t5
12        6  t13
1         2   t2
8         4   t9
9         4  t10
3         2   t4
10        6  t11
0         2   t1
11        6  t12
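Note that head(3) simply returns every row of a group that has fewer than 3. If your pandas version is 1.1 or newer, you can also sample per group directly with GroupBy.sample (a sketch, assuming that version):

# exactly 3 random rows per category; raises if a group has fewer than 3
# (pass replace=True to sample such small groups with replacement)
df.groupby('category').sample(n=3, random_state=0)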
Upvotes: 5