anwartheravian

Reputation: 1101

Pandas One hot encoding: Bundling together less frequent categories

I'm doing one-hot encoding over a categorical column which has some 18 different values. I want to create new columns only for those values that appear more than some threshold (say 1%), and one additional column, named other values, which is 1 if the value is anything other than those frequent values.

I'm using Pandas with scikit-learn. I've explored pandas' get_dummies and scikit-learn's OneHotEncoder, but can't figure out how to bundle the less frequent values into one column.
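For reference, recent scikit-learn releases (1.1 and later) can do this grouping natively via OneHotEncoder's min_frequency parameter, which lumps rare categories into a single infrequent column. A minimal sketch on made-up toy data, assuming scikit-learn >= 1.2 for the sparse_output keyword:

import pandas as pd
from sklearn.preprocessing import OneHotEncoder

df = pd.DataFrame({'cat': list('aaabbcdef')})

# any category seen in fewer than 20% of rows is grouped into one 'infrequent' column
enc = OneHotEncoder(min_frequency=0.2, sparse_output=False)
out = enc.fit_transform(df[['cat']])
print(enc.get_feature_names_out())  # ['cat_a' 'cat_b' 'cat_infrequent_sklearn']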

Upvotes: 6

Views: 2555

Answers (4)

Ka Wa Yip

Reputation: 2983

An improved version:

  • The previous solutions do not scale well when the dataframe is large.

  • The situation also becomes complicated when you want to perform one-hot encoding on only one column while your original dataframe has more than one column.

Here is a more general and scalable (faster) solution.

It is illustrated with an example df with two columns and 1 million rows:

import random
import string
import pandas as pd

df = pd.DataFrame(
    {'1st': [random.sample(["orange", "apple", "banana"], k=1)[0] for i in range(1000000)],
     '2nd': [random.sample(list(string.ascii_lowercase), k=1)[0] for i in range(1000000)]}
    )
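As an aside, the per-row sampling loop can be replaced with random.choices, which draws a whole column in one call (same column names, just a faster way to build the test frame):

import random
import string
import pandas as pd

df = pd.DataFrame(
    {'1st': random.choices(["orange", "apple", "banana"], k=1000000),
     '2nd': random.choices(string.ascii_lowercase, k=1000000)}
    )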

The first 10 rows, df.head(10), are:

    1st     2nd
0   banana  t
1   orange  t
2   banana  m
3   banana  g
4   banana  g
5   orange  a
6   apple   x
7   orange  s
8   orange  d
9   apple   u

The counts from df['2nd'].value_counts() are:

s    39004
k    38726
n    38720
b    38699
t    38688
p    38646
u    38638
w    38611
y    38587
o    38576
q    38559
x    38558
r    38545
i    38497
h    38429
v    38385
m    38369
j    38278
f    38262
e    38241
a    38241
l    38236
g    38210
z    38202
c    38058
d    38035

Step 1: Define the threshold

threshold = 38500

Step 2: Focus on the column(s) you want to one-hot encode, and change the entries whose frequency is below the threshold to "others"

%timeit df.loc[df['2nd'].value_counts()[df['2nd']].values < threshold, '2nd'] = "others"

Time taken is 206 ms ± 346 µs per loop (mean ± std. dev. of 7 runs, 1 loop each).
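To see what the mask is doing, here is an illustration on a made-up toy series: indexing the value_counts result by the column itself looks up each row's own category count, and .values turns that into a positional boolean mask for .loc:

import pandas as pd

s = pd.Series(['a', 'b', 'a', 'c'])
counts = s.value_counts()      # a: 2, b: 1, c: 1
per_row = counts[s].values     # each row's own category count: [2, 1, 2, 1]
print(per_row < 2)             # [False  True False  True] -> rows to relabel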

Step 3: Apply one-hot encoding as usual
df = pd.get_dummies(df, columns = ['2nd'], prefix='', prefix_sep='')

After one-hot encoding, the first 10 rows df.head(10) become:

    1st     b   k   n   o   others  p   q   r   s   t   u   w   x   y
0   banana  0   0   0   0   0       0   0   0   0   1   0   0   0   0
1   orange  0   0   0   0   0       0   0   0   0   1   0   0   0   0
2   banana  0   0   0   0   1       0   0   0   0   0   0   0   0   0
3   banana  0   0   0   0   1       0   0   0   0   0   0   0   0   0
4   banana  0   0   0   0   1       0   0   0   0   0   0   0   0   0
5   orange  0   0   0   0   1       0   0   0   0   0   0   0   0   0
6   apple   0   0   0   0   0       0   0   0   0   0   0   0   1   0
7   orange  0   0   0   0   0       0   0   0   1   0   0   0   0   0
8   orange  0   0   0   0   1       0   0   0   0   0   0   0   0   0
9   apple   0   0   0   0   0       0   0   0   0   0   1   0   0   0

Step 4 (optional): If you want others to be the last column of the df, you can try:
df = df[[col for col in df.columns if col != 'others'] + ['others']]

This shifts others to the last column.

    1st     b   k   n   o   p   q   r   s   t   u   w   x   y   others
0   banana  0   0   0   0   0   0   0   0   1   0   0   0   0   0
1   orange  0   0   0   0   0   0   0   0   1   0   0   0   0   0
2   banana  0   0   0   0   0   0   0   0   0   0   0   0   0   1
3   banana  0   0   0   0   0   0   0   0   0   0   0   0   0   1
4   banana  0   0   0   0   0   0   0   0   0   0   0   0   0   1
5   orange  0   0   0   0   0   0   0   0   0   0   0   0   0   1
6   apple   0   0   0   0   0   0   0   0   0   0   0   1   0   0
7   orange  0   0   0   0   0   0   0   1   0   0   0   0   0   0
8   orange  0   0   0   0   0   0   0   0   0   0   0   0   0   1
9   apple   0   0   0   0   0   0   0   0   0   1   0   0   0   0
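An equivalent one-liner, for what it's worth: popping a column and assigning it back appends it at the end:

df['others'] = df.pop('others')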

Upvotes: 0

bohontw

Reputation: 310

R has a handy function, fct_lump, for exactly this purpose, and it has been ported to Python in the siuba package: you simply choose the number of levels to keep, and all other levels are bundled into 'Other'.

pip install siuba  # in a Python or Anaconda prompt shell

# use the library as:
from siuba.dply.forcats import fct_lump

# just like R's fct_lump: keep the 10 most frequent levels,
# lumping all the others into 'Other'
df['Your_column'] = fct_lump(df['Your_column'], n=10)

df['Your_column'].value_counts()  # check your levels
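For completeness, a minimal end-to-end sketch combining fct_lump with get_dummies (the toy data is made up; assumes siuba is installed):

import pandas as pd
from siuba.dply.forcats import fct_lump

df = pd.DataFrame({'x': list('aaabbcdef')})

# keep the 2 most frequent levels; everything else becomes 'Other'
df['x'] = fct_lump(df['x'], n=2)
print(pd.get_dummies(df['x']))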

Upvotes: 0

piRSquared

Reputation: 294228

plan

  • pd.get_dummies to one-hot encode as normal
  • sum() < threshold to identify columns that get aggregated
    • I use value_counts with the parameter normalize=True to get the percentage of occurrence.
  • join

import numpy as np
import pandas as pd

def hot_mess(s, thresh):
    d = pd.get_dummies(s)
    # boolean mask of categories whose relative frequency is below the threshold
    f = s.value_counts(sort=False, normalize=True) < thresh
    if f.sum() == 0:
        return d
    else:
        # keep the frequent columns; collapse the rest into a single 'other' column
        return d.loc[:, ~f].join(d.loc[:, f].sum(1).rename('other'))
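Since the frequencies are normalized, thresh is a fraction; for the question's 1% cutoff you would call, for example:

encoded = hot_mess(s, 0.01)   # bundle categories occurring in < 1% of rows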

Consider the pd.Series s

s = pd.Series(np.repeat(list('abcdef'), range(1, 7)))

s

0     a
1     b
2     b
3     c
4     c
5     c
6     d
7     d
8     d
9     d
10    e
11    e
12    e
13    e
14    e
15    f
16    f
17    f
18    f
19    f
20    f
dtype: object

hot_mess(s, 0)

    a  b  c  d  e  f
0   1  0  0  0  0  0
1   0  1  0  0  0  0
2   0  1  0  0  0  0
3   0  0  1  0  0  0
4   0  0  1  0  0  0
5   0  0  1  0  0  0
6   0  0  0  1  0  0
7   0  0  0  1  0  0
8   0  0  0  1  0  0
9   0  0  0  1  0  0
10  0  0  0  0  1  0
11  0  0  0  0  1  0
12  0  0  0  0  1  0
13  0  0  0  0  1  0
14  0  0  0  0  1  0
15  0  0  0  0  0  1
16  0  0  0  0  0  1
17  0  0  0  0  0  1
18  0  0  0  0  0  1
19  0  0  0  0  0  1
20  0  0  0  0  0  1

hot_mess(s, .1)

    c  d  e  f  other
0   0  0  0  0      1
1   0  0  0  0      1
2   0  0  0  0      1
3   1  0  0  0      0
4   1  0  0  0      0
5   1  0  0  0      0
6   0  1  0  0      0
7   0  1  0  0      0
8   0  1  0  0      0
9   0  1  0  0      0
10  0  0  1  0      0
11  0  0  1  0      0
12  0  0  1  0      0
13  0  0  1  0      0
14  0  0  1  0      0
15  0  0  0  1      0
16  0  0  0  1      0
17  0  0  0  1      0
18  0  0  0  1      0
19  0  0  0  1      0
20  0  0  0  1      0

Upvotes: 4

johnchase

Reputation: 13705

How about something like the following:

Create a data frame:

import pandas as pd

df = pd.DataFrame(data=list('abbgcca'), columns=['x'])
df

    x
0   a
1   b
2   b
3   g
4   c 
5   c
6   a

Replace values that are present less frequently than a given threshold. I'll create a copy of the column so that I'm not modifying the original dataframe. The first step is to build a dictionary from value_counts and then replace the actual values with their counts so that they can be compared against the threshold. Set values below that threshold to 'other values', then use pd.get_dummies to get the dummy variables.

# set the threshold, for example 20%
thresh = 0.2
x = df.x.copy()
# replace any values present less often than the threshold with 'other values'
x[x.replace(x.value_counts().to_dict()) < len(x)*thresh] = 'other values'
# get dummies
pd.get_dummies(x)

        a       b       c       other values
    0   1.0     0.0     0.0     0.0
    1   0.0     1.0     0.0     0.0
    2   0.0     1.0     0.0     0.0
    3   0.0     0.0     0.0     1.0
    4   0.0     0.0     1.0     0.0
    5   0.0     0.0     1.0     0.0
    6   1.0     0.0     0.0     0.0

Alternatively, you could use Counter; since Counter is a dict subclass, Series.replace maps each value straight to its count, which may be a bit cleaner:

from collections import Counter
x[x.replace(Counter(x)) < len(x)*thresh] = 'other values'

Upvotes: 4
