Reputation: 30687
I'm learning different methods to convert categorical variables to numeric for machine-learning classifiers. I came across the pd.get_dummies method and sklearn.preprocessing.OneHotEncoder(), and I wanted to see how they differed in terms of performance and usage.
I found a tutorial on how to use OneHotEncoder() at https://xgdgsc.wordpress.com/2015/03/20/note-on-using-onehotencoder-in-scikit-learn-to-work-on-categorical-features/ since the sklearn documentation wasn't too helpful on this feature. I have a feeling I'm not doing it correctly, but...
Can someone explain the pros and cons of using pd.get_dummies over sklearn.preprocessing.OneHotEncoder() and vice versa? I know that OneHotEncoder() gives you a sparse matrix, but other than that I'm not sure how it is used and what the benefits are over the pandas method. Am I using it inefficiently?
import pandas as pd
import numpy as np
import seaborn as sns
from sklearn.datasets import load_iris
sns.set()
%matplotlib inline
#Load the iris data
iris = load_iris()
n_samples, m_features = iris.data.shape
X, y = iris.data, iris.target
D_target_dummy = dict(zip(np.arange(iris.target_names.shape[0]), iris.target_names))
DF_data = pd.DataFrame(X, columns=iris.feature_names)
DF_data["target"] = pd.Series(y).map(D_target_dummy)
#   sepal length (cm)  sepal width (cm)  petal length (cm)  petal width (cm)  target
#0                5.1               3.5                1.4               0.2   setosa
#1                4.9               3.0                1.4               0.2   setosa
#2                4.7               3.2                1.3               0.2   setosa
#3                4.6               3.1                1.5               0.2   setosa
#4                5.0               3.6                1.4               0.2   setosa
#5                5.4               3.9                1.7               0.4   setosa
DF_dummies = pd.get_dummies(DF_data["target"])
#setosa versicolor virginica
#0 1 0 0
#1 1 0 0
#2 1 0 0
#3 1 0 0
#4 1 0 0
#5 1 0 0
from sklearn.preprocessing import OneHotEncoder, LabelEncoder
def f1(DF_data):
    Enc_ohe, Enc_label = OneHotEncoder(), LabelEncoder()
    DF_data["Dummies"] = Enc_label.fit_transform(DF_data["target"])
    DF_dummies2 = pd.DataFrame(Enc_ohe.fit_transform(DF_data[["Dummies"]]).todense(),
                               columns=Enc_label.classes_)
    return DF_dummies2
%timeit pd.get_dummies(DF_data["target"])
#1000 loops, best of 3: 777 µs per loop
%timeit f1(DF_data)
#100 loops, best of 3: 2.91 ms per loop
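For reference, in scikit-learn >= 0.20 OneHotEncoder accepts string columns directly, so the LabelEncoder round-trip above is no longer needed. A minimal sketch, assuming a recent scikit-learn (use sparse=False instead of sparse_output=False on versions older than 1.2):
from sklearn.preprocessing import OneHotEncoder

# Strings are accepted directly; dense output makes DataFrame-building easy.
enc = OneHotEncoder(sparse_output=False)
arr = enc.fit_transform(DF_data[["target"]])
DF_dummies3 = pd.DataFrame(arr, columns=enc.get_feature_names_out())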
Upvotes: 151
Views: 80883
Reputation: 347
This question was asked long ago, but is still relevant in 2023.
In one sentence: both can be used for the task; which one to choose depends on personal preference and other circumstances.
In a bit more detail:
For both OneHotEncoder and get_dummies, the most robust approach is to specify the categories explicitly. For OneHotEncoder this is done via the categories parameter, which takes one list of categories per encoded column. For get_dummies you convert the relevant columns to the categorical dtype with the appropriate categories, as sketched below.
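A minimal sketch of both routes (the column and category names are illustrative):
import pandas as pd
from sklearn.preprocessing import OneHotEncoder

s = pd.Series(["red", "green"])

# OneHotEncoder: one list of categories per encoded column.
enc = OneHotEncoder(categories=[["red", "green", "blue"]])
enc.fit(s.to_frame())

# get_dummies: declare the categories on the column itself.
cat = pd.Categorical(s, categories=["red", "green", "blue"])
dummies = pd.get_dummies(cat)  # includes a "blue" column even though it is unseen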
OneHotEncoder assumes you want to encode all columns in your data, so if that is not the case you have to either manually select/transform/join with the original columns or wrap the OneHotEncoder in a ColumnTransformer. This is much easier with get_dummies.
If you like to stay in DataFrame space during your data-processing pipeline, then pandas.get_dummies is the most direct way, but if you rely on scikit-learn Pipelines then OneHotEncoder wrapped in a ColumnTransformer is more straightforward (see the sketch after this paragraph).
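As an illustration of the scikit-learn route (the column names and the classifier are placeholders):
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Encode only the listed categorical columns; pass the rest through untouched.
preprocess = ColumnTransformer(
    [("onehot", OneHotEncoder(handle_unknown="ignore"), ["color", "size"])],
    remainder="passthrough",
)
model = Pipeline([("prep", preprocess), ("clf", LogisticRegression())])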
For a full explanation with examples read my article on towards data science.
Upvotes: 0
Reputation: 1982
I really like Carl's answer and upvoted it. I will just expand Carl's example a bit so that more people will hopefully appreciate that pd.get_dummies can handle unknown categories too. The two examples below show that pd.get_dummies can accomplish the same thing as OneHotEncoder when handling unknowns.
# data is from @dzieciou's comment above
>>> data =pd.DataFrame(pd.Series(['good','bad','worst','good', 'good', 'bad']))
# new_data has two values that data does not have.
>>> new_data= pd.DataFrame(
pd.Series(['good','bad','worst','good', 'good', 'bad','excellent', 'perfect']))
>>> df = pd.get_dummies(data)
>>> col_list = df.columns.tolist()
>>> print(df)
   0_bad  0_good  0_worst
0      0       1        0
1      1       0        0
2      0       0        1
3      0       1        0
4      0       1        0
5      1       0        0
>>> new_df = pd.get_dummies(new_data)
# handle unknown categories by using .reindex() and .fillna()
>>> new_df = new_df.reindex(columns=col_list).fillna(0.00)
>>> print(new_df)
# 0_bad 0_good 0_worst
# 0 0 1 0
# 1 1 0 0
# 2 0 0 1
# 3 0 1 0
# 4 0 1 0
# 5 1 0 0
# 6 0 0 0
# 7 0 0 0
>>> encoder = OneHotEncoder(handle_unknown="ignore", sparse=False)  # use sparse_output=False in scikit-learn >= 1.2
>>> encoder.fit(data)
>>> encoder.transform(new_data)
# array([[0., 1., 0.],
# [1., 0., 0.],
# [0., 0., 1.],
# [0., 1., 0.],
# [0., 1., 0.],
# [1., 0., 0.],
# [0., 0., 0.],
# [0., 0., 0.]])
Upvotes: 9
Reputation: 69
Why not just cache or save the columns as a variable col_list from the resulting get_dummies, then use DataFrame.reindex to align the train and test datasets? Example:
df = pd.get_dummies(data)
col_list = df.columns.tolist()
new_df = pd.get_dummies(new_data)
new_df = new_df.reindex(columns=col_list).fillna(0.00)
Upvotes: 6
Reputation: 8131
For machine learning, you almost definitely want to use sklearn.OneHotEncoder. For other tasks like simple analyses, you might be able to use pd.get_dummies, which is a bit more convenient.
Note that sklearn.OneHotEncoder has been updated in the latest version so that it accepts strings for categorical variables, as well as integers.
The crux of it is that the sklearn encoder creates a transformer which persists and can then be applied to new data sets that use the same categorical variables, with consistent results.
from sklearn.preprocessing import OneHotEncoder
# Create the encoder.
encoder = OneHotEncoder(handle_unknown="ignore")
encoder.fit(X_train) # Assume for simplicity all features are categorical.
# Apply the encoder.
X_train = encoder.transform(X_train)
X_test = encoder.transform(X_test)
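A small follow-on sketch: transform returns a SciPy sparse matrix by default, and the fitted encoder can report the generated column names (get_feature_names_out requires scikit-learn >= 1.0), so we can rebuild a labeled DataFrame if we want one:
import pandas as pd

# Densify the sparse result and attach the encoder's column names.
X_test_df = pd.DataFrame(X_test.toarray(),
                         columns=encoder.get_feature_names_out())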
Note how we apply the same encoder we created via X_train to the new data set X_test.
Consider what happens if X_test contains different levels than X_train for one of its variables. For example, let's say X_train["color"] contains only "red" and "green", but in addition to those, X_test["color"] sometimes contains "blue".
If we use pd.get_dummies, X_test will end up with an additional "color_blue" column which X_train doesn't have, and the inconsistency will probably break our code later on, especially if we are feeding X_test to an sklearn model which we trained on X_train.
And if we want to process the data like this in production, where we're receiving a single example at a time, pd.get_dummies won't be of use.
With sklearn.OneHotEncoder, on the other hand, once we've created the encoder we can reuse it to produce the same output every time, with columns only for "red" and "green". And we can explicitly control what happens when it encounters the new level "blue": if we think that's impossible, we can tell it to throw an error with handle_unknown="error"; otherwise we can tell it to continue and simply set the red and green columns to 0, with handle_unknown="ignore".
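A small sketch of the scenario just described (the color data is made up for illustration):
from sklearn.preprocessing import OneHotEncoder
import pandas as pd

X_train = pd.DataFrame({"color": ["red", "green", "red"]})
X_test = pd.DataFrame({"color": ["green", "blue"]})  # "blue" was never seen in training

encoder = OneHotEncoder(handle_unknown="ignore")
encoder.fit(X_train)
print(encoder.transform(X_test).toarray())
# [[1. 0.]    <- green (column order is [color_green, color_red])
#  [0. 0.]]   <- blue: unknown, so every column is 0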
Upvotes: 255
Reputation: 20872
OneHotEncoder cannot process string values directly, so if your nominal features are strings, you need to first map them into integers. (This applies to older scikit-learn; since version 0.20, OneHotEncoder accepts string columns directly.)
pandas.get_dummies is kind of the opposite: by default it only converts string columns into one-hot representation, unless columns are specified, as illustrated below.
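A quick illustration of that default behavior (the data is made up):
import pandas as pd

df = pd.DataFrame({"color": ["red", "green"], "size": [1, 2]})

# By default only string/object (and categorical) columns are encoded;
# numeric columns pass through unchanged.
print(pd.get_dummies(df).columns.tolist())
# ['size', 'color_green', 'color_red']

# Explicitly listing columns encodes exactly those, numeric or not.
print(pd.get_dummies(df, columns=["color", "size"]).columns.tolist())
# ['color_green', 'color_red', 'size_1', 'size_2']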
Upvotes: 71