Reputation: 3793
I have a DataFrame (data) whose head looks like the following:
          status      datetime country  amount    city
601766  received  1.453916e+09  France     4.5   Paris
669244  received  1.454109e+09   Italy     6.9  Naples
I would like to predict status given datetime, country, amount and city. Since status, country and city are strings, I one-hot encoded them:
one_hot = pd.get_dummies(data['country'])
data = data.drop('country', axis=1)  # Drop the column as it is now one-hot encoded
data = data.join(one_hot)
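(For reference, pd.get_dummies can also encode several columns in one call via its columns parameter, dropping the originals automatically; a minimal sketch on a toy frame shaped like the head above:)

```python
import pandas as pd

# Toy frame shaped like the head shown above (values are illustrative)
data = pd.DataFrame({
    "status": ["received", "received"],
    "datetime": [1.453916e+09, 1.454109e+09],
    "country": ["France", "Italy"],
    "amount": [4.5, 6.9],
    "city": ["Paris", "Naples"],
})

# One call encodes both categorical feature columns and drops the originals
encoded = pd.get_dummies(data, columns=["country", "city"])
print(encoded.columns.tolist())
```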
I then create a simple LinearRegression model and fit my data:
y_data = data['status']
classifier = LinearRegression(n_jobs = -1)
X_train, X_test, y_train, y_test = train_test_split(data, y_data, test_size=0.2)
columns = X_train.columns.tolist()
classifier.fit(X_train[columns], y_train)
But I got the following error:
could not convert string to float: 'received'
I have the feeling I'm missing something here and would appreciate some input on how to proceed. Thank you for reading this far!
Upvotes: 7
Views: 23513
Reputation: 4273
Alternative (because you should really avoid using LabelEncoder on features).
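Why avoid it? A quick illustration with hypothetical city names: LabelEncoder maps categories to arbitrary integers, which a linear model then treats as an ordered, evenly spaced quantity:

```python
from sklearn.preprocessing import LabelEncoder

cities = ["Paris", "Naples", "Rome"]
codes = LabelEncoder().fit_transform(cities)

# Classes are sorted alphabetically, so the integers impose an
# artificial order (Naples < Paris < Rome) that a linear model
# would interpret as a magnitude
print(dict(zip(cities, codes)))
```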
ColumnTransformer and OneHotEncoder can one-hot encode features in a dataframe:
ct = ColumnTransformer(
    transformers=[
        ("ohe", OneHotEncoder(sparse_output=False), ["country", "city"]),
    ],
    remainder="passthrough",
).set_output(transform="pandas")

print(ct.fit_transform(X))
   ohe__country_France  ohe__country_Italy  ohe__city_Naples  ohe__city_Paris  remainder__datetime  remainder__amount
0                  1.0                 0.0               0.0              1.0               1.4539                4.5
1                  0.0                 1.0               1.0              0.0               1.4541                6.9
2                  1.0                 0.0               0.0              1.0               1.4561                5.0
Full pipeline with LogisticRegression:
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression

raw_data = pd.DataFrame(
    [["received", 1.4539, "France", 4.5, "Paris"],
     ["received", 1.4541, "Italy", 6.9, "Naples"],
     ["not-received", 1.4561, "France", 5.0, "Paris"]],
    columns=["status", "datetime", "country", "amount", "city"],
)

# X features include all variables except 'status'; the y label is 'status':
X = raw_data.drop(["status"], axis=1)
y = raw_data["status"]

# Create a pipeline that one-hot encodes "country" and "city", then fits a LogisticRegression:
pipe = make_pipeline(
    ColumnTransformer(
        transformers=[
            ("one-hot-encode", OneHotEncoder(), ["country", "city"]),
        ],
        remainder="passthrough",
    ),
    LogisticRegression(),
)
pipe.fit(X, y)
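Once fitted, the pipeline applies the same encoding at prediction time. A self-contained sketch (handle_unknown="ignore" is an optional addition so unseen categories don't raise at predict time):

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression

raw_data = pd.DataFrame(
    [["received", 1.4539, "France", 4.5, "Paris"],
     ["received", 1.4541, "Italy", 6.9, "Naples"],
     ["not-received", 1.4561, "France", 5.0, "Paris"]],
    columns=["status", "datetime", "country", "amount", "city"],
)

pipe = make_pipeline(
    ColumnTransformer(
        [("ohe", OneHotEncoder(handle_unknown="ignore"), ["country", "city"])],
        remainder="passthrough",
    ),
    LogisticRegression(),
).fit(raw_data.drop(columns="status"), raw_data["status"])

# Unseen rows go through the same one-hot encoding automatically
new_orders = pd.DataFrame(
    [[1.4570, "Italy", 7.2, "Naples"]],
    columns=["datetime", "country", "amount", "city"],
)
print(pipe.predict(new_orders))
```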
Upvotes: 4
Reputation: 189
To do one-hot encoding in a scikit-learn project, you may find it cleaner to use the scikit-learn-contrib project category_encoders: https://github.com/scikit-learn-contrib/categorical-encoding, which includes many common categorical-variable encoding methods, including one-hot.
Upvotes: 1
Reputation: 210862
Consider the following approach:
first, let's label-encode (integer-encode) all non-numeric columns:
In [220]: from sklearn.preprocessing import LabelEncoder
In [221]: x = df.select_dtypes(exclude=['number']) \
.apply(LabelEncoder().fit_transform) \
.join(df.select_dtypes(include=['number']))
In [228]: x
Out[228]:
status country city datetime amount
601766 0 0 1 1.453916e+09 4.5
669244 0 1 0 1.454109e+09 6.9
now we can use the LinearRegression classifier:
In [230]: classifier.fit(x.drop('status', axis=1), x['status'])
Out[230]: LinearRegression(copy_X=True, fit_intercept=True, n_jobs=1, normalize=False)
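A self-contained version of this approach, runnable outside the IPython session (the frame below mirrors the question's head; index values are illustrative):

```python
import pandas as pd
from sklearn.preprocessing import LabelEncoder
from sklearn.linear_model import LinearRegression

df = pd.DataFrame(
    {"status": ["received", "received"],
     "datetime": [1.453916e+09, 1.454109e+09],
     "country": ["France", "Italy"],
     "amount": [4.5, 6.9],
     "city": ["Paris", "Naples"]},
    index=[601766, 669244],
)

# Integer-encode every non-numeric column, then re-attach the numeric ones
x = (df.select_dtypes(exclude=["number"])
       .apply(LabelEncoder().fit_transform)
       .join(df.select_dtypes(include=["number"])))

classifier = LinearRegression()
classifier.fit(x.drop("status", axis=1), x["status"])
print(x)
```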
Upvotes: 5