Reputation: 3657
Is there a way to have an instance of LogisticRegression() automatically normalize the data supplied for fitting/training to z-scores to build the model? LinearRegression() has a normalize=True parameter, but maybe this doesn't make sense for LogisticRegression()?
If so, would I have to normalize unlabeled input vectors by hand (i.e., recompute the mean and standard deviation of each column) before calling predict_proba()? That would be strange, since the model would already have performed that possibly costly computation.
Thanks
Upvotes: 4
Views: 4254
Reputation: 24742
Is this what you are looking for?
from sklearn.datasets import make_classification
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
X, y = make_classification(n_samples=1000, n_features=100, weights=[0.1, 0.9], random_state=0)
X.shape
# build pipe: first standardize by subtracting the mean and dividing by the std,
# then classify
pipe = make_pipeline(StandardScaler(), LogisticRegression(class_weight='balanced'))  # 'balanced' was called 'auto' in older scikit-learn
# fit
pipe.fit(X, y)
# predict
pipe.predict_proba(X)
# to get back mean/std
scaler = pipe.steps[0][1]
scaler.mean_
Out[12]: array([ 0.0313, -0.0334, 0.0145, ..., -0.0247, 0.0191, 0.0439])
scaler.std_  # renamed to scale_ in newer scikit-learn versions
Out[13]: array([ 1. , 1.0553, 0.9805, ..., 1.0033, 1.0097, 0.9884])
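To address the concern in the question: the fitted StandardScaler inside the pipeline stores the *training-set* mean and std, and the pipeline applies them to any new input automatically when you call predict_proba(), so there is no need to re-scale by hand. A minimal sketch verifying this (variable names are illustrative):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
pipe = make_pipeline(StandardScaler(), LogisticRegression())
pipe.fit(X, y)

# Raw, unscaled inputs: the pipeline scales them with the training mean/std.
X_new = X[:5]
via_pipe = pipe.predict_proba(X_new)

# Doing the same two steps by hand with the fitted components
# gives identical probabilities.
scaler = pipe.named_steps['standardscaler']
clf = pipe.named_steps['logisticregression']
via_hand = clf.predict_proba(scaler.transform(X_new))

assert np.allclose(via_pipe, via_hand)
```

So the pipeline is exactly the "do it once, reuse at predict time" behavior you were asking for.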
Upvotes: 9