Reputation: 226
I'm trying to figure out how to use logistic regression with GridSearchCV, but I get a nasty error, and I can't tell whether the estimator is unsuitable for GridSearchCV or whether my LogisticRegression is not set up correctly. I made it work for random forest and KNN, but I'm stuck with this implementation.
I have a small dataset, which is why I want to use the liblinear solver (even though it is the default, as described in the documentation).
tuned_parameters = {'C': [0.1, 0.5, 1, 5, 10, 50, 100]}
clf = GridSearchCV(LogisticRegression(solver='liblinear'), tuned_parameters, cv=5, scoring="accuracy")
clf.fit(X_train, y_train)
and the error:
StratifiedShuffleSplit(n_splits=1, random_state=0, test_size=0.4,
train_size=None)
Traceback (most recent call last):
File "linearRegression.py", line 105, in <module>
clf.fit(X_train, y_train)
File "/usr/local/lib/python2.7/dist-packages/sklearn/model_selection/_search.py", line 945, in fit
return self._fit(X, y, groups, ParameterGrid(self.param_grid))
File "/usr/local/lib/python2.7/dist-packages/sklearn/model_selection/_search.py", line 564, in _fit
for parameters in parameter_iterable
File "/usr/local/lib/python2.7/dist-packages/sklearn/externals/joblib/parallel.py", line 758, in __call__
while self.dispatch_one_batch(iterator):
File "/usr/local/lib/python2.7/dist-packages/sklearn/externals/joblib/parallel.py", line 608, in dispatch_one_batch
self._dispatch(tasks)
File "/usr/local/lib/python2.7/dist-packages/sklearn/externals/joblib/parallel.py", line 571, in _dispatch
job = self._backend.apply_async(batch, callback=cb)
File "/usr/local/lib/python2.7/dist-packages/sklearn/externals/joblib/_parallel_backends.py", line 109, in apply_async
result = ImmediateResult(func)
File "/usr/local/lib/python2.7/dist-packages/sklearn/externals/joblib/_parallel_backends.py", line 326, in __init__
self.results = batch()
File "/usr/local/lib/python2.7/dist-packages/sklearn/externals/joblib/parallel.py", line 131, in __call__
return [func(*args, **kwargs) for func, args, kwargs in self.items]
File "/usr/local/lib/python2.7/dist-packages/sklearn/model_selection/_validation.py", line 260, in _fit_and_score
test_score = _score(estimator, X_test, y_test, scorer)
File "/usr/local/lib/python2.7/dist-packages/sklearn/model_selection/_validation.py", line 288, in _score
score = scorer(estimator, X_test, y_test)
File "/usr/local/lib/python2.7/dist-packages/sklearn/metrics/scorer.py", line 91, in __call__
y_pred = estimator.predict(X)
File "/usr/local/lib/python2.7/dist-packages/sklearn/linear_model/base.py", line 336, in predict
scores = self.decision_function(X)
File "/usr/local/lib/python2.7/dist-packages/sklearn/linear_model/base.py", line 320, in decision_function
dense_output=True) + self.intercept_
File "/usr/local/lib/python2.7/dist-packages/sklearn/utils/extmath.py", line 189, in safe_sparse_dot
return fast_dot(a, b)
TypeError: Cannot cast array data from dtype([('f0', 'f8'), ('f1','f8')]) to dtype('float64') according to the rule 'safe'
I read the documentation: http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html
Thanks for your help.
EDIT: Shape of X and Y:
X = np.array(Xlist, np.dtype('float,float'))  # -> two floats as features
y = np.array(ylist, np.dtype('int'))          # -> label 0 or 1
example: X_train is
[[(0.0, 0.0) (3.85, 0.0)] [(3.6, 0.0) (2.45, 0.0)] [(1.1, 0.0) (1.35, 0.0)] [(3.7, 0.0) (1.85, 0.0)]]
Y_train is
[1 0 0 0 1 0 1 1]
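For reference, here is a minimal, self-contained sketch (the sample values below are hypothetical stand-ins for the real Xlist/ylist) showing what this construction actually produces:
import numpy as np

# Hypothetical stand-ins for the real Xlist / ylist
Xlist = [(0.0, 0.0), (3.85, 0.0), (3.6, 0.0), (2.45, 0.0)]
ylist = [1, 0, 0, 1]

X = np.array(Xlist, np.dtype('float,float'))  # structured array of (f0, f1) records
y = np.array(ylist, np.dtype('int'))

print(X.dtype)  # [('f0', '<f8'), ('f1', '<f8')] -- a structured dtype, not plain float64
print(X.shape)  # (4,) -- one record per sample, no separate feature axis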
Upvotes: 0
Views: 6374
Reputation: 226
OK, a friend of mine solved it:
I was using:
X = np.array(Xlist,np.dtype('float,float'))
y = np.array(ylist,np.dtype('int'))
and those arrays would not work with this estimator, even though they worked with these classifiers:
SVC(kernel='rbf'), SVC(kernel='linear'), SVC(kernel='poly'), KNeighborsClassifier(), DecisionTreeClassifier(), RandomForestClassifier()
So I just replaced those two lines with:
X = np.asarray(Xlist)
y = np.asarray(ylist)
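A quick way to confirm why this fixes the TypeError (again a sketch with hypothetical sample values standing in for the real data):
import numpy as np
from sklearn.linear_model import LogisticRegression

Xlist = [(0.0, 0.0), (3.85, 0.0), (3.6, 0.0), (2.45, 0.0)]  # hypothetical sample
ylist = [1, 0, 0, 1]

X = np.asarray(Xlist)  # plain float64 array of shape (n_samples, 2)
y = np.asarray(ylist)

print(X.dtype, X.shape)  # float64 (4, 2) -- what the liblinear solver expects
LogisticRegression(solver='liblinear').fit(X, y)  # no more dtype cast error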
Upvotes: 0
Reputation: 1382
Could it be that you entered the X data set as a list of tuples, (A, B), instead of a list of arrays, [A, B]?
I was able to run the following code with scikit-learn==0.18.1:
## Libraries
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import LogisticRegression
X = [[0.0, 0.0], [3.85, 0.0], [3.6, 0.0], [2.45, 0.0], [1.1, 0.0], [1.35, 0.0], [3.7, 0.0], [1.85, 0.0]]
y = [1, 0, 0, 0, 1, 0, 1, 1]
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, test_size=0.33, random_state=42)
tuned_parameters = {'C': [0.1, 0.5, 1, 5, 10, 50, 100]}
clf = GridSearchCV(LogisticRegression(solver='liblinear'), tuned_parameters, cv=3, scoring="accuracy")
clf.fit(X_train, y_train)
Note: I had to reduce the cv parameter of GridSearchCV because the data set is not large enough to be divided into 5 folds.
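As a follow-up, once the fit succeeds the standard GridSearchCV attributes let you inspect what was selected (this snippet just reuses the clf, X_test and y_test defined above):
print(clf.best_params_)           # the C value that won the grid search, e.g. {'C': 0.1}
print(clf.best_score_)            # mean cross-validated accuracy of that model
print(clf.score(X_test, y_test))  # accuracy of the refit model on the held-out split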
Upvotes: 1