SuperTardigrade

Reputation: 105

Grid search giving NaN values for AUC score

I am trying to run a grid search on a random forest classifier, using AUC as the scoring metric.

Here is my code:

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.metrics import make_scorer, roc_auc_score

estimator = RandomForestClassifier()
scoring = {'auc': make_scorer(roc_auc_score, multi_class="ovr")}
kfold = RepeatedStratifiedKFold(n_splits=3, n_repeats=10, random_state=42)

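# param_grid, X and y are defined earlier in my code; placeholder example:
# param_grid = {'n_estimators': [100, 200], 'max_depth': [None, 10]}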
grid_search = GridSearchCV(estimator=estimator, param_grid=param_grid,
                           cv=kfold, n_jobs=-1, scoring=scoring)
grid_search.fit(X, y)

However, when I run this I get NaN values for the AUC scores, along with the following warning:

/opt/conda/lib/python3.7/site-packages/sklearn/model_selection/_validation.py:687: UserWarning: Scoring failed. The score on this train-test partition for these parameters will be set to nan. Details: 
Traceback (most recent call last):
  File "/opt/conda/lib/python3.7/site-packages/sklearn/model_selection/_validation.py", line 674, in _score
    scores = scorer(estimator, X_test, y_test)
  File "/opt/conda/lib/python3.7/site-packages/sklearn/metrics/_scorer.py", line 88, in __call__
    *args, **kwargs)
  File "/opt/conda/lib/python3.7/site-packages/sklearn/metrics/_scorer.py", line 243, in _score
    **self._kwargs)
  File "/opt/conda/lib/python3.7/site-packages/sklearn/utils/validation.py", line 63, in inner_f
    return f(*args, **kwargs)
  File "/opt/conda/lib/python3.7/site-packages/sklearn/metrics/_ranking.py", line 538, in roc_auc_score
    multi_class, average, sample_weight)
  File "/opt/conda/lib/python3.7/site-packages/sklearn/metrics/_ranking.py", line 595, in _multiclass_roc_auc_score
    if not np.allclose(1, y_score.sum(axis=1)):
  File "/opt/conda/lib/python3.7/site-packages/numpy/core/_methods.py", line 47, in _sum
    return umr_sum(a, axis, dtype, out, keepdims, initial, where)
numpy.AxisError: axis 1 is out of bounds for array of dimension 1

  UserWarning,
/opt/conda/lib/python3.7/site-packages/sklearn/model_selection/_search.py:921: UserWarning: One or more of the test scores are non-finite: [nan nan nan ... nan nan nan]
  category=UserWarning
/opt/conda/lib/python3.7/site-packages/sklearn/model_selection/_search.py:921: UserWarning: One or more of the train scores are non-finite: [nan nan nan ... nan nan nan]
  category=UserWarning

Upvotes: 1

Views: 2021

Answers (1)

SuperTardigrade

Reputation: 105

I figured it out. I needed to set needs_proba=True in the make_scorer call, so that the grid search doesn't try to compute the AUC score directly from the (categorical) class predictions of my estimator. Without it, the scorer passes a 1-D array of predicted labels to roc_auc_score, which for multiclass AUC expects an (n_samples, n_classes) probability array; that is why it fails with "axis 1 is out of bounds for array of dimension 1".

scoring = {'auc': make_scorer(roc_auc_score, needs_proba=True, multi_class="ovr")}
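
For completeness, here is a minimal self-contained sketch of the working setup on a toy dataset. The iris data, the param_grid values, and the CV settings below are placeholders for illustration, not the ones from my original code:

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, RepeatedStratifiedKFold
from sklearn.metrics import make_scorer, roc_auc_score

# toy multiclass data (placeholder for my real X, y)
X, y = load_iris(return_X_y=True)

# needs_proba=True makes the scorer call predict_proba(), so roc_auc_score
# receives an (n_samples, n_classes) probability array instead of 1-D labels
scoring = {'auc': make_scorer(roc_auc_score, needs_proba=True, multi_class="ovr")}

param_grid = {'n_estimators': [50, 100]}  # placeholder grid
kfold = RepeatedStratifiedKFold(n_splits=3, n_repeats=2, random_state=42)

# with a dict of scorers, refit must name the metric used for the final fit
grid_search = GridSearchCV(RandomForestClassifier(random_state=42),
                           param_grid=param_grid, cv=kfold,
                           scoring=scoring, refit='auc', n_jobs=-1)
grid_search.fit(X, y)
print(grid_search.best_params_, grid_search.best_score_)

One caveat: newer scikit-learn releases deprecate needs_proba in favor of response_method="predict_proba" in make_scorer, so the scorer definition may need updating there.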

Upvotes: 2
