RNA

Reputation: 153621

regression model evaluation using scikit-learn

I am doing regression with sklearn and use random grid search to evaluate different parameters. Here is a toy example:

from sklearn.datasets import make_regression
from sklearn.metrics import mean_squared_error, make_scorer
from scipy.stats import randint as sp_randint
from sklearn.ensemble import ExtraTreesRegressor
# note: in scikit-learn >= 0.18 these modules were replaced by sklearn.model_selection
from sklearn.cross_validation import LeaveOneOut
from sklearn.grid_search import GridSearchCV, RandomizedSearchCV
X, y = make_regression(n_samples=10,
                       n_features=10,
                       n_informative=3,
                       random_state=0,
                       shuffle=False)

clf = ExtraTreesRegressor(random_state=12)
param_dist = {"n_estimators": [5, 10],
              "max_depth": [3, None],
              "max_features": sp_randint(1, 11),
              "min_samples_split": sp_randint(1, 11),
              "min_samples_leaf": sp_randint(1, 11),
              "bootstrap": [True, False]}
rmse = make_scorer(mean_squared_error, greater_is_better=False)  # defined but never passed to the search below
r = RandomizedSearchCV(clf, param_distributions=param_dist,
                       cv=10,
                       scoring='mean_squared_error',
                       n_iter=3,
                       n_jobs=2)
r.fit(X, y)

My questions are:

1) Does RandomizedSearchCV use R2 as the scoring function by default? The documentation does not say what the default scoring function is for regression.

2) Even though I passed mean_squared_error as the scoring function in the code, why are the scores negative (shown below)? Mean squared error should always be positive. And when I then calculate r.score(X, y), it seems to report R2 again. The scores in all these contexts are very confusing to me.

In [677]: r.grid_scores_
Out[677]: 
[mean: -35.18642, std: 13.81538, params: {'bootstrap': True, 'min_samples_leaf': 9, 'n_estimators': 5, 'min_samples_split': 3, 'max_features': 3, 'max_depth': 3},
 mean: -15.07619, std: 6.77384, params: {'bootstrap': False, 'min_samples_leaf': 7, 'n_estimators': 10, 'min_samples_split': 10, 'max_features': 10, 'max_depth': None},
 mean: -17.91087, std: 8.97279, params: {'bootstrap': True, 'min_samples_leaf': 7, 'n_estimators': 10, 'min_samples_split': 7, 'max_features': 7, 'max_depth': None}]

In [678]: r.grid_scores_[0].cv_validation_scores
Out[678]: 
array([-37.74058826, -26.73444271, -36.15443525, -23.11874605,
       -33.60726519, -33.4821689 , -36.14897322, -43.80499446,
       -68.50480995, -12.97342433])

In [680]: r.score(X,y)
Out[680]: 0.87989839693054017

Upvotes: 3

Views: 6249

Answers (1)

Fred Foo

Reputation: 363817

  1. Just like GridSearchCV, RandomizedSearchCV uses the score method on the estimator by default. ExtraTreesRegressor and other regression estimators return the R² score from this method (classifiers return accuracy).

  2. The convention is that a score is something to maximize. Mean squared error is a loss function to minimize, so it's negated inside the search.
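The sign flip can be seen directly with make_scorer; a minimal sketch (the LinearRegression model and toy data here are just illustrative, not from the question):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import make_scorer, mean_squared_error

# Toy data with one outlier so the fit has a clearly nonzero error
X = np.arange(10, dtype=float).reshape(-1, 1)
y = np.array([0., 1., 2., 3., 4., 5., 6., 7., 8., 100.])
model = LinearRegression().fit(X, y)

# The raw metric is a loss: always non-negative
mse = mean_squared_error(y, model.predict(X))

# greater_is_better=False tells make_scorer to negate the loss, so that
# "bigger is better" holds uniformly inside grid/randomized search
neg_mse_scorer = make_scorer(mean_squared_error, greater_is_better=False)
score = neg_mse_scorer(model, X, y)

assert mse > 0 and score < 0
assert np.isclose(score, -mse)
```

This is exactly why the cv_validation_scores in the question are negative: they are negated MSE values, and the search picks the parameters with the largest (least negative) score.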

> And then when I calculate r.score(X,y), it seems reporting R2 again.

That's not pretty. It's arguably a bug.
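The two score methods can be checked side by side. A minimal sketch, assuming a modern scikit-learn (>= 0.18, where the search classes live in sklearn.model_selection, the metric string is "neg_mean_squared_error", and the fitted search's score method honors the scoring argument rather than falling back to the estimator's R2):

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import RandomizedSearchCV

X, y = make_regression(n_samples=100, n_features=10, random_state=0)

# A regressor's own .score method is R^2, identical to metrics.r2_score
est = ExtraTreesRegressor(n_estimators=10, random_state=12).fit(X, y)
assert est.score(X, y) == r2_score(y, est.predict(X))

# The fitted search's .score uses the scoring argument: negated MSE here
search = RandomizedSearchCV(
    ExtraTreesRegressor(random_state=12),
    param_distributions={"n_estimators": [5, 10]},
    n_iter=2, cv=3,
    scoring="neg_mean_squared_error",
    random_state=0,
)
search.fit(X, y)
assert search.score(X, y) <= 0  # a negated loss, not R^2
```

So in newer releases the confusion in the question no longer arises: search.score reports values on the same (negated) scale as the cross-validation scores.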

Upvotes: 3
