Fluxy

Reputation: 2978

Hyperparameter optimization gives worse results

I trained my random forest classifier as follows:

from sklearn.ensemble import RandomForestClassifier
from sklearn import metrics

rf = RandomForestClassifier(n_jobs=-1, max_depth=None, max_features="auto",
                            min_samples_leaf=1, min_samples_split=2,
                            n_estimators=1000, oob_score=True, class_weight="balanced",
                            random_state=0)

rf.fit(X_train, y_train)
y_pred = rf.predict(X_test)

print("Confusion matrix")
print(metrics.confusion_matrix(y_test, y_pred))
print("F1-score")
print(metrics.f1_score(y_test, y_pred, average="weighted"))
print("Accuracy")
print(metrics.accuracy_score(y_test, y_pred))
print(metrics.classification_report(y_test, y_pred))

and got the following results:

Confusion matrix
[[558  42   2   0   1]
 [ 67 399  84   3   2]
 [ 30 135 325  48   7]
 [  5  69  81 361  54]
 [  8  17   7  48 457]]
F1-score
0.7459670332027826
Accuracy
0.7473309608540926
              precision    recall  f1-score   support

           1       0.84      0.93      0.88       603
           2       0.60      0.72      0.66       555
           3       0.65      0.60      0.62       545
           4       0.78      0.63      0.70       570
           5       0.88      0.85      0.86       537

Then I decided to perform hyperparameter optimization in order to improve on this result.

from sklearn.model_selection import GridSearchCV, StratifiedKFold

clf = RandomForestClassifier(random_state=0, n_jobs=-1)
param_grid = {
    'n_estimators': [1000, 2000],
    'max_features': [0.2, 0.5, 0.7, 'auto'],
    'max_depth': [None, 10],
    'min_samples_leaf': [1, 2, 3, 5],
    'min_samples_split': [0.1, 0.2]
}

k_fold = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
clf = GridSearchCV(estimator=clf,
                   param_grid=param_grid,
                   cv=k_fold,
                   scoring='accuracy',
                   verbose=True)

clf.fit(X_train, y_train)

But it gave me worse results when I do y_pred = clf.best_estimator_.predict(X_test):

Confusion matrix
[[533  68   0   0   2]
 [145 312  70   0  28]
 [ 58 129 284  35  39]
 [ 21  68  73 287 121]
 [ 32  12   3  36 454]]
F1-score
0.6574507466273805
Accuracy
0.6654804270462633
              precision    recall  f1-score   support

           1       0.68      0.88      0.77       603
           2       0.53      0.56      0.55       555
           3       0.66      0.52      0.58       545
           4       0.80      0.50      0.62       570
           5       0.70      0.85      0.77       537

I assume this is happening because of scoring='accuracy'. Which scoring should I use to get the same or better results as with my initial random forest?

Upvotes: 2

Views: 854

Answers (1)

MaximeKan

Reputation: 4211

Setting scoring='accuracy' in your grid search should not be responsible for this difference, because accuracy is the default scoring for a classifier anyway.
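
For reference, reusing your param_grid and k_fold, the two searches below rank the candidates identically (rf_base, grid_default and grid_accuracy are just names for this sketch): when scoring is omitted, GridSearchCV falls back to the classifier's own score method, which is mean accuracy.

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

rf_base = RandomForestClassifier(random_state=0, n_jobs=-1)

# scoring omitted: GridSearchCV uses rf_base.score, i.e. mean accuracy
grid_default = GridSearchCV(estimator=rf_base, param_grid=param_grid, cv=k_fold)

# scoring set explicitly, as in your code: same ranking of candidates
grid_accuracy = GridSearchCV(estimator=rf_base, param_grid=param_grid, cv=k_fold,
                             scoring='accuracy')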

The reason why you have an unexpected difference here is that you specified class_weight="balanced" in your first random forest rf, but not in the second classifier clf. As a result, your classes are weighted differently while the trees are being fit, which eventually leads to different predictions and performance metrics.
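
Concretely, "balanced" reweights each class inversely to its frequency in y_train. You can inspect the weights that would be applied with sklearn's helper (a small sketch, nothing here is specific to your data):

import numpy as np
from sklearn.utils.class_weight import compute_class_weight

# "balanced" uses n_samples / (n_classes * count_of_class), so rarer
# classes get larger weights during tree fitting
weights = compute_class_weight(class_weight="balanced",
                               classes=np.unique(y_train),
                               y=y_train)
print(dict(zip(np.unique(y_train), weights)))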

To correct this, just define clf as:

clf = RandomForestClassifier(random_state=0, n_jobs=-1, class_weight="balanced")
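
Alternatively, if you would rather let the grid search compare the two weighting schemes itself, class_weight can be added to the grid like any other hyperparameter (the extended param_grid below is just a sketch of that idea):

param_grid = {
    'n_estimators': [1000, 2000],
    'max_features': [0.2, 0.5, 0.7, 'auto'],
    'max_depth': [None, 10],
    'min_samples_leaf': [1, 2, 3, 5],
    'min_samples_split': [0.1, 0.2],
    'class_weight': ['balanced', None]   # let the search compare both settings
}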

Upvotes: 1
