AndreasInfo

Reputation: 1227

MLPClassifier yields very different models from almost identical data

I am simulating soccer predictions using scikit-learn's MLPClassifier. Two training runs on almost identical data (the second contains 42 more rows out of 5466 total) with the same configuration (e.g. random_state) result in the statistics below (a simplified sketch of my pipeline follows after the two outputs):

2020-09-19 00:00:00
-------------------------------------------MLPClassifier--------------------------------------------
Fitting 3 folds for each of 27 candidates, totalling 81 fits

GridSearchCV:
Best score : 0.5179227897048015
Best params: {'classifier__alpha': 2.4, 'classifier__hidden_layer_sizes': [3, 3], 'preprocessor__num__scaling': StandardScaler(), 'selector': SelectFromModel(estimator=RandomForestClassifier(n_estimators=10,
                                                 random_state=42),
                threshold='2.1*median'), 'selector__threshold': '2.1*median'}

              precision    recall  f1-score   support

           A       0.59      0.57      0.58      1550
           D       0.09      0.47      0.15       244
           H       0.82      0.57      0.67      3143

    accuracy                           0.57      4937
   macro avg       0.50      0.54      0.47      4937
weighted avg       0.71      0.57      0.62      4937

2020-09-26 00:00:00
-------------------------------------------MLPClassifier--------------------------------------------
Fitting 3 folds for each of 27 candidates, totalling 81 fits

GridSearchCV:
Best score : 0.5253689104507783
Best params: {'classifier__alpha': 2.4, 'classifier__hidden_layer_sizes': [3, 3], 'preprocessor__num__scaling': StandardScaler(), 'selector': SelectFromModel(estimator=RandomForestClassifier(n_estimators=10,
                                                 random_state=42),
                threshold='1.6*median'), 'selector__threshold': '1.6*median'}

              precision    recall  f1-score   support

           A       0.62      0.57      0.59      1611
           D       0.00      0.00      0.00         0
           H       0.86      0.57      0.69      3336

    accuracy                           0.57      4947
   macro avg       0.49      0.38      0.43      4947
weighted avg       0.78      0.57      0.66      4947
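
Roughly, the setup looks like this. This is a simplified, self-contained sketch, not my actual code: the dummy data, column names, and most grid values are placeholders, only chosen so the grid has the same 27 candidates as in the output above.

import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder data standing in for the real match features and H/D/A outcomes.
rng = np.random.default_rng(42)
numeric_columns = [f"f{i}" for i in range(8)]
X = pd.DataFrame(rng.normal(size=(500, 8)), columns=numeric_columns)
signal = 1.5 * X["f0"] - 1.0 * X["f1"] + rng.normal(scale=0.5, size=500)
y = pd.Series(np.where(signal > 0.5, "H", np.where(signal < -0.5, "A", "D")))

preprocessor = ColumnTransformer(
    [("num", Pipeline([("scaling", StandardScaler())]), numeric_columns)]
)

pipeline = Pipeline([
    ("preprocessor", preprocessor),
    ("selector", SelectFromModel(
        RandomForestClassifier(n_estimators=10, random_state=42))),
    ("classifier", MLPClassifier(random_state=42, max_iter=2000)),
])

# 3 alphas x 3 layer layouts x 3 thresholds = 27 candidates, 3 folds = 81 fits.
param_grid = {
    "classifier__alpha": [0.6, 1.2, 2.4],
    "classifier__hidden_layer_sizes": [[3], [3, 3], [6, 3]],
    "preprocessor__num__scaling": [StandardScaler()],
    "selector__threshold": ["1.6*median", "2.1*median", "2.6*median"],
}

search = GridSearchCV(pipeline, param_grid, cv=3, verbose=1)
search.fit(X, y)
print("Best score :", search.best_score_)
print("Best params:", search.best_params_)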

How is it possible that one model never predicts D while the other one does? I am trying to understand what is going on here. I'm afraid posting the whole problem/code is not possible, so I am looking for a generic answer. I see this flip-flopping behaviour (D's <-> no D's) throughout 38 observations.
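
To verify which labels each fitted model actually emits, I count predictions on the hold-out set with a small helper like this (search and X_test stand for my fitted GridSearchCV and hold-out features, not code from above):

import numpy as np

def predicted_label_counts(model, X_test):
    # Map each predicted label to how often the model emits it.
    labels, counts = np.unique(model.predict(X_test), return_counts=True)
    return dict(zip(labels, counts))

# e.g. predicted_label_counts(search.best_estimator_, X_test)
# 2020-09-19 run: 'A', 'D' and 'H' all appear; 2020-09-26 run: no 'D' at all.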

Upvotes: 1

Views: 74

Answers (0)
