eemamedo

Reputation: 327

Results from GridSearchCV/RandomizedSearchCV cannot be reproduced by running a single model using the same parameters

I am running RandomizedSearchCV with 5 folds in order to find the best parameters. I have a hold-out set (X_test) that I use for prediction. The relevant portion of my code is:

from sklearn.svm import SVC
from sklearn.model_selection import RandomizedSearchCV, TimeSeriesSplit
from sklearn.metrics import confusion_matrix, classification_report

svc = SVC(class_weight=class_weights, random_state=42)
Cs = [0.01, 0.1, 1, 10, 100, 1000, 10000]
gammas = [1e-1, 1e-2, 1e-3, 1e-4, 1e-5]

param_grid = {'C': Cs,
              'gamma': gammas,
              'kernel': ['linear', 'rbf', 'poly']}

my_cv = TimeSeriesSplit(n_splits=5).split(X_train)
rs_svm = RandomizedSearchCV(SVC(), param_grid, cv=my_cv, scoring='accuracy',
                            refit='accuracy', verbose=3, n_jobs=1, random_state=42)
rs_svm.fit(X_train, y_train)
y_pred = rs_svm.predict(X_test)
cm = confusion_matrix(y_test, y_pred)
clfreport = classification_report(y_test, y_pred)
print(rs_svm.best_params_)

The result is the following classification report: [image: Results after RS]

Now, I am interested in reproducing this result with a stand-alone model (no RandomizedSearchCV) using the selected parameters:

import numpy as np
from sklearn.model_selection import TimeSeriesSplit
from sklearn.utils.class_weight import compute_class_weight

tcsv = TimeSeriesSplit(n_splits=5)
for train_index, test_index in tcsv.split(X_train):
    train_index_ = int(train_index.shape[0])
    test_index_ = int(test_index.shape[0])
    X_train_, y_train_ = X_train[0:train_index_], y_train[0:train_index_]
    X_test_, y_test_ = X_train[test_index_:], y_train[test_index_:]
    class_weights = compute_class_weight('balanced', np.unique(y_train_), y_train_)
    class_weights = dict(enumerate(class_weights))
    svc = SVC(C=0.01, gamma=0.1, kernel='linear', class_weight=class_weights, verbose=True,
              random_state=42)
    svc.fit(X_train_, y_train_)

y_pred_ = svc.predict(X_test)
cm = confusion_matrix(y_test, y_pred_)
clfreport = classification_report(y_test, y_pred_)

In my understanding, the classification reports should be identical, but my results after this run are:

[image: Stand-Alone model]

Does anyone have any suggestions why that might be happening?

Upvotes: 0

Views: 1046

Answers (1)

desertnaut

Reputation: 60321

Given your 1st code snippet, where you use RandomizedSearchCV to find the best hyperparameters, you don't need to do any splitting again. In your 2nd snippet, you should simply fit with the found hyperparameters and the class weights on the whole of your training set, and then predict on your test set:

class_weights = compute_class_weight('balanced', np.unique(y_train), y_train)
class_weights = dict(enumerate(class_weights))
svc = SVC(C=0.01, gamma=0.1, kernel='linear', class_weight=class_weights, verbose=True, random_state=42)
svc.fit(X_train, y_train)

y_pred_ = svc.predict(X_test)
cm = confusion_matrix(y_test, y_pred_)
clfreport = classification_report(y_test, y_pred_)

The discussion in Order between using validation, training and test sets might be useful for clarifying the procedure...
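For completeness, here is a minimal end-to-end sketch of that procedure (search on the training set, automatic refit of the best estimator on the whole training set, a single evaluation on the hold-out set). The synthetic data and the reduced parameter grid below are illustrative assumptions, not taken from the question:

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import RandomizedSearchCV, TimeSeriesSplit
from sklearn.metrics import classification_report
from sklearn.utils.class_weight import compute_class_weight

# Synthetic data for illustration only
rng = np.random.RandomState(42)
X = rng.randn(200, 5)
y = rng.randint(0, 2, 200)
X_train, X_test, y_train, y_test = X[:150], X[150:], y[:150], y[150:]

# Class weights computed once on the full training set
class_weights = dict(enumerate(
    compute_class_weight('balanced', classes=np.unique(y_train), y=y_train)))

param_grid = {'C': [0.01, 1, 100], 'gamma': [1e-1, 1e-3], 'kernel': ['linear', 'rbf']}

# Search on the training set only; refit=True retrains the best estimator
# on the whole training set automatically.
search = RandomizedSearchCV(
    SVC(class_weight=class_weights, random_state=42),
    param_grid, n_iter=5, cv=TimeSeriesSplit(n_splits=5),
    scoring='accuracy', refit=True, random_state=42)
search.fit(X_train, y_train)

# search.predict uses the refitted best estimator, so fitting
# SVC(**search.best_params_, class_weight=class_weights, random_state=42)
# on the whole of X_train, y_train yourself gives the same test predictions.
print(search.best_params_)
print(classification_report(y_test, search.predict(X_test)))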

Upvotes: 4
