pittnerf

Reputation: 801

GradientBoostingClassifier training loss increasing and not converging

I am trying to find a model for my multi-class classification problem. I have a training set of 150k records, X_train.shape = (150000, 89) and y_train.shape = (150000,), with 462 distinct integer class labels. I wanted to try sklearn.ensemble.GradientBoostingClassifier to see how it performs. The problem is that the training loss is increasing rather than decreasing:

Starting Learning rate:  0.01
      Iter       Train Loss   Remaining Time 
         1      560305.4652         4495.28m
         2 49997116709991915540048202694656.0000         4821.85m
         3 83239558948150798998862338330957347606091880446602191149465600.0000         4930.27m
         4 83239558948150798998862338330957347606091880446602191149465600.0000         4930.59m
         5 83239558948150798998862338330957347606091880446602191149465600.0000         4894.59m
         6 528425156187558281292347469394171433826548228598829759650220334971581416568393759237556439905294529429284743947837505536.0000         4873.90m
         7 528425156187558281292347469394171433826548228598829759650220334971581416568393759237556439905294529429284743947837505536.0000         4867.15m
         8 528425156187558281292347469394171433826548228598829759650220334971581416568393759237556439905294529429284743947837505536.0000         4860.32m
...

What am I doing wrong here? My code:

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Load the preprocessed training and validation splits
X_train = np.load("X_train_automl.npy")
X_val = np.load("X_val_automl.npy")
y_train = np.load("Y_train_automl.npy").astype(int)
y_val = np.load("Y_val_automl.npy").astype(int)

# Try a range of learning rates and compare train/validation accuracy
lr_list = [0.01, 0.05, 0.1, 0.25, 0.5, 0.75, 1]
for learning_rate in lr_list:
    print("Starting learning rate: ", learning_rate)
    gb_clf = GradientBoostingClassifier(learning_rate=learning_rate, max_features="auto",
                                        verbose=2, max_depth=5, n_estimators=500)
    gb_clf.fit(X_train, y_train)

    print("Learning rate: ", learning_rate)
    print("Accuracy score (training): {0:.3f}".format(gb_clf.score(X_train, y_train)))
    print("Accuracy score (validation): {0:.3f}".format(gb_clf.score(X_val, y_val)))

Upvotes: 0

Views: 541

Answers (1)

pittnerf

Reputation: 801

I found that decreasing the learning rate from 0.01 to 0.001 made the loss start to decrease, so it seems the learning rate was simply too high: with too large a step size, each boosting update overshoots and the training deviance diverges instead of converging.
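For reference, here is a minimal sketch of the configuration that converged for me, assuming the same .npy arrays as in the question. The n_iter_no_change / validation_fraction early-stopping arguments are an optional extra I am adding here (standard GradientBoostingClassifier parameters since scikit-learn 0.20), not part of my original run, and max_features="sqrt" is the explicit equivalent of the classifier's old "auto" setting:

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

X_train = np.load("X_train_automl.npy")
X_val = np.load("X_val_automl.npy")
y_train = np.load("Y_train_automl.npy").astype(int)
y_val = np.load("Y_val_automl.npy").astype(int)

# learning_rate=0.001 instead of 0.01: small enough that each boosting
# step no longer overshoots, so the training deviance actually decreases.
gb_clf = GradientBoostingClassifier(
    learning_rate=0.001,
    n_estimators=500,
    max_depth=5,
    max_features="sqrt",      # explicit form of the deprecated "auto" for classifiers
    n_iter_no_change=10,      # optional safeguard: stop early when the score
    validation_fraction=0.1,  # on an internal validation split stops improving
    verbose=2,
)
gb_clf.fit(X_train, y_train)
print("Accuracy score (training): {0:.3f}".format(gb_clf.score(X_train, y_train)))
print("Accuracy score (validation): {0:.3f}".format(gb_clf.score(X_val, y_val)))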

Upvotes: 3
