DataBach

Reputation: 1633

How to make use of evals_result in a LightGBM pipeline?

I use the LightGBM algorithm and have created a pipeline that looks much like the following:

import pandas as pd
from lightgbm import LGBMClassifier
from sklearn.pipeline import Pipeline

#model definition
model_lgbm = LGBMClassifier(
                #training loss
                objective='binary', # write a custom objective function that is cost sensitive
                n_estimators=params['n_estimators'],
                max_depth=params['max_depth'])

#pipeline instantiation using a previously defined feature engineering pipeline (it does scaling etc.)
model_pipeline_lgbm = Pipeline(steps=[('preprocessor', feature_pipe_lgbm),
                                      ('model_lgbm', model_lgbm),
                                     ])

#fit of feature pipeline and transformation of validation sets
feature_pipe_lgbm.fit(X_train, Y_train)

X_val_trans = feature_pipe_lgbm.transform(X_val)
X_train_trans = feature_pipe_lgbm.transform(X_train)

encoded_column_names = ['f{}'.format(i) for i in range(X_val_trans.shape[1])]
X_val_trans = pd.DataFrame(data=X_val_trans, columns=encoded_column_names, index=X_val.index)

X_train_trans = pd.DataFrame(
    data=X_train_trans, columns=encoded_column_names, index=X_train.index)

#definition of evaluation set and evaluation metric
eval_metric = "binary_logloss"
eval_set = [(X_train_trans, Y_train), (X_val_trans, Y_val)]

I then fit the pipeline and would like to store the evaluation result in a dictionary as shown in this repo:

evals_result = {}
model_pipeline_lgbm.fit(X=X_train,
                        y=Y_train,
                        model_lgbm__eval_set=eval_set,
                        # validation loss
                        model_lgbm__eval_metric=eval_metric, #same here, consider cost sensitivity
                        model_lgbm__early_stopping_rounds=params['early_stopping_patience'],
                        model_lgbm__evals_result=evals_result
                        )

However, I receive the following error:

TypeError: fit() got an unexpected keyword argument 'evals_result'

Do you know where I would need to define evals_result in my pipeline, so that I can use it to create plots?

Upvotes: 0

Views: 2206

Answers (2)

DataBach

Reputation: 1633

Thanks @Berriel, you gave me the missing piece of information. I was just not accessing the pipeline steps correctly. In the end this worked:

model_pipeline_lgbm.steps[1][1].evals_result_
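For the plots mentioned in the question, the stored history can then be iterated directly. Below is a minimal sketch using matplotlib; it assumes the fit call succeeded with the eval_set and "binary_logloss" metric defined in the question, and that LightGBM assigned its default eval-set names (typically 'valid_0', 'valid_1', or 'training', depending on version and how the sets were passed), which is why the keys are read dynamically:

import matplotlib.pyplot as plt

# evals_result_ maps eval-set name -> metric name -> list of per-round values
evals_result = model_pipeline_lgbm.steps[1][1].evals_result_

# one learning curve per evaluation set (e.g. train vs. validation)
for set_name, metrics in evals_result.items():
    plt.plot(metrics['binary_logloss'], label=set_name)

plt.xlabel('boosting round')
plt.ylabel('binary_logloss')
plt.legend()
plt.show()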

Upvotes: 0

Berriel

Reputation: 13601

You should be able to access it through the LGBMClassifier after the .fit call:

model_pipeline_lgbm.fit(...)

model_pipeline_lgbm.named_steps['model_lgbm'].evals_result_
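Note that Pipeline.steps is a plain list of (name, estimator) tuples, so a name-based lookup goes through named_steps; positional access such as steps[1][1], as in the answer above, works as well.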

Upvotes: 1
