I am evaluating text classification predictions with cross_val_score. I need to evaluate my predictions with the recall_score function, but with the parameter average='macro'. cross_val_score uses the default, average='binary', which doesn't work for my code. Is there any way to call recall_score with a different parameter, or to change the default to 'macro'?
results = model_selection.cross_val_score(estimator, X, Y, cv=kfold, scoring='recall')
You can just use "recall_macro" as the scoring parameter, like this:
results = model_selection.cross_val_score(estimator, X, Y, cv=kfold, scoring='recall_macro')
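For context, here is a minimal runnable sketch of the whole setup; the iris dataset, LogisticRegression estimator, and fold settings are placeholder choices, not taken from the question:

from sklearn import datasets, model_selection
from sklearn.linear_model import LogisticRegression

# Placeholder multiclass data and estimator, just to demonstrate the scoring string
X, Y = datasets.load_iris(return_X_y=True)
estimator = LogisticRegression(max_iter=1000)
kfold = model_selection.KFold(n_splits=5, shuffle=True, random_state=0)

# 'recall_macro' computes recall_score(..., average='macro') on each fold
results = model_selection.cross_val_score(estimator, X, Y, cv=kfold, scoring='recall_macro')
print(results.mean())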
According to the scikit-learn documentation on scoring metrics:
Scoring             Function                  Comment
'f1'                metrics.f1_score          for binary targets
'f1_micro'          metrics.f1_score          micro-averaged
'f1_macro'          metrics.f1_score          macro-averaged
'f1_weighted'       metrics.f1_score          weighted average
'f1_samples'        metrics.f1_score          by multilabel sample
'neg_log_loss'      metrics.log_loss          requires predict_proba support
'precision' etc.    metrics.precision_score   suffixes apply as with 'f1'
'recall' etc.       metrics.recall_score      suffixes apply as with 'f1'
As you can see, it specifies that the same suffixes apply to "recall", so "recall_macro" is a valid scoring string.
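If you want to double-check that a scoring string is recognized by your installed version, sklearn.metrics.get_scorer raises a ValueError for unknown names:

from sklearn.metrics import get_scorer

# Raises ValueError if 'recall_macro' were not a valid predefined scorer
get_scorer('recall_macro')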
Alternatively, you can use make_scorer, like this:
from sklearn.metrics import make_scorer, recall_score

# average can be 'macro', 'micro', 'weighted', etc., as listed in the table above
scorer = make_scorer(recall_score, pos_label=None, average='macro')
results = model_selection.cross_val_score(estimator, X, Y, cv=kfold, scoring=scorer)
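make_scorer is also the more flexible route: any keyword argument of recall_score is passed through, including combinations that have no predefined scoring string. As a hedged sketch, recall_score's labels parameter can restrict which classes enter the macro average (the class list [0, 2] here is purely illustrative):

# labels limits the macro average to a subset of classes; [0, 2] is an illustrative choice
scorer_subset = make_scorer(recall_score, average='macro', labels=[0, 2])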