Filip

Reputation: 806

Specificity at different thresholds (in the same way as sklearn.metrics.precision_recall_curve)

I would like to get specificities at different thresholds, in the same way that precisions and recalls are given by precision_recall_curve.

from sklearn.metrics import precision_recall_curve
precisions, recalls, thresholds = precision_recall_curve(
    ground_truth,
    predictions,
)

How can I achieve that?

Upvotes: 1

Views: 737

Answers (1)

Filip

Reputation: 806

So, I looked at the source code for sklearn.metrics.precision_recall_curve (https://github.com/scikit-learn/scikit-learn/blob/2e90b897768fd360ef855cb46e0b37f2b6faaf72/sklearn/metrics/_ranking.py) and altered it to fit my needs.

import numpy as np
# NOTE: _binary_clf_curve is a private helper; its module has moved between
# scikit-learn versions (sklearn.metrics.ranking in older releases,
# sklearn.metrics._ranking in newer ones).
from sklearn.metrics._ranking import _binary_clf_curve

def specificity_sensitivity_curve(y_true, probas_pred):
    """
    Compute specificity-sensitivity pairs for different probability thresholds.
    For reference, see 'precision_recall_curve'
    """
    fps, tps, thresholds = _binary_clf_curve(y_true, probas_pred)
    # tps[-1] is the total number of positives, fps[-1] the total number of negatives
    sensitivity = tps / tps[-1]              # TP / (TP + FN), i.e. recall
    specificity = (fps[-1] - fps) / fps[-1]  # TN / (TN + FP)
    # Stop once full sensitivity is reached, then reverse, so the returned
    # arrays end with specificity 1 / sensitivity 0 (mirroring how
    # precision_recall_curve appends precision 1 / recall 0).
    last_ind = tps.searchsorted(tps[-1])
    sl = slice(last_ind, None, -1)
    return np.r_[specificity[sl], 1], np.r_[sensitivity[sl], 0], thresholds[sl]
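If you would rather avoid the private `_binary_clf_curve` helper, the same quantities can be obtained from the public `sklearn.metrics.roc_curve`, since specificity is simply 1 − FPR. A minimal sketch with made-up toy data (note the ordering differs: `roc_curve` lists thresholds in decreasing order, whereas the function above mimics `precision_recall_curve`'s increasing order):

```python
import numpy as np
from sklearn.metrics import roc_curve

# Toy data, purely illustrative
ground_truth = np.array([0, 0, 1, 1])
predictions = np.array([0.1, 0.4, 0.35, 0.8])

fpr, tpr, thresholds = roc_curve(ground_truth, predictions)
specificity = 1 - fpr  # specificity = TN / (TN + FP) = 1 - FPR
sensitivity = tpr      # sensitivity is exactly the TPR

for s, r, t in zip(specificity, sensitivity, thresholds):
    print(f"threshold={t:.2f}  specificity={s:.2f}  sensitivity={r:.2f}")
```

The first threshold returned by `roc_curve` is a sentinel above the maximum score, where everything is predicted negative (specificity 1, sensitivity 0), and the last is the minimum score, where everything is predicted positive (specificity 0, sensitivity 1).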

Upvotes: 2
