Reputation: 103
My goal is to calculate the AUC, specificity and sensitivity with a 95% CI from a 5x10 repeated stratified k-fold CV. I also need the specificity and sensitivity at a threshold of 0.4 in order to maximize the sensitivity.
So far I was able to implement it for the AUC. Code below:
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, RepeatedStratifiedKFold, cross_val_score

seed = 42
# Grid search over regularization strength, intercept and penalty
fit_intercept = [True, False]
C = np.arange(1, 41, 1)   # flat array of candidate values, not a nested list
penalty = ['l1', 'l2']
params = dict(C=C, fit_intercept=fit_intercept, penalty=penalty)
print(params)
logreg = LogisticRegression(random_state=seed)
# instantiate the grid
logreg_grid = GridSearchCV(logreg, param_grid=params, cv=5, scoring='roc_auc', iid=False)
# fit the grid with data
logreg_grid.fit(X_train, y_train)
logreg = logreg_grid.best_estimator_
# 5x10 repeated stratified CV: 50 AUC scores in total
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=seed)
logreg_scores = cross_val_score(logreg, X_train, y_train, cv=cv, scoring='roc_auc')
print('LogReg:', logreg_scores.mean())
import scipy.stats

def mean_confidence_interval(data, confidence=0.95):
    a = 1.0 * np.array(data)
    n = len(a)
    m, se = np.mean(a), scipy.stats.sem(a)   # mean and standard error of the mean
    h = se * scipy.stats.t.ppf((1 + confidence) / 2, n - 1)   # half-width from the t critical value
    return m, m - h, m + h

mean_confidence_interval(logreg_scores, confidence=0.95)
Output: (0.7964761904761904, 0.7675441789148183, 0.8254082020375626)
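As a cross-check (my addition, not from the original post), the same t-based interval can be obtained directly from scipy.stats.t.interval; a minimal sketch, assuming the logreg_scores array from above (50 AUC values, one per fold):

import numpy as np
import scipy.stats

m = np.mean(logreg_scores)            # mean AUC over the 5x10 = 50 folds
se = scipy.stats.sem(logreg_scores)   # standard error of the mean
lower, upper = scipy.stats.t.interval(0.95, len(logreg_scores) - 1, loc=m, scale=se)
print(m, lower, upper)                # should match mean_confidence_interval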
I am really satisfied so far, but how can I implement this for the predicted probabilities, so that I can calculate the FPR, TPR and thresholds? For a simple 5-fold CV I would do it like this:
from sklearn.model_selection import cross_val_predict
from sklearn import metrics

def evaluate_threshold(threshold):
    # sensitivity = TPR, specificity = 1 - FPR at the given probability threshold
    print('Sensitivity(', threshold, '):', tpr[thresholds > threshold][-1])
    print('Specificity(', threshold, '):', 1 - fpr[thresholds > threshold][-1])

logreg_proba = cross_val_predict(logreg, X_train, y_train, cv=5, method='predict_proba')
fpr, tpr, thresholds = metrics.roc_curve(y_train, logreg_proba[:, 1])
evaluate_threshold(0.5)
evaluate_threshold(0.4)
#Output would be:
#Sensitivity( 0.5 ): 0.76
#Specificity( 0.5 ): 0.7096774193548387
#Sensitivity( 0.4 ): 0.88
#Specificity( 0.4 ): 0.6129032258064516
If I try it this way with the 5*10 CV:
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=seed)
y_pred = cross_val_predict(logreg, X_train, y_train, cv=cv, method='predict_proba')
fpr, tpr, thresholds = metrics.roc_curve(y_train, y_pred[:, 1])
evaluate_threshold(0.5)
evaluate_threshold(0.4)
it throws an error:
cross_val_predict only works for partitions
Can you help me to solve this please?
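For context (my addition, not part of the original question): cross_val_predict requires every sample to fall into exactly one test fold, but with RepeatedStratifiedKFold each sample is tested once per repeat, so no single out-of-fold prediction vector can be assembled. A minimal sketch of collecting the probabilities repeat by repeat with a manual loop over cv.split (the names oof_proba, fold and repeat are my own; it assumes X_train and y_train are NumPy arrays, use .iloc for DataFrames):

import numpy as np
from sklearn.base import clone
from sklearn.model_selection import RepeatedStratifiedKFold

cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=seed)

# One out-of-fold probability vector per repeat: cv.split yields the 5 folds of
# repeat 0 first, then the 5 folds of repeat 1, and so on.
oof_proba = np.zeros((10, len(y_train)))
for fold, (train_idx, test_idx) in enumerate(cv.split(X_train, y_train)):
    repeat = fold // 5
    model = clone(logreg).fit(X_train[train_idx], y_train[train_idx])
    oof_proba[repeat, test_idx] = model.predict_proba(X_train[test_idx])[:, 1]

Each row of oof_proba is then a complete prediction vector, so metrics.roc_curve and evaluate_threshold can be applied per repeat, and the resulting sensitivities and specificities can be passed to mean_confidence_interval.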
Upvotes: 1
Views: 509
Reputation: 103
This is what I have tried:
for i in range(10):
    cv = StratifiedKFold(n_splits=5, random_state=i)
    y_pred = cross_val_predict(logreg, X_train, y_train, cv=cv, method='predict_proba')
    fpr, tpr, thresholds = metrics.roc_curve(y_train, y_pred[:, 1])
    evaluate_threshold(0.5)
Out:
Sensitivity( 0.5 ): 0.84
Specificity( 0.5 ): 0.6451612903225806
Sensitivity( 0.5 ): 0.84
Specificity( 0.5 ): 0.6451612903225806
Sensitivity( 0.5 ): 0.84
Specificity( 0.5 ): 0.6451612903225806
and so on....
Unfortunately the output is always the same, which is not what I would expect when using RepeatedStratifiedKFold.
Maybe someone can give me some advice?
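One thing worth checking (my note, not part of the original post): StratifiedKFold only uses random_state when shuffle=True; without shuffling it produces identical splits on every iteration, which would explain the repeated output. A minimal sketch of the same loop with shuffling enabled:

from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn import metrics

for i in range(10):
    # random_state only has an effect when shuffle=True
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=i)
    y_pred = cross_val_predict(logreg, X_train, y_train, cv=cv, method='predict_proba')
    fpr, tpr, thresholds = metrics.roc_curve(y_train, y_pred[:, 1])
    evaluate_threshold(0.5)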
Upvotes: 1