Reputation: 325
I have a three-class problem and I'm able to report precision and recall for each class with the code below:
from sklearn.metrics import classification_report
print(classification_report(y_test, y_pred))
which gives me the precision and recall nicely for each of the 3 classes in a table format.
My question is how can I now get sensitivity and specificity for each of the 3 classes? I looked at sklearn.metrics and I didn't find anything for reporting sensitivity and specificity.
Upvotes: 4
Views: 6572
Reputation: 138
Building on @StupidWolf's solution, to get all of the columns:
import numpy as np
import pandas as pd
from IPython.display import display  # Optional
from sklearn.metrics import precision_recall_fscore_support

# y_true, y_pred and the class labels come from your own data,
# e.g. classes = sorted(set(y_true))
res = []
for class_p in classes:
    prec, recall, fbeta_score, support = precision_recall_fscore_support(
        np.array(y_true) == class_p,
        np.array(y_pred) == class_p,
        pos_label=True,
        average=None,
    )
    res.append(
        [
            class_p,
            prec[1],
            recall[1],
            recall[0],  # recall of the negative class = specificity
            fbeta_score[1],
            support[1],
        ]
    )

df_res = pd.DataFrame(
    res,
    columns=[
        "class",
        "precision",
        "recall",
        "specificity",
        "f1-score",
        "support",
    ],
)
display(df_res)
Upvotes: 0
Reputation: 1
Classification report's output is a formatted string. This code snippet extracts the required values and stores them in a 2-D list. Note: to understand the code better, add print statements to inspect the intermediate values.
y = classification_report(y_test, y_pred)  # classification report's output is a string
lines = y.split('\n')  # extract every line and store in a list
res = []  # list to store the cleaned results
for i in range(len(lines)):
    line = lines[i].split(" ")  # values are separated by blanks; split at the blank spaces
    line = [j for j in line if j != '']  # keep only the non-empty tokens
    if len(line) != 0:
        # empty lines get added as empty lists; skip those
        res.append(line)
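As an aside, parsing the formatted string can be avoided entirely: `classification_report` accepts `output_dict=True` (scikit-learn 0.20+), which returns the same numbers as a nested dict keyed by class label. A minimal sketch, reusing the example labels from the answer below:

```python
from sklearn.metrics import classification_report

y_true = [0, 1, 2, 2, 2]
y_pred = [0, 0, 2, 2, 1]

# output_dict=True returns a nested dict instead of a formatted string;
# keys are the class labels as strings, plus 'accuracy', 'macro avg', etc.
report = classification_report(y_true, y_pred, output_dict=True)
print(report["0"]["recall"])  # per-class recall = sensitivity
```

From there the per-class recall (sensitivity) is a plain dict lookup; specificity still requires the binarization trick from the other answers.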
Upvotes: 0
Reputation: 46908
If we check the help page for classification report:
Note that in binary classification, recall of the positive class is also known as “sensitivity”; recall of the negative class is “specificity”.
So we can binarize the labels for every class in turn, and then use the recall results from precision_recall_fscore_support.
Using an example:
from sklearn.metrics import classification_report
y_true = [0, 1, 2, 2, 2]
y_pred = [0, 0, 2, 2, 1]
target_names = ['class 0', 'class 1', 'class 2']
print(classification_report(y_true, y_pred, target_names=target_names))
Looks like:
              precision    recall  f1-score   support

     class 0       0.50      1.00      0.67         1
     class 1       0.00      0.00      0.00         1
     class 2       1.00      0.67      0.80         3

    accuracy                           0.60         5
   macro avg       0.50      0.56      0.49         5
weighted avg       0.70      0.60      0.61         5
Using sklearn:
import numpy as np
from sklearn.metrics import precision_recall_fscore_support

res = []
for l in [0, 1, 2]:
    prec, recall, _, _ = precision_recall_fscore_support(np.array(y_true) == l,
                                                         np.array(y_pred) == l,
                                                         pos_label=True, average=None)
    # recall[1] = recall of the positive class (sensitivity),
    # recall[0] = recall of the negative class (specificity)
    res.append([l, recall[1], recall[0]])
put the results into a dataframe:
import pandas as pd
pd.DataFrame(res, columns=['class', 'sensitivity', 'specificity'])
   class  sensitivity  specificity
0      0     1.000000         0.75
1      1     0.000000         0.75
2      2     0.666667         1.00
Note that the sensitivity column matches the per-class recall in the classification report above, as expected.
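As a cross-check, the same per-class numbers can be read off `sklearn.metrics.multilabel_confusion_matrix` (available from scikit-learn 0.21), which returns one 2x2 confusion matrix per class:

```python
import numpy as np
from sklearn.metrics import multilabel_confusion_matrix

y_true = [0, 1, 2, 2, 2]
y_pred = [0, 0, 2, 2, 1]

rows = []
# multilabel_confusion_matrix returns one 2x2 matrix [[tn, fp], [fn, tp]] per class
for label, m in zip([0, 1, 2], multilabel_confusion_matrix(y_true, y_pred)):
    tn, fp, fn, tp = m.ravel()
    rows.append([label, tp / (tp + fn), tn / (tn + fp)])  # sensitivity, specificity

print(rows)
```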
Upvotes: 8