scls

Reputation: 17617

Differences between the F1-score values in sklearn.metrics.classification_report and sklearn.metrics.f1_score with a binary confusion matrix

I have ground-truth boolean values and predicted boolean values like:

y_true = np.array([True, True, False, False, False, True, False, True, True,
       False, True, False, False, False, False, False, True, False,
        True, True, True, True, False, False, False, True, False,
        True, False, False, False, False, True, True, False, False,
       False, True, True, True, True, False, False, False, False,
        True, False, False, False, False, False, False, False, False,
       False, True, True, False, True, False, True, True, True,
       False, False, True, False, True, False, False, True, False,
       False, False, False, False, False, False, False, True, False,
        True, True, True, True, False, False, True, False, True,
        True, False, True, False, True, False, False, True, True,
       False, False, True, True, False, False, False, False, False,
       False, True, True, False])

y_pred = np.array([False, False, False, False, False, True, False, False, True,
       False, True, False, False, False, False, False, False, False,
        True, True, True, True, False, False, False, False, False,
       False, False, False, False, False, True, False, False, False,
       False, True, False, False, False, False, False, False, False,
        True, False, False, False, False, False, False, False, False,
       False, True, False, False, False, False, False, False, False,
       False, False, True, False, False, False, False, True, False,
       False, False, False, False, False, False, False, True, False,
       False, True, False, False, False, False, True, False, True,
        True, False, False, False, True, False, False, True, True,
       False, False, True, True, False, False, False, False, False,
       False, True, False, False])

I'm using the following imports:

from sklearn.metrics import f1_score, classification_report, confusion_matrix

The confusion matrix looks like:

print(confusion_matrix(y_true, y_pred))

[[67  0]
 [21 24]]

I'm doing:

print("f1_score: %f" % f1_score(y_true, y_pred))
print(classification_report(y_true, y_pred))

I get:

f1_score: 0.695652
             precision    recall  f1-score   support

      False       0.76      1.00      0.86        67
       True       1.00      0.53      0.70        45

avg / total       0.86      0.81      0.80       112

I see four F1-score values (0.695652, 0.86, 0.70, 0.80). I wonder what the differences between them are and how each one is calculated.

Upvotes: 0

Views: 782

Answers (1)

Batuhan B

Reputation: 1855

I think 0.695652 and 0.70 are the same value, just rounded differently in the report. The scikit-learn f1_score documentation explains that in the default mode (average='binary'), f1_score returns the score of the positive class only, which here is the True class.
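
A quick sketch to check this (assuming pos_label accepts the boolean label directly, which it should since False compares equal to 0):

from sklearn.metrics import f1_score

# Default average='binary' reports the F1 of the positive class only (True here)
print(f1_score(y_true, y_pred))                   # 0.695652..., the True row (0.70)
# Scoring the other class instead should reproduce the False row
print(f1_score(y_true, y_pred, pos_label=False))  # ~0.86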

You can also reach the 0.86 value (the F1 score of the False class) with the F1 formula. The F1 score is defined as

F1 = 2 * (precision * recall) / (precision + recall)
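
For example, plugging each class's precision and recall (kept as exact fractions to avoid rounding error) into this formula reproduces both per-class F1 values from the report:

# True class:  precision = 24/24 = 1.00, recall = 24/45 ≈ 0.53
print(2 * (1.00 * 24.0/45) / (1.00 + 24.0/45))      # 0.695652... -> shown as 0.70
# False class: precision = 67/88 ≈ 0.76, recall = 67/67 = 1.00
print(2 * ((67.0/88) * 1.00) / ((67.0/88) + 1.00))  # 0.864516... -> shown as 0.86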

EDIT :

The confusion matrix is laid out like this (rows are true values, columns are predictions):

                    Prediction
                    FALSE | TRUE
True Value  FALSE    67      0
            TRUE     21      24

67 = True Negative, 0 = False Positive
21 = False Negative, 24 = True Positive
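
From these four counts, the per-class precision and recall shown in the report follow directly. A sketch (using confusion_matrix(...).ravel(), which for a binary problem unpacks in tn, fp, fn, tp order):

from sklearn.metrics import confusion_matrix

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()   # 67, 0, 21, 24

precision_true  = tp / float(tp + fp)   # 24/24 = 1.00
recall_true     = tp / float(tp + fn)   # 24/45 ≈ 0.53
precision_false = tn / float(tn + fn)   # 67/88 ≈ 0.76
recall_false    = tn / float(tn + fp)   # 67/67 = 1.00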

The avg / total row (0.80) is the support-weighted average of the two per-class F1 scores, using these values, as you said in the comment.
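
A minimal sketch of that weighted average, both by hand and via f1_score with average='weighted':

from sklearn.metrics import f1_score

# Support-weighted mean of the two per-class F1 scores (67 False, 45 True samples)
print((0.864516 * 67 + 0.695652 * 45) / 112.0)       # ≈ 0.7967, shown as 0.80
# The same value straight from sklearn
print(f1_score(y_true, y_pred, average='weighted'))  # ≈ 0.7967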

Upvotes: 2
