Pratik Hemani

Reputation: 67

Confusion Matrix Python

I have been trying to learn logistic regression in Python, and I want a way to evaluate my model via a confusion matrix. Since I'm new to Python, I don't know how to do it. A Google search showed me how to create the matrix itself, but I want more statistical inferences than just the counts. In R, the "caret" package has a function called confusionMatrix which provides a lot of useful information. For example:

Code:

library(caret)
x <- c(1,0,1,1,0,1,1,0,0,1)
y <- c(0,1,1,1,0,1,0,0,0,0)
x <- as.factor(x)
y <- as.factor(y)
confusionMatrix(x,y)

Output:

Confusion Matrix and Statistics

          Reference
Prediction 0 1
         0 3 1
         1 3 3

               Accuracy : 0.6             
                 95% CI : (0.2624, 0.8784)
    No Information Rate : 0.6             
    P-Value [Acc > NIR] : 0.6331          

                  Kappa : 0.2308          

 Mcnemar's Test P-Value : 0.6171          

            Sensitivity : 0.500           
            Specificity : 0.750           
         Pos Pred Value : 0.750           
         Neg Pred Value : 0.500           
             Prevalence : 0.600           
         Detection Rate : 0.300           
   Detection Prevalence : 0.400           
      Balanced Accuracy : 0.625           

       'Positive' Class : 0 

Is there a way to create similar output in Python? Also, I need a way to plot the ROC curve. Please help; I'm new to Python.

Upvotes: 2

Views: 1450

Answers (1)

Sy Ker

Reputation: 2180

1. I use this code to plot a confusion matrix with scikit-learn:

from sklearn.metrics import confusion_matrix
from matplotlib import pyplot as plt
import matplotlib.ticker as ticker

# make predictions (replace clf with your trained classifier)
y_pred = clf.predict(X_test)

# create the confusion matrix
conf_mat = confusion_matrix(y_true=y_test, y_pred=y_pred)

# create the axis to plot onto 
fig = plt.figure()
ax = fig.add_subplot(111)

# plot the matrix
cax = ax.matshow(conf_mat, cmap=plt.cm.Blues)
fig.colorbar(cax)

# labels 
plt.xlabel('Predicted')
plt.ylabel('Expected')

plt.show()
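If you also want the summary statistics that caret prints (Kappa, Sensitivity, Specificity, the predictive values), most of them can be read off the 2x2 matrix or computed with the corresponding scikit-learn helpers. A minimal sketch, assuming the binary y_test / y_pred from above and treating class 1 as the positive class (note that caret used class 0 as positive in the output in the question):

from sklearn.metrics import confusion_matrix, accuracy_score, cohen_kappa_score

# for binary 0/1 labels, ravel() returns tn, fp, fn, tp
tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()

accuracy    = accuracy_score(y_test, y_pred)
kappa       = cohen_kappa_score(y_test, y_pred)   # caret's "Kappa"
sensitivity = tp / (tp + fn)                      # recall of the positive class
specificity = tn / (tn + fp)
ppv         = tp / (tp + fp)                      # Pos Pred Value
npv         = tn / (tn + fn)                      # Neg Pred Value

print(f"Accuracy:    {accuracy:.3f}")
print(f"Kappa:       {kappa:.3f}")
print(f"Sensitivity: {sensitivity:.3f}")
print(f"Specificity: {specificity:.3f}")
print(f"PPV / NPV:   {ppv:.3f} / {npv:.3f}")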

2. For the ROC curve, you need a classifier with a decision function. From the scikit-learn documentation:

# calculate ROC scores for all classes
from sklearn.metrics import roc_curve, auc

# assumes y_train / y_test have been label-binarized (one column per class)
# and n_classes holds the number of classes
y_score = classifier.fit(X_train, y_train).decision_function(X_test)

# Compute ROC curve and ROC area for each class
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(n_classes):
    fpr[i], tpr[i], _ = roc_curve(y_test[:, i], y_score[:, i])
    roc_auc[i] = auc(fpr[i], tpr[i])

# Compute micro-average ROC curve and ROC area
fpr["micro"], tpr["micro"], _ = roc_curve(y_test.ravel(), y_score.ravel())
roc_auc["micro"] = auc(fpr["micro"], tpr["micro"])

Note:

  • fpr: contains the false positive rates

  • tpr: contains the true positive rates
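If your classifier does not have a decision_function (or for a plain binary problem like the logistic regression in the question), you can feed predicted probabilities to roc_curve instead. A minimal sketch, assuming a fitted binary classifier clf and 0/1 labels:

from sklearn.metrics import roc_curve, auc

# probability of the positive class is the second column of predict_proba
y_prob = clf.predict_proba(X_test)[:, 1]

fpr_bin, tpr_bin, thresholds = roc_curve(y_test, y_prob)
roc_auc_bin = auc(fpr_bin, tpr_bin)
print("AUC:", roc_auc_bin)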

Then plot what you have found; for example, for a single class:

# plot of a ROC curve for a specific class

plt.figure()
lw = 2
plt.plot(fpr[2], tpr[2], color='darkorange',
         lw=lw, label='ROC curve (area = %0.2f)' % roc_auc[2])
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic example')
plt.legend(loc="lower right")
plt.show()
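For a single binary classifier there is also a one-call shortcut in recent scikit-learn versions (assuming 1.0 or newer), which computes and draws the curve directly from a fitted estimator:

from sklearn.metrics import RocCurveDisplay
from matplotlib import pyplot as plt

# clf is the fitted classifier from above
RocCurveDisplay.from_estimator(clf, X_test, y_test)
plt.plot([0, 1], [0, 1], color='navy', linestyle='--')  # chance line
plt.show()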

3. Classification report:

from sklearn.metrics import classification_report

print(classification_report(y_test, y_pred))

This is an example report for 11 classes (0 to 10):

                precision    recall  f1-score   support

           0       0.42      0.64      0.51     76061
           1       0.00      0.34      0.01       450
           2       0.40      0.65      0.50     15627
           3       0.24      0.50      0.32     69567
           4       0.12      0.63      0.21      4839
           5       0.04      0.48      0.07      2648
           6       0.26      0.49      0.34     44727
           7       0.57      0.55      0.56    189774
           8       0.44      0.66      0.53     66019
           9       0.14      0.64      0.23       810
          10       0.47      0.61      0.53     85557

    accuracy                           0.44   2367204
   macro avg       0.31      0.54      0.35   2367204
weighted avg       0.57      0.44      0.47   2367204

Note:

  • precision = positive predictive value (the fraction of predictions for a class that are correct)

  • recall = sensitivity

  • f1_score = harmonic mean of the precision and recall

  • support = the number of true samples of each class (this shows the class imbalance)
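If you only need one or two of these numbers rather than the whole report, they can also be computed individually. A minimal sketch using the toy labels from the question (x as the predictions, y as the reference):

from sklearn.metrics import precision_score, recall_score, f1_score, classification_report

y_true = [0, 1, 1, 1, 0, 1, 0, 0, 0, 0]   # "y" from the R example (reference)
y_pred = [1, 0, 1, 1, 0, 1, 1, 0, 0, 1]   # "x" from the R example (prediction)

print(precision_score(y_true, y_pred))    # positive predictive value for class 1
print(recall_score(y_true, y_pred))       # sensitivity for class 1
print(f1_score(y_true, y_pred))
print(classification_report(y_true, y_pred))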

Upvotes: 3
