sectechguy

Reputation: 2117

How to view the rows marked as False Positive and False Negative from a confusion matrix

I have created a very simple Artificial Neural Network in Python. In the example below I compute an accuracy from the values in a confusion matrix. These are the results of the confusion matrix:

array([[3990,    2],
       [  56,  172]])

I am interested in finding the rows that were marked as false positives (2) and false negatives (56).
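
For reference, sklearn's confusion_matrix puts the true labels on the rows and the predicted labels on the columns, so with labels 0 and 1 the four counts unpack as in this small sketch:

import numpy as np

#Rows = true labels, columns = predicted labels
cm = np.array([[3990,    2],
               [  56,  172]])
tn, fp, fn, tp = cm.ravel()   #tn=3990, fp=2, fn=56, tp=172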

The following is my code:

#Select the features and target from the dataset (DBF2 is loaded earlier)
import numpy as np

X = DBF2.iloc[:, 1:2].values
y = DBF2.iloc[:, 2].values

#Encoding categorical data
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
labelencoder_X_1 = LabelEncoder()
X[:, 0] = labelencoder_X_1.fit_transform(X[:, 0])

#Create dummy variables
onehotencoder = OneHotEncoder(categorical_features = [0])
X = onehotencoder.fit_transform(X).toarray()
#Remove the first dummy column to avoid falling into the dummy variable trap
X = np.delete(X, [0], axis=1)

#Splitting the dataset
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, 
random_state = 0)

#Feature Scaling
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)

#Make the ANN
import keras
from keras.models import Sequential
from keras.layers import Dense

#Initialising the ANN
classifier = Sequential()

#Adding the input layer and the first hidden layer
classifier.add(Dense(units=200, kernel_initializer='uniform', activation='relu', input_dim=400))

#Adding a second hidden layer
classifier.add(Dense(units=200, kernel_initializer='uniform', 
activation='relu'))

#Adding the output layer
classifier.add(Dense(units=1, kernel_initializer='uniform', activation='sigmoid'))

#Compiling the ANN
classifier.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

#Fitting the ANN to the training set
classifier.fit(X_train, y_train, batch_size=10, epochs=20)                                

#Predicting the Test set results (predict returns probabilities; threshold at 0.5)
y_pred = classifier.predict(X_test)
y_pred = (y_pred > 0.5)

#Making the confusion matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)

#Find accuracy of test set
#sklearn convention: cm[0,0] is true negatives and cm[1,1] is true positives
TrueNeg = cm[0,0]
FalsePos = cm[0,1]
TruePos = cm[1,1]
FalseNeg = cm[1,0]

accuracy = float(TruePos + TrueNeg) / float(TruePos + FalsePos + TrueNeg + FalseNeg)
accuracy = accuracy*100
print("Test Accuracy:", accuracy)

Upvotes: 1

Views: 4620

Answers (3)

Николай Х

Reputation: 21

Mark every row with a 2-bit outcome code:

import numpy as np
import numba

def mark_ravel_data(y_true: np.ndarray, predicted: np.ndarray) -> np.ndarray:

    @numba.jit
    def mark_ravel(y_true, predicted):
        #Encode each (true, predicted) pair as a 2-bit code
        return y_true | predicted << 1

    tp = mark_ravel(True,  True )
    tn = mark_ravel(False, False)
    fp = mark_ravel(False, True )
    fn = mark_ravel(True,  False)
    print(f'tp:{tp} tn:{tn} fp:{fp} fn:{fn}')

    v_mark_ravel = np.vectorize(mark_ravel, otypes=[int])

    return v_mark_ravel(y_true=y_true, predicted=predicted)


print('tp, fp, fn, tn, tp')
mark_ravel_data(np.array([1, 0, 1, 0, 1]), np.array([1, 1, 0, 0, 1]))

output:

tp, fp, fn, tn, tp
tp:3 tn:0 fp:2 fn:1
array([3, 2, 1, 0, 3])
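
With that encoding, the misclassified rows can then be pulled out of X_test (a sketch, assuming the y_test and thresholded y_pred from the question):

marks = mark_ravel_data(y_test.astype(int), y_pred[:, 0].astype(int))

false_positives = X_test[marks == 2]   #true 0, predicted 1
false_negatives = X_test[marks == 1]   #true 1, predicted 0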

Upvotes: 1

Academic

Reputation: 295

A longer but smoother option is to collect the actual and predicted values in two separate lists, work out which indices are misclassified, and then use pandas' loc to extract those rows; a sketch of this idea follows below.
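
A minimal sketch of that idea (X_test_df is an illustrative name, assuming the test features were kept as a DataFrame so the original row index survives):

actual = list(y_test)
predicted = list(y_pred[:, 0].astype(int))

#Positions where the prediction disagrees with the true label
miss = [i for i, (a, p) in enumerate(zip(actual, predicted)) if a != p]

#Map positions back to index labels and pull the rows with loc
misclassified = X_test_df.loc[X_test_df.index[miss]]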

Upvotes: 0

ysearka

Reputation: 3855

In order to do that, you can use a mask on y_pred and y_test:

#False negatives: true label 1, predicted 0
X_test[(y_test == 1) & (y_pred[:,0].T == 0)]

#False positives: true label 0, predicted 1
X_test[(y_test == 0) & (y_pred[:,0].T == 1)]

Or if you don't care about separating FN from FP:

X_test[(y_test != y_pred[:,0].T).T]
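
If you also want the positions of those rows rather than the feature values (for example to look them up in the original data), np.where on the same masks gives the indices into X_test (a small sketch):

import numpy as np

fn_idx = np.where((y_test == 1) & (y_pred[:, 0] == 0))[0]   #false negative positions
fp_idx = np.where((y_test == 0) & (y_pred[:, 0] == 1))[0]   #false positive positions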

Upvotes: 9
