iamkk

Reputation: 135

per-class validation accuracy during training

Keras gives the overall training and validation accuracy during training.

[screenshot of the Keras training output showing overall loss and accuracy per epoch]

Is there any way to get a per-class validation accuracy during training?

Update: Error log from PyCharm

File "C:/Users/wj96hq/PycharmProjects/PedestrianClassification/Awareness.py", line 82, in <module>
shuffle=True, callbacks=callbacks)
File "C:\Users\wj96hq\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\keras\engine\training.py", line 66, in _method_wrapper
return method(self, *args, **kwargs)
File "C:\Users\wj96hq\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\keras\engine\training.py", line 876, in fit
callbacks.on_epoch_end(epoch, epoch_logs)
File "C:\Users\wj96hq\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\keras\callbacks.py", line 365, in on_epoch_end
callback.on_epoch_end(epoch, logs)
File "C:/Users/wj96hq/PycharmProjects/PedestrianClassification/Awareness.py", line 36, in on_epoch_end
x_test, y_test = self.validation_data[0], self.validation_data[1]
TypeError: 'NoneType' object is not subscriptable

Upvotes: 3

Views: 1805

Answers (3)

arilwan

Reputation: 3993

Well, accuracy is a global metric, and there's no such thing as per-class accuracy. Perhaps you mean the proportion of each class that is correctly identified; that is exactly the definition of TPR, or recall.

Please refer to the answers to this and this question on SO, and to this question on Cross Validated Stack Exchange.
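
For example, you can read the per-class recall off scikit-learn's classification report on a held-out validation set. This is just a sketch: x_val and y_val stand in for your one-hot-encoded validation data, and model for the trained network.

import numpy as np
from sklearn.metrics import classification_report

# Per-class recall = proportion of each class that was correctly identified.
y_true = np.argmax(y_val, axis=1)
y_hat = np.argmax(model.predict(x_val), axis=1)
print(classification_report(y_true, y_hat))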

Upvotes: 2

Darth Vader

Reputation: 921

Use this to get per-class accuracy:

import numpy as np
from sklearn.metrics import confusion_matrix
from tensorflow import keras

model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])


class Metrics(keras.callbacks.Callback):
    def __init__(self, validation_data):
        super().__init__()
        # In recent TF/Keras versions self.validation_data is no longer set
        # automatically (hence the 'NoneType' error in the question), so pass
        # the validation set in explicitly.
        self.validation_data = validation_data

    def on_train_begin(self, logs=None):
        self._data = []

    def on_epoch_end(self, epoch, logs=None):
        x_test, y_test = self.validation_data
        y_predict = self.model.predict(x_test)

        true = np.argmax(y_test, axis=1)
        pred = np.argmax(y_predict, axis=1)

        # Row-normalised confusion matrix: the diagonal holds the per-class
        # accuracy (recall) on the validation set.
        cm = confusion_matrix(true, pred)
        cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
        self._data.append({
            'classLevelaccuracy': cm.diagonal(),
        })

    def get_data(self):
        return self._data

metrics = Metrics(validation_data=(x_test, y_test))
history = model.fit(x_train, y_train, epochs=100,
                    validation_data=(x_test, y_test), callbacks=[metrics])
metrics.get_data()

You can adapt the code in the Metrics class as you like, and this works. Just call metrics.get_data() to retrieve all of the per-class accuracies recorded at each epoch.
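
For instance, to print the per-class accuracy recorded for the last epoch, a small sketch on top of the callback above:

# Sketch: per-class accuracy (confusion-matrix diagonal) from the last epoch.
last_epoch = metrics.get_data()[-1]['classLevelaccuracy']
for class_idx, acc in enumerate(last_epoch):
    print(f'class {class_idx}: validation accuracy {acc:.3f}')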

Upvotes: 4

MichaelJanz

Reputation: 1815

If you want to get the accuracy for a certain class, or a group of certain classes, masking can be a good solution. See this code:

import tensorflow as tf

# `accuracy` is assumed to be an element-wise accuracy function,
# e.g. accuracy = tf.keras.metrics.sparse_categorical_accuracy

def cus_accuracy(real, pred):
    # 1.0 where the prediction matches the true label, 0.0 elsewhere.
    score = accuracy(real, pred)

    # Zero out every position whose true label is below 5 ...
    mask = tf.math.greater_equal(real, 5)
    mask = tf.cast(mask, dtype=score.dtype)
    score *= mask

    # ... or above 10, so only classes 5 to 10 contribute.
    mask2 = tf.math.less_equal(real, 10)
    mask2 = tf.cast(mask2, dtype=score.dtype)
    score *= mask2

    return tf.reduce_mean(score)

This metric gives you the accuracy for classes 5 to 10. I used it to measure the accuracy for certain words in a seq2seq model.
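
As a usage sketch (assuming integer labels and, as noted in the comments above, that `accuracy` is Keras's sparse categorical accuracy):

# Sketch: register the masked metric so Keras reports it every epoch.
accuracy = tf.keras.metrics.sparse_categorical_accuracy

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=[cus_accuracy])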

Upvotes: 0
