Reputation: 2312
I am working with the FER2013Plus dataset from https://github.com/Microsoft/FERPlus, which contains the file fer2013new.csv. This file holds the labels for each image in the dataset. An example label could be:
(4, 0, 0, 2, 1, 0, 0, 3)
where each dimension holds the number of annotator votes for a different emotion. In their paper https://arxiv.org/pdf/1608.01041.pdf, they convert this vote distribution into probabilities, so the label above becomes
(0.4, 0, 0, 0.2, 0.1, 0, 0, 0.3)
In other words, the person in the image is happy with probability 0.4, sad with probability 0.2, and so on, and the probabilities sum to 1.
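For illustration, here is roughly how that conversion looks (the variable names are mine, not from the FERPlus code):

```python
import numpy as np

# Raw annotator vote counts for one image, one entry per emotion
# (a row from fer2013new.csv).
votes = np.array([4, 0, 0, 2, 1, 0, 0, 3], dtype=np.float32)

# Divide by the total number of votes so the label sums to 1.
label = votes / votes.sum()
print(label)  # [0.4 0.  0.  0.2 0.1 0.  0.  0.3]
```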
While training, I used tf.nn.softmax_cross_entropy_with_logits_v2 to compute the loss between my predictions and these probability labels. Roughly how I call it (the tensor names here are just illustrative):
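```python
import tensorflow as tf

# Placeholders for illustration: 8 emotion dimensions per example.
logits = tf.placeholder(tf.float32, [None, 8])  # raw network outputs
labels = tf.placeholder(tf.float32, [None, 8])  # probability-style targets

loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits_v2(labels=labels, logits=logits))
```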
Now, how do I compute the accuracy?
Any help is much appreciated!!
Upvotes: 0
Views: 682
Reputation: 821
Here is an excerpt from the paper:
"We take the majority emotion as the single emotion label, and we measure prediction accuracy against the majority emotion."
They evaluate it as a discrete classification task. So you just need to take tf.argmax() of your logits to get the index of the predicted emotion, and then compare it with the tf.argmax() of the labels.
For example, if your label is (0.4, 0, 0, 0.2, 0.1, 0, 0, 0.3), then happy is the majority emotion, so you would check whether your logits also have happy as the majority emotion.
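A minimal sketch of that comparison, assuming logits and labels of shape [batch_size, 8] (the placeholder tensors are just for illustration):

```python
import tensorflow as tf

# Placeholder tensors for illustration; in practice these come from
# your model and input pipeline.
logits = tf.placeholder(tf.float32, [None, 8])
labels = tf.placeholder(tf.float32, [None, 8])

# Index of the predicted emotion vs. index of the majority label emotion.
predictions = tf.argmax(logits, axis=1)
majority_labels = tf.argmax(labels, axis=1)

# Fraction of examples where the prediction matches the majority emotion.
accuracy = tf.reduce_mean(
    tf.cast(tf.equal(predictions, majority_labels), tf.float32))
```

Note that the argmax comparison works directly on the logits; you don't need to apply a softmax first, since softmax is monotonic and doesn't change which index is largest.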
Upvotes: 2