Reputation: 1230
I'm currently doing research on multi-class classification. I used categorical crossentropy as the loss and got a really good result using accuracy as the metric for the experiment. When I try to use categorical_accuracy instead, it gives slightly worse accuracy (1% lower). My question is: is it OK to use the accuracy metric with categorical crossentropy loss instead of categorical_accuracy?
Upvotes: 13
Views: 13671
Reputation: 2135
Keras detects the output_shape and automatically determines which accuracy to use when accuracy is specified as the metric. For multi-class classification with one-hot targets, categorical_accuracy is used internally. From the source:
if metric == 'accuracy' or metric == 'acc':
    # custom handling of accuracy
    # (because of class mode duality)
    output_shape = self.internal_output_shapes[i]
    acc_fn = None
    if output_shape[-1] == 1 or self.loss_functions[i] == losses.binary_crossentropy:
        # case: binary accuracy
        acc_fn = metrics_module.binary_accuracy
    elif self.loss_functions[i] == losses.sparse_categorical_crossentropy:
        # case: categorical accuracy with sparse targets
        acc_fn = metrics_module.sparse_categorical_accuracy
    else:
        acc_fn = metrics_module.categorical_accuracy
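You can sanity-check this equivalence yourself. The following is a minimal sketch, assuming a TensorFlow/Keras 2.x environment; the layer sizes and random data are placeholders, not anything from your experiment. Two copies of the same model with identical weights, one compiled with metrics=['accuracy'] and one with metrics=['categorical_accuracy'], should report the same number on the same data, because 'accuracy' resolves to categorical_accuracy for one-hot targets:

import numpy as np
from tensorflow import keras

# dummy 3-class, one-hot data (shapes are arbitrary for illustration)
x = np.random.rand(100, 8).astype("float32")
y = keras.utils.to_categorical(np.random.randint(0, 3, 100), num_classes=3)

def build_model():
    return keras.Sequential([
        keras.layers.Dense(16, activation="relu", input_shape=(8,)),
        keras.layers.Dense(3, activation="softmax"),
    ])

# same weights in both models, so any metric difference would come
# from the metric function itself, not from training noise
m1 = build_model()
m1.compile(loss="categorical_crossentropy", metrics=["accuracy"])
m2 = build_model()
m2.set_weights(m1.get_weights())
m2.compile(loss="categorical_crossentropy", metrics=["categorical_accuracy"])

print(m1.evaluate(x, y, verbose=0))  # [loss, accuracy]
print(m2.evaluate(x, y, verbose=0))  # [loss, categorical_accuracy] -- same values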
The 1% difference you are seeing is most likely run-to-run variation: stochastic gradient descent converges to a different minimum on each run unless the random seed (and other sources of nondeterminism) is fixed.
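If you want to rule out that variation, fix the seeds before building the model. A minimal sketch, assuming TensorFlow 2.x (older versions use tf.set_random_seed instead); note that full determinism may also require controlling GPU-level nondeterminism:

import random
import numpy as np
import tensorflow as tf

# seed every generator Keras/TensorFlow may draw from, before model construction
random.seed(42)
np.random.seed(42)
tf.random.set_seed(42)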
Upvotes: 31