Reputation: 1859
I am trying to define custom loss and accuracy functions for each output in a two output neural network model using Keras. Let's call the two outputs: A and B.
My objectives are to have the metrics named A_output_acc, val_A_output_acc, A_output_loss and val_A_output_loss. So I want the corresponding metric readouts for the A output in this new model to have those names as well, so that they are viewable/comparable on the same graph in TensorBoard. I have a Modeler class that constructs and compiles a network. The relevant code follows.
class Modeler(BaseModeler):

    def __init__(self, loss=None, accuracy=None, ...):
        """
        Returns compiled keras model.
        """
        self.loss = loss
        self.accuracy = accuracy
        model = self.build()

        ...

        model.compile(
            loss={  # we are explicit here and name the outputs even though in this case it's not necessary
                "A_output": self.A_output_loss(),  # loss,
                "B_output": self.B_output_loss()   # loss
            },
            optimizer=optimus,
            metrics={  # we need to tie each output to a specific list of metrics
                "A_output": [self.A_output_acc()],
                # self.A_output_loss()],  # redundant since it's already reported via the `loss` param;
                #                         # ends up showing up as `A_output_loss_1` since keras
                #                         # already reports `A_output_loss` via the loss param
                "B_output": [self.B_output_acc()]
                # self.B_output_loss()]   # redundant since it's already reported via the `loss` param;
                #                         # ends up showing up as `B_output_loss_1` since keras
                #                         # already reports `B_output_loss` via the loss param
            })

        self._model = model
    def A_output_acc(self):
        """
        Allows us to output custom train/test accuracy metrics under the desired names, e.g. 'A_output_acc' and
        'val_A_output_acc' respectively, so that they may be plotted on the same tensorboard graph as the accuracies
        from other models with the same outputs.
        :return: accuracy metric
        """
        acc = None
        if self.accuracy == TypedAccuracies.BINARY:
            def acc(y_true, y_pred):
                return self.binary_accuracy(y_true, y_pred)
        elif self.accuracy == TypedAccuracies.DICE:
            def acc(y_true, y_pred):
                return self.dice_coef(y_true, y_pred)
        elif self.accuracy == TypedAccuracies.JACARD:
            def acc(y_true, y_pred):
                return self.jacard_coef(y_true, y_pred)
        else:
            logger.debug('ERROR: undefined accuracy specified: {}'.format(self.accuracy))

        return acc
    def A_output_loss(self):
        """
        Allows us to output custom train/test loss metrics under the desired names, e.g. 'A_output_loss' and
        'val_A_output_loss' respectively, so that they may be plotted on the same tensorboard graph as the losses
        from other models with the same outputs.
        :return: loss metric
        """
        loss = None
        if self.loss == TypedLosses.BINARY_CROSSENTROPY:
            def loss(y_true, y_pred):
                return self.binary_crossentropy(y_true, y_pred)
        elif self.loss == TypedLosses.DICE:
            def loss(y_true, y_pred):
                return self.dice_coef_loss(y_true, y_pred)
        elif self.loss == TypedLosses.JACARD:
            def loss(y_true, y_pred):
                return self.jacard_coef_loss(y_true, y_pred)
        else:
            logger.debug('ERROR: undefined loss specified: {}'.format(self.loss))

        return loss
    def B_output_acc(self):
        """
        Allows us to output custom train/test accuracy metrics under the desired names, e.g. 'B_output_acc' and
        'val_B_output_acc' respectively, so that they may be plotted on the same tensorboard graph as the accuracies
        from other models with the same outputs.
        :return: accuracy metric
        """
        acc = None
        if self.accuracy == TypedAccuracies.BINARY:
            def acc(y_true, y_pred):
                return self.binary_accuracy(y_true, y_pred)
        elif self.accuracy == TypedAccuracies.DICE:
            def acc(y_true, y_pred):
                return self.dice_coef(y_true, y_pred)
        elif self.accuracy == TypedAccuracies.JACARD:
            def acc(y_true, y_pred):
                return self.jacard_coef(y_true, y_pred)
        else:
            logger.debug('ERROR: undefined accuracy specified: {}'.format(self.accuracy))

        return acc
    def B_output_loss(self):
        """
        Allows us to output custom train/test loss metrics under the desired names, e.g. 'B_output_loss' and
        'val_B_output_loss' respectively, so that they may be plotted on the same tensorboard graph as the losses
        from other models with the same outputs.
        :return: loss metric
        """
        loss = None
        if self.loss == TypedLosses.BINARY_CROSSENTROPY:
            def loss(y_true, y_pred):
                return self.binary_crossentropy(y_true, y_pred)
        elif self.loss == TypedLosses.DICE:
            def loss(y_true, y_pred):
                return self.dice_coef_loss(y_true, y_pred)
        elif self.loss == TypedLosses.JACARD:
            def loss(y_true, y_pred):
                return self.jacard_coef_loss(y_true, y_pred)
        else:
            logger.debug('ERROR: undefined loss specified: {}'.format(self.loss))

        return loss
    def load_model(self, model_path=None):
        """
        Returns built model from model_path, assuming the default architecture.
        :param model_path: str, path to model file
        :return: defined model with weights loaded
        """
        custom_objects = {'A_output_acc': self.A_output_acc(),
                          'A_output_loss': self.A_output_loss(),
                          'B_output_acc': self.B_output_acc(),
                          'B_output_loss': self.B_output_loss()}
        self.model = load_model(filepath=model_path, custom_objects=custom_objects)
        return self
    def build(self, stuff...):
        """
        Returns model architecture. Instead of just one task, it performs two: A and B.
        :return: model
        """
        ...

        A_conv_final = Conv2D(1, (1, 1), activation="sigmoid", name="A_output")(up_conv_224)
        B_conv_final = Conv2D(1, (1, 1), activation="sigmoid", name="B_output")(up_conv_224)

        model = Model(inputs=[input], outputs=[A_conv_final, B_conv_final], name="my_model")

        return model
The training works fine. However, when I later go to load the model for inference using the above load_model() function, Keras complains that it doesn't know about the custom metrics I have given it:

ValueError: Unknown loss function:loss

What seems to be happening is that Keras appends the name of the function returned by each of the custom metric methods above (def loss(...), def acc(...)) to the dictionary key given in the metrics parameter of the model.compile() call.
So, for example, the key is A_output and we call the custom accuracy function A_output_acc() for it, which returns a function named acc. The result is A_output + acc = A_output_acc. This also means that I can't rename those returned functions (acc/loss) to something else, because that would mess up the reporting/graphs.
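To sanity-check this, here is a minimal, self-contained sketch (throwaway Dense layers rather than my real architecture, standalone Keras 2.x API assumed) that reproduces the naming behaviour: a metric function literally named acc, attached to the A_output head, gets reported as A_output_acc.

import keras.backend as K
from keras.layers import Input, Dense
from keras.models import Model

def acc(y_true, y_pred):
    # stand-in accuracy; only the function's __name__ matters for the reported name
    return K.mean(K.equal(K.round(y_pred), y_true))

inp = Input(shape=(8,))
a = Dense(1, activation='sigmoid', name='A_output')(inp)
b = Dense(1, activation='sigmoid', name='B_output')(inp)
model = Model(inputs=inp, outputs=[a, b])
model.compile(optimizer='adam',
              loss={'A_output': 'binary_crossentropy', 'B_output': 'binary_crossentropy'},
              metrics={'A_output': [acc], 'B_output': [acc]})
print(model.metrics_names)
# expected: ['loss', 'A_output_loss', 'B_output_loss', 'A_output_acc', 'B_output_acc']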
This is all fine and well, BUT I don't know how to write my load function with a properly defined custom_objects parameter (or how to define/name my custom metric functions, for that matter) so that Keras knows which custom accuracy/loss functions are to be loaded with each output head. More to the point, it seems to want a custom_objects dictionary of the following form in load_model() (which won't work, for the obvious reason that a Python dict can't hold duplicate 'acc'/'loss' keys):
custom_objects = {'acc': self.A_output_acc(),
                  'loss': self.A_output_loss(),
                  'acc': self.B_output_acc(),
                  'loss': self.B_output_loss()}
instead of:
custom_objects = {'A_output_acc': self.A_output_acc(),
                  'A_output_loss': self.A_output_loss(),
                  'B_output_acc': self.B_output_acc(),
                  'B_output_loss': self.B_output_loss()}
Any insights or work-arounds?
Thanks!
EDIT:

I've confirmed that the reasoning above about key/function-name concatenation IS correct for the metrics parameter of Keras' model.compile() call. HOWEVER, for the loss parameter in model.compile(), Keras just concatenates the key with the word loss, yet expects the name of the custom loss function itself in the custom_objects parameter of load_model()... go figure.
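One workaround I'm considering in the meantime is to sidestep the custom_objects lookup entirely for inference. This is only a sketch, assuming a Keras version whose load_model() accepts compile=False; the path and input shape below are placeholders:

import numpy as np
from keras.models import load_model

# compile=False skips deserialising the training config, so the custom
# loss/metric functions never need to be resolved by name; fine for pure
# inference (recompile afterwards if evaluation with the custom metrics is needed).
model = load_model('my_model.h5', compile=False)       # placeholder path
dummy = np.zeros((1, 224, 224, 3), dtype='float32')    # assumed input shape
A_pred, B_pred = model.predict(dummy)                  # one prediction per output head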
Upvotes: 2
Views: 1560
Reputation: 1
Remove the () at the end of your losses and metrics and that should be it. It'll look like this instead:
loss={
    "A_output": self.A_output_loss,
    "B_output": self.B_output_loss
}
Upvotes: 0