SergeantIdiot

Reputation: 113

Error in model.fit() when using custom loss function

I have defined a custom loss function for my model:

def get_loss(y_hat, y):
    loss = tf.keras.losses.BinaryCrossentropy(y_hat, y)  # cross entropy (but no logits)

    y_hat = tf.math.sigmoid(y_hat)

    tp = tf.math.reduce_sum(tf.multiply(y_hat, y), [1, 2])
    fn = tf.math.reduce_sum((y - tf.multiply(y_hat, y)), [1, 2])
    fp = tf.math.reduce_sum((y_hat - tf.multiply(y_hat, y)), [1, 2])
    loss = loss - ((2 * tp) / tf.math.reduce_sum((2 * tp + fp + fn + 1e-10)))  # fscore

    return loss

When fitting my model to my training data, I get the following error:

TypeError: Expected float32, got <tensorflow.python.keras.losses.BinaryCrossentropy object at 0x7feca46d0d30> of type 'BinaryCrossentropy' instead.

How can I fix this? I already tried:

loss = tf.int32(tf.keras.losses.BinaryCrossentropy(y_hat, y))

but this raises another error and doesn't seem to be the solution I need.

Upvotes: 0

Views: 270

Answers (1)

Nicolas Gervais

Reputation: 36724

You need to instantiate the loss object first and then call it on your tensors, rather than passing the tensors to the constructor:

loss = tf.keras.losses.BinaryCrossentropy()(y_hat,y)

Notice the extra set of parentheses. Alternatively, use the functional form:

loss = tf.keras.losses.binary_crossentropy(y_hat, y)
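Applied to the full loss from the question, a sketch could look like this (my adaptation, not guaranteed to match the asker's intent: I assume `y_hat` holds raw logits, since the sigmoid is applied only after the cross-entropy line, so I pass `from_logits=True`; note also that Keras' call convention is `(y_true, y_pred)`, and I compute the F-score term per sample instead of summing the denominator over the whole batch as the original did):

```python
import tensorflow as tf

def get_loss(y_hat, y):
    # Instantiate first, then call -- note the extra parentheses.
    # from_logits=True is an assumption: y_hat is only passed through
    # a sigmoid below, so it is treated as logits here.
    loss = tf.keras.losses.BinaryCrossentropy(from_logits=True)(y, y_hat)

    y_hat = tf.math.sigmoid(y_hat)

    # Soft true positives / false negatives / false positives per sample
    tp = tf.math.reduce_sum(tf.multiply(y_hat, y), [1, 2])
    fn = tf.math.reduce_sum(y - tf.multiply(y_hat, y), [1, 2])
    fp = tf.math.reduce_sum(y_hat - tf.multiply(y_hat, y), [1, 2])

    # Subtract a soft F-score (Dice-like) term; 1e-10 guards against 0/0
    loss = loss - (2 * tp) / (2 * tp + fp + fn + 1e-10)
    return loss
```

With batched 3-D inputs (batch, height, width), this returns one loss value per sample, which `model.fit()` will average.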

Upvotes: 3

Related Questions