Aceconhielo

Reputation: 3636

How to get results from custom loss function in Keras?

I want to implement a custom loss function in Python, and it should work like this pseudocode:

aux = |Real - Prediction| / Prediction
errors = []
if aux <= 0.1:
    errors.append(0)
elif 0.1 < aux <= 0.15:
    errors.append(5/3)
elif 0.15 < aux <= 0.2:
    errors.append(5)
else:
    errors.append(2000)
return sum(errors)
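In plain NumPy, what I want would look roughly like this (banded_error is just a placeholder name of mine):

import numpy as np

def banded_error(real, prediction):
    # Element-wise relative error.
    aux = np.abs(real - prediction) / prediction
    # The first matching condition wins, so the bands do not overlap.
    penalties = np.select(
        [aux <= 0.1, aux <= 0.15, aux <= 0.2],
        [0.0, 5 / 3, 5.0],
        default=2000.0,
    )
    return penalties.sum()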

I started to define the metric like this:

from keras import backend as K

def custom_metric(y_true, y_pred):
    # Element-wise relative error between targets and predictions.
    res = K.abs((y_true - y_pred) / y_pred)
    ....

But I do not know how to get the value of res for the if and else. I also want to know what the function has to return.

Thanks

Upvotes: 7

Views: 3780

Answers (3)

Mashyu

Reputation: 7

Appending to self directly didn't work for me; appending to the params dict of self did the job instead. To answer the OP: it would be self.params['error'] = [], then add to the array as you see fit.

import tensorflow as tf

class CustomCallback(tf.keras.callbacks.Callback):

    def on_train_begin(self, logs=None):
        # self.params is shared with the History object returned by fit().
        self.params['error'] = []

    def on_epoch_end(self, epoch, logs=None):
        # Do something with self.params['error'] here.
        pass

history = model.fit(callbacks=[CustomCallback()])

# When training ends

error = history.params['error']

Upvotes: 0

Mihai Alexandru-Ionut

Reputation: 48437

I also want to know what the function has to return.

Custom metrics can be passed at the compilation step.

The function would need to take (y_true, y_pred) as arguments and return a single tensor value.

But I do not know how to get the value of res for the if and else.

You can return the result from the custom_metric function.

from keras import backend as K

def custom_metric(y_true, y_pred):
    # K.abs is element-wise and takes no axis argument.
    result = K.abs((y_true - y_pred) / y_pred)
    return result
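The metric can then be passed in the metrics list at compile time; a minimal sketch (the model architecture here is only a placeholder):

from keras.models import Sequential
from keras.layers import Dense

model = Sequential([Dense(1, input_shape=(10,))])

# Pass the custom metric by function reference; Keras reports it per epoch.
model.compile(optimizer='adam',
              loss='mean_squared_error',
              metrics=[custom_metric])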

The second step is to use a Keras callback to compute the sum of the errors.

The callback can be defined and passed to the fit method.

history = CustomLossHistory()
model.fit(callbacks=[history])

The last step is to create the CustomLossHistory class, which computes the sum of your expected errors.

CustomLossHistory inherits the default hook methods from keras.callbacks.Callback:

  • on_epoch_begin: called at the beginning of every epoch.
  • on_epoch_end: called at the end of every epoch.
  • on_batch_begin: called at the beginning of every batch.
  • on_batch_end: called at the end of every batch.
  • on_train_begin: called at the beginning of model training.
  • on_train_end: called at the end of model training.

You can read more in the Keras documentation.

But for this example we only need the on_train_begin and on_batch_end methods.

Implementation

import keras

class CustomLossHistory(keras.callbacks.Callback):
    def on_train_begin(self, logs={}):
        self.errors = []

    def on_batch_end(self, batch, logs={}):
        loss = logs.get('loss')
        self.errors.append(self.loss_mapper(loss))

    def loss_mapper(self, loss):
        # Map the batch loss into the penalty bands from the question.
        if loss <= 0.1:
            return 0
        elif 0.1 < loss <= 0.15:
            return 5 / 3
        elif 0.15 < loss <= 0.2:
            return 5
        else:
            return 2000

After your model is trained, you can access the errors using the following statement.

errors = history.errors
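Putting the pieces together, a minimal end-to-end sketch (the dummy data and model shape are placeholders):

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Dummy regression data as stand-ins for the real dataset.
x_train = np.random.rand(100, 10)
y_train = np.random.rand(100, 1)

model = Sequential([Dense(1, input_shape=(10,))])
model.compile(optimizer='adam', loss='mean_squared_error')

history = CustomLossHistory()
model.fit(x_train, y_train, epochs=5, callbacks=[history])

errors = history.errors  # one mapped penalty per batch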

Upvotes: 6

Alexander Harnisch
Alexander Harnisch

Reputation: 644

I'll take a leap here and say this won't work because it is not differentiable. A piecewise-constant loss like this has a gradient of zero almost everywhere, so the optimizer gets no signal; the loss needs to be differentiable so a gradient can be propagated through it.

If you want to make this work, you need to find a way to do it without discontinuities. For example, you could try a weighted average over your 4 discrete values, where the weights strongly prefer the closest value; a rough sketch follows below.
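For illustration, one smooth variant blends the band values with sigmoids centred on the thresholds, using the Keras backend. The sharpness factor 100 is an arbitrary choice of mine, and smooth_banded_loss is just a placeholder name:

from keras import backend as K

def smooth_banded_loss(y_true, y_pred):
    # Relative error, as in the question.
    aux = K.abs(y_true - y_pred) / y_pred
    # Soft step functions centred on the three thresholds; the factor 100
    # controls how sharply each band switches on.
    s1 = K.sigmoid(100.0 * (aux - 0.10))
    s2 = K.sigmoid(100.0 * (aux - 0.15))
    s3 = K.sigmoid(100.0 * (aux - 0.20))
    # Each term adds the increment between consecutive band penalties, so
    # away from the thresholds the total approaches 0, 5/3, 5 or 2000.
    penalty = (5.0 / 3.0) * s1 + (5.0 - 5.0 / 3.0) * s2 + (2000.0 - 5.0) * s3
    return K.sum(penalty)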

Upvotes: 1
