Litchy

Reputation: 673

TensorFlow 2 custom loss returns nan

I have a model that I compile with binary_crossentropy; training goes well and the loss is printed:

model = MyModel()
model.compile(optimizer="adadelta", loss="binary_crossentropy")

data1, data2, y = get_random_data(4, 3)  # returns data1: (1000, 4), data2: (1000, 3), y: (1000,)
model.fit([data1, data2], y, batch_size=4)

Then I write a custom loss function, and the loss becomes nan:

import tensorflow.keras.backend as K

class MyModel(tf.keras.models.Model):
    ...
    def batch_loss(self, y_true, y_pred_batch):
        # softmax over the raw model outputs of the whole batch
        bottom = K.sum(K.exp(y_pred_batch))
        batch_softmax = K.exp(y_pred_batch) / bottom
        batch_log_likelihood = K.log(batch_softmax)
        # sum of log-probabilities over the batch (y_true is not used)
        loss = K.sum(batch_log_likelihood)
        return loss

model.compile(optimizer="adadelta", loss=model.batch_loss)  # replaces the compile call above

When I test the loss function with batch_loss(tf.ones((1,))), it seems to return the correct result.

But when it runs during training, the loss becomes nan. Where should I start debugging?
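One TF 2 facility for localizing this kind of failure is tf.debugging.enable_check_numerics(), which raises an error at the first op that produces inf or nan. A minimal sketch of how it could be wired into the setup above (enable_check_numerics and run_eagerly are real TF 2 APIs; the rest mirrors the question's code):

import tensorflow as tf

# Raises an InvalidArgumentError at the first op that produces inf/nan,
# pointing at the offending tensor. Call this before building the model.
tf.debugging.enable_check_numerics()

model = MyModel()
model.compile(optimizer="adadelta", loss=model.batch_loss,
              run_eagerly=True)  # eager mode gives a readable traceback
model.fit([data1, data2], y, batch_size=4)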


Model and data code (for those who want to reproduce):

import numpy as np
import tensorflow as tf

class MyModel(tf.keras.models.Model):
    def __init__(self):
        super().__init__()
        self.t1A = tf.keras.layers.Dense(300, activation='relu', input_dim=1)
        self.t1B = tf.keras.layers.Dense(300, activation='relu', input_dim=1)
        self.t1v = tf.keras.layers.Dense(128, activation='relu')
        self.t2A = tf.keras.layers.Dense(300, activation='relu')
        self.t2B = tf.keras.layers.Dense(300, activation='relu')
        self.t2v = tf.keras.layers.Dense(128, activation='relu')
        self.out = tf.keras.layers.Dot(axes=1)

    def call(self, inputs, training=None, mask=None):
        u, i = inputs[0], inputs[1]
        u = self.t1A(u)
        u = self.t1B(u)
        u = self.t1v(u)
        i = self.t2A(i)
        i = self.t2B(i)
        i = self.t2v(i)
        out = self.out([u, i])
        return out

def get_random_data(user_feature_num, item_feature_num):
    def get_random_ndarray(data_size, dis_list, feature_num):
        # one random integer column per feature, each drawn from [0, dis_list[i])
        data_list = []
        for i in range(feature_num):
            arr = np.random.randint(dis_list[i], size=data_size)
            data_list.append(arr)
        data = np.array(data_list)
        return np.transpose(data, axes=(1, 0))  # shape: (data_size, feature_num)

    uf_dis, if_dis, data_size = [1000, 2, 10, 20], [10000, 50, 60], 1000
    # first 10% of the labels are positive, the rest zero
    y = np.zeros(data_size)
    for i in range(int(data_size / 10)):
        y[i] = 1

    return get_random_ndarray(data_size, uf_dis, feature_num=user_feature_num), \
        get_random_ndarray(data_size, if_dis, feature_num=item_feature_num), y

Upvotes: 0

Views: 972

Answers (2)

Lescurel

Reputation: 11631

The values output by your model are quite large. Combined with the call to tf.exp in your loss function, they quickly overflow to inf, and the subsequent division produces nan. You might consider applying an activation function such as a sigmoid to keep the values between 0 and 1.
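Alternatively, the standard log-sum-exp trick keeps the question's loss finite without changing its value. A sketch (batch_loss_stable is a hypothetical name, not from the question; like the original, it ignores y_true):

import tensorflow.keras.backend as K

def batch_loss_stable(y_true, y_pred_batch):
    # Shift by the batch max so exp() never sees large positive inputs;
    # softmax is invariant to this shift.
    shifted = y_pred_batch - K.max(y_pred_batch)
    log_norm = K.log(K.sum(K.exp(shifted)))
    batch_log_softmax = shifted - log_norm  # log-softmax, computed stably
    return K.sum(batch_log_softmax)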

Upvotes: 2

Andrey

Reputation: 6367

I think your error is caused by calling exp(). This function grows very quickly, overflows to inf, and the division in your softmax then produces nan.
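A quick demonstration of the overflow (illustrative values):

import tensorflow as tf

x = tf.constant([10.0, 100.0])
e = tf.exp(x)                 # [2.2026466e+04, inf] -- float32 overflows around exp(88.7)
print(e / tf.reduce_sum(e))   # [0., nan] -- inf / inf is nan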

Upvotes: 1
