ProteinGuy

Reputation: 1942

Error in Keras custom loss: "TypeError: Value passed to parameter 'reduction_indices' has DataType float32 not in list of allowed values: int32, int64"

I have defined a custom loss in Keras for the function:

(y - yhat)^2 + (y * yhat).

def customLoss(y_true, y_pred, sample_weight=None):
    y_true = K.cast(y_true, 'float32')
    y_pred = K.cast(y_pred, 'float32')
    loss = K.square(y_true - y_pred) + K.prod(y_true, y_pred)
    loss = loss * K.cast(sample_weights, 'float32')
    return loss

When I run model.fit, it fails with a TypeError:

earlystopping = tf.keras.callbacks.EarlyStopping(monitor='val_loss', 
                                            mode='min', verbose=1, patience=20)
history = model.fit(Xtrain, ytrain_raw, 
                    validation_data=(Xval, yval_raw), batch_size=128,
                    epochs=500, verbose=1, callbacks=[earlystopping],
                    sample_weight=sample_weights)

Error:

TypeError: in user code:

    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:571 train_function  *
        outputs = self.distribute_strategy.run(
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:951 run  **
        return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2290 call_for_each_replica
        return self._call_for_each_replica(fn, args, kwargs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2649 _call_for_each_replica
        return fn(*args, **kwargs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:533 train_step  **
        y, y_pred, sample_weight, regularization_losses=self.losses)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/compile_utils.py:205 __call__
        loss_value = loss_obj(y_t, y_p, sample_weight=sw)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/losses.py:143 __call__
        losses = self.call(y_true, y_pred)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/losses.py:246 call
        return self.fn(y_true, y_pred, **self._fn_kwargs)
    <ipython-input-477-99f75f332877>:4 customLoss
        loss = K.square(y_true - y_pred) + K.prod(y_true, y_pred)
    /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:1716 prod
        return tf.reduce_prod(x, axis, keepdims)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/util/dispatch.py:180 wrapper
        return target(*args, **kwargs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/math_ops.py:2196 reduce_prod
        name=name))
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/gen_math_ops.py:6642 prod
        name=name)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/op_def_library.py:578 _apply_op_helper
        param_name=input_name)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/op_def_library.py:61 _SatisfiesTypeConstraint
        ", ".join(dtypes.as_dtype(x).name for x in allowed_list)))

    TypeError: Value passed to parameter 'reduction_indices' has DataType float32 not in list of allowed values: int32, int64

However, if I remove the K.prod(y_true, y_pred) part, the code runs without any hitches.

def customLoss(y_true, y_pred, sample_weight=None):
    y_true = K.cast(y_true, 'float32')
    y_pred = K.cast(y_pred, 'float32')
    loss = K.square(y_true - y_pred) #+ K.prod(y_true, y_pred)
    loss = loss * K.cast(sample_weights, 'float32')
    return loss

What could be wrong?

Upvotes: 2

Views: 762

Answers (1)

Homer

Reputation: 408

I believe the error arises from the second argument in your call to K.prod(). This function takes a single tensor x, but you have passed two tensors, y_true and y_pred.

The error itself arises because the second argument of K.prod() is the axis to reduce over, which must be an integer (int32 or int64). Here y_pred, a float32 tensor, ends up being passed as that axis, which is what the 'reduction_indices' in the error message refers to.
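For reference, a minimal illustration of the intended K.prod() signature (a throwaway tensor, just to show where the axis argument goes):

x = K.constant([[1., 2.], [3., 4.]])
K.prod(x)          # product over every element -> 24.0
K.prod(x, axis=1)  # product along each row -> [2., 12.]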

It sounds like you want an element-wise product instead, which you can get with tf.keras.layers.multiply() or tf.keras.layers.dot() at the layer level, or simply with y_true * y_pred inside the loss, as sketched below.
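For the loss in the question, a plain element-wise product matches the formula (y - yhat)^2 + (y * yhat). A minimal sketch, assuming float32 inputs and leaving the weighting to the sample_weight argument of model.fit (which, as the traceback shows, Keras applies to the returned per-sample losses for you), so the extra loss * sample_weights line is no longer needed:

from tensorflow.keras import backend as K

def customLoss(y_true, y_pred):
    # Cast both tensors so the arithmetic runs in float32.
    y_true = K.cast(y_true, 'float32')
    y_pred = K.cast(y_pred, 'float32')
    # '*' is element-wise multiplication, unlike K.prod, which
    # reduces a single tensor along an axis.
    return K.square(y_true - y_pred) + y_true * y_pred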

Upvotes: 1
