Jingnan Jia

Reputation: 1309

What is the difference between "tensorflow.math.multiply" and "tensorflow.keras.layers.multiply"?

What is the difference between tensorflow.math.multiply and tensorflow.keras.layers.multiply?

Similarly, what is the difference between tensorflow.math.add and tensorflow.keras.layers.add?

The reason I ask is that in my own customized loss function and metric I compute product = multiply([y_true_f, y_pred_f]). If I import it with from tensorflow.keras.layers import multiply, this error occurs:

tensorflow.python.framework.errors_impl.InvalidArgumentError: You must feed a value for placeholder tensor 'main_target' with dtype float and shape [?,?,?]

But if I use from tensorflow.math import multiply, it works normally.

I want to know why. Thanks. (I use TensorFlow 1.15 on Ubuntu 18.04.)

Update: after import tensorflow.keras.backend as K:

What is the difference between tf.multiply and the * operator?

Similarly, what is the difference between K.pow and the / operator?

I wrote the following code for a customized metric function, based on someone else's code.

import tensorflow as tf
import tensorflow.keras.backend as K
from tensorflow.keras.layers import Lambda


def dice_coef_weight_sub(y_true, y_pred):
    """
    Returns the weighted sum of the dice coefficients of each class.
    """
    y_true_f = Lambda(lambda t: t[:, :, :, :, 0:])(y_true)
    y_pred_f = Lambda(lambda t: t[:, :, :, :, 0:])(y_pred)

    product = tf.multiply(y_true_f, y_pred_f)  # element-wise; multiply comes from tf or tf.math, not keras.layers

    red_y_true = K.sum(y_true_f, axis=[0, 1, 2, 3])  # shape [nb_class]
    red_y_pred = K.sum(y_pred_f, axis=[0, 1, 2, 3])
    red_product = K.sum(product, axis=[0, 1, 2, 3])

    smooth = 0.001
    dices = (2. * red_product + smooth) / (red_y_true + red_y_pred + smooth)

    ratio = red_y_true / (K.sum(red_y_true) + smooth)
    ratio = 1.0 - ratio
    # ratio = K.pow(ratio + smooth, -1.0)  # alternative way to weight the classes

    return K.sum(tf.multiply(dices, ratio))
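
For reference, this is how I pass the metric to model.compile (the tiny 3D model below is just a placeholder so the tensors have rank 5):

from tensorflow.keras import layers, models

# Placeholder 3D model, only here to show where the metric plugs in.
inputs = layers.Input(shape=(8, 8, 8, 1))
outputs = layers.Conv3D(2, 1, activation="softmax")(inputs)
model = models.Model(inputs, outputs)

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=[dice_coef_weight_sub])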

In this code, can I replace tf.multiply with the * operator? Can I replace K.pow with the / operator?

(From TensorFlow's documentation, I know the difference between tf.pow and K.pow: tf.pow(x, y) receives two tensors and computes x^y for corresponding elements of x and y, while K.pow(x, a) receives a tensor x and an integer a and computes x^a. But I do not know why, in the code above, K.pow receives a float exponent (-1.0) and still works normally.)
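
A quick check in a session (TF 1.15, graph mode) seems to show that K.pow simply forwards to tf.pow, which broadcasts a scalar exponent, so a float like -1.0 is accepted:

import tensorflow as tf
import tensorflow.keras.backend as K

# Quick check (TF 1.x graph mode): K.pow appears to forward to tf.pow,
# which broadcasts a scalar float exponent over the tensor.
x = tf.constant([1.0, 2.0, 4.0])

with tf.Session() as sess:
    print(sess.run(K.pow(x, -1.0)))     # [1.   0.5  0.25]
    print(sess.run(tf.pow(x, -1.0)))    # same values
    print(sess.run(1.0 / (x + 0.001)))  # almost the same as K.pow(..., -1.0)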

Upvotes: 2

Views: 1087

Answers (1)

Dr. Snoopy

Reputation: 56377

All classes in tensorflow.keras.layers are Keras layers, meaning that they take as input Keras tensors produced by other layers (in the functional API), or they can be arranged to make sequential models using the Sequential API.

Other TensorFlow functions, like the ones in tensorflow.math, are meant to operate on TensorFlow (not Keras) tensors. For a custom loss, the inputs and outputs are TensorFlow tensors, so you should use TensorFlow functions and not Keras layers.
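
For instance, a minimal custom-loss sketch (the names are just illustrative) sticks to TensorFlow ops on the plain tensors it receives:

import tensorflow as tf

def dice_loss(y_true, y_pred, smooth=0.001):
    # y_true and y_pred arrive as plain TensorFlow tensors, so element-wise
    # math uses tf / tf.math functions (or operators), not Keras layers.
    intersection = tf.reduce_sum(tf.multiply(y_true, y_pred))
    union = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred)
    return 1.0 - (2.0 * intersection + smooth) / (union + smooth)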

Keras layer operations are used when you want to perform such an operation as part of a neural network architecture; for example, the add layer is used to implement residual connections in a ResNet architecture.
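
A minimal sketch of such a residual block in the functional API (the shapes are arbitrary):

from tensorflow.keras import layers, models

# The add layer combines Keras tensors produced by other layers,
# which is exactly what layer ops are for.
inputs = layers.Input(shape=(32, 32, 64))
x = layers.Conv2D(64, 3, padding="same", activation="relu")(inputs)
x = layers.Conv2D(64, 3, padding="same")(x)
x = layers.add([x, inputs])  # residual (skip) connection
x = layers.Activation("relu")(x)
model = models.Model(inputs, x)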

Upvotes: 3
