Reputation: 2079
I have written the below custom loss function, where I need to create a factor by dividing the input shape with the output shape.
def distance_loss(x, y):
    x_shape = K.int_shape(x)[1]
    y_shape = K.int_shape(y)[1]
    print(x_shape, y_shape)
    factor = x_shape / y_shape
    loss = tf.sqrt(factor) * tf.norm(x - y)
    return tf.math.abs(loss)
This is the model architecture:
model = Sequential()
model.add(Dense(32,input_dim=4))
model.add(Dense(64,activation='relu'))
model.add(Dense(128,activation='relu'))
model.add(Dense(64,activation='relu'))
model.add(Dense(2,activation='relu'))
opt = Adam(lr = 0.001)
model.compile(optimizer = opt, loss=distance_loss,metrics=['accuracy'])
When I run the model.compile line, the custom loss prints
None 2
and throws an error
TypeError: unsupported operand type(s) for /: 'NoneType' and 'int'
I read that the shape of the training data is only known during the training phase. Is there any way to work around this issue?
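As a quick illustration of where the None comes from (a minimal sketch, assuming TensorFlow 2.x): K.int_shape reports the static shape known at graph-construction time, which contains None for any dimension that is not yet fixed, while K.shape returns a tensor that is resolved to concrete values at run time.

```python
import tensorflow as tf
from tensorflow.keras import backend as K

# A symbolic Keras input: the batch dimension is unknown until real data
# arrives, so the static shape reported by K.int_shape contains None.
inp = tf.keras.Input(shape=(4,))
print(K.int_shape(inp))  # (None, 4)

# K.shape instead returns a tensor evaluated at run time, so every
# dimension has a concrete value once an actual tensor flows through.
x = tf.ones((3, 4))
print(K.shape(x).numpy())  # [3 4]
```

Inside a loss function, y_true is such a symbolic placeholder while the graph is being built, which is why K.int_shape(x)[1] can be None there.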
Upvotes: 1
Views: 76
Reputation: 59731
Use K.shape instead:
def distance_loss(x, y):
    x_shape = K.shape(x)[1]
    y_shape = K.shape(y)[1]
    factor = K.cast(x_shape, x.dtype) / K.cast(y_shape, y.dtype)
    loss = tf.sqrt(factor) * tf.norm(x - y)
    return tf.math.abs(loss)
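For completeness, a minimal end-to-end sketch of the fixed loss (assuming TensorFlow 2.x, a trimmed-down version of the model, and random dummy data; note that recent Adam versions spell the argument learning_rate rather than lr):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import backend as K

def distance_loss(x, y):
    # K.shape yields dynamic (runtime) shapes, so no None appears here.
    x_shape = K.shape(x)[1]
    y_shape = K.shape(y)[1]
    factor = K.cast(x_shape, x.dtype) / K.cast(y_shape, y.dtype)
    loss = tf.sqrt(factor) * tf.norm(x - y)
    return tf.math.abs(loss)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, input_dim=4),
    tf.keras.layers.Dense(2, activation='relu'),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss=distance_loss)

# Train one epoch on random data; compile no longer raises the TypeError.
history = model.fit(np.random.rand(16, 4).astype('float32'),
                    np.random.rand(16, 2).astype('float32'),
                    epochs=1, verbose=0)
print(history.history['loss'][0])
```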
Upvotes: 1