Reputation: 713
I want to compute a loss function which uses the output of the network twice, on different inputs. For example, hypothetically:
first_output = model(first_input)
second_output = model(second_input)
loss = mean_absolute_error(first_output, second_output)
How can I achieve this in TensorFlow or Keras?
Update: Thank you for the replies. I want to reimplement this paper in Keras or TensorFlow. As explained there, the "critic" network (the discriminator in the GAN) has two inputs, runs through them one by one, computes a loss function depending on the outputs, and then computes the gradient. The main problem is: how can this be done in TensorFlow or Keras?
Upvotes: 2
Views: 546
Reputation: 12938
You could try using keras.layers.merge. I have used this before to build Siamese networks, with something like:
first_output = model(first_input)
second_output = model(second_input)
mae = lambda x: mean_absolute_error(x[0], x[1])
distance = merge(inputs=[first_output, second_output],
                 mode=mae,
                 output_shape=lambda x: x[0],
                 name='mean_absolute_error')
For the Siamese network example, you would then typically make some prediction on this distance measure, with something like:
prediction = Dense(2, activation='softmax', name='prediction')(distance)
model = Model([first_input, second_input], prediction, name='siamese_net')
model.compile(optimizer=Adam(),
              loss=some_loss_function)
using keras.models.Model and keras.layers.Dense for that example.
Note that keras.layers.merge is (I believe) deprecated in the latest versions of Keras, which is really a shame. I think to do something similar with the most modern Keras you would need to use keras.layers.Concatenate to combine the two results, followed by keras.layers.Lambda to apply the function.
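A minimal sketch of that modern approach, assuming tf.keras: in fact a Lambda layer can accept a list of tensors directly, so the Concatenate step can be skipped. The small Dense network here is a hypothetical stand-in for the real critic, and the input shape is made up for illustration.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

# Shared base network (hypothetical stand-in for the actual critic).
inp = layers.Input(shape=(4,))
x = layers.Dense(8, activation='relu')(inp)
x = layers.Dense(1)(x)
base = Model(inp, x, name='critic')

# Two inputs run through the SAME network, so the weights are shared.
first_input = layers.Input(shape=(4,))
second_input = layers.Input(shape=(4,))
first_output = base(first_input)
second_output = base(second_input)

# Lambda layer computing the mean absolute error between the two outputs.
distance = layers.Lambda(
    lambda t: tf.reduce_mean(tf.abs(t[0] - t[1]), axis=-1, keepdims=True),
    name='mean_absolute_error')([first_output, second_output])

model = Model([first_input, second_input], distance, name='siamese_net')

# Quick check: identical inputs through shared weights give distance 0.
batch = np.random.rand(3, 4).astype('float32')
out = model.predict([batch, batch])
```

Since both branches share the same weights, feeding the same batch to both inputs gives a distance of exactly zero, which is a handy sanity check that the weights really are shared.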
Upvotes: 1