Reputation: 70
I want to implement a custom loss that is computed per sample. The calculation is somewhat involved and requires calling into an external Python file (or, equivalently, you can assume the inputs are passed to an opaque Python function).
How can I implement this?
Is it possible to use the @tf.function
decorator and have it compiled into a graph?
This is roughly how it is supposed to look:
import numpy as np

def loss(input, output):
    # Accumulate ||a.b + y|| over every sample in the batch.
    total = 0.0
    for x, y in zip(input, output):
        sim = Class(x)   # external simulation object from the other file
        a = sim.GetA()
        b = sim.GetB()
        total += np.linalg.norm(np.dot(a, b) + y)
    return total
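Note that @tf.function alone cannot trace arbitrary Python such as np.linalg.norm or an external class; one common workaround is tf.py_function, which embeds an eager Python callable as a single op in the graph. Below is a minimal sketch under that assumption, with a hypothetical Sim class standing in for the external code; be aware that gradients do not flow through the NumPy/external part.

import numpy as np
import tensorflow as tf

class Sim:
    # Hypothetical stand-in for the external class; replace with your own.
    def __init__(self, x):
        self.x = np.asarray(x)
    def GetA(self):
        return np.outer(self.x, self.x)   # placeholder computation
    def GetB(self):
        return self.x                     # placeholder computation

def loss_py(inputs, outputs):
    # Runs eagerly on concrete values, so any Python/NumPy code is allowed.
    total = 0.0
    for x, y in zip(inputs.numpy(), outputs.numpy()):
        sim = Sim(x)
        a = sim.GetA()
        b = sim.GetB()
        total += np.linalg.norm(np.dot(a, b) + y)
    return np.float32(total)

@tf.function
def loss(inputs, outputs):
    # tf.py_function wraps the eager callable as one graph op, so the
    # surrounding @tf.function graph still compiles. Gradients do not
    # propagate through the NumPy code inside loss_py.
    return tf.py_function(loss_py, inp=[inputs, outputs], Tout=tf.float32)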
Upvotes: 1
Views: 190
Reputation: 70
An implementation of the same loss was possible in PyTorch, since it supports dynamic computation graphs.
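For comparison, a minimal sketch of the eager PyTorch version, reusing the same hypothetical Sim class from the sketch above; autograd tracks only the torch operations, so here the gradient reaches y but not the external computation.

import torch

def loss(inputs, outputs):
    # Sim is the hypothetical stand-in class defined in the sketch above.
    total = torch.zeros(())
    for x, y in zip(inputs, outputs):
        # Arbitrary Python can run between torch ops in the dynamic graph.
        sim = Sim(x.detach().cpu().numpy())
        a = torch.as_tensor(sim.GetA(), dtype=y.dtype)
        b = torch.as_tensor(sim.GetB(), dtype=y.dtype)
        total = total + torch.linalg.norm(a @ b + y)
    return total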
Upvotes: 1