Reputation: 37
I'm currently trying to implement my own loss function.
I have three tensors.
A [batch, row, col, keypoints] # Actual Values
B [batch, row, col, keypoints] # Predicted Values
C [batch, keypoints_mask] # Mask
keypoints_mask is either 1 or 0. I want to multiply the last dimension of A and B element-wise by the corresponding mask value in C.
E.g. something like this:
A [5, 100, 100, 10]
B [5, 100, 100, 10]
C [5, 10]
For every batch index b:
A[b, ..., 0] = A[b, ..., 0] * C[b, 0]
A[b, ..., 1] = A[b, ..., 1] * C[b, 1]
...
B[b, ..., 0] = B[b, ..., 0] * C[b, 0]
B[b, ..., 1] = B[b, ..., 1] * C[b, 1]
...
Loss = Mean_Squared_Error(A, B)
What would be the best approach to implement this?
Edit:
The data is an image, where for every pixel I have 10 values.
Pseudo code:
for b in range(batch):
    for r in range(row):
        for c in range(col):
            for i in range(keypoints):
                A[b, r, c, i] = A[b, r, c, i] * C[b, i]
                B[b, r, c, i] = B[b, r, c, i] * C[b, i]
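To make the intent concrete, a vectorized version of the loop above could use broadcasting: reshape C from [batch, keypoints] to [batch, 1, 1, keypoints] so it multiplies every pixel. A minimal sketch, assuming TensorFlow 2.x (masked_mse and the argument names are just placeholders):

import tensorflow as tf

def masked_mse(y_true, y_pred, mask):
    # y_true, y_pred: [batch, row, col, keypoints]
    # mask:           [batch, keypoints], values 0 or 1
    mask = tf.cast(mask, y_pred.dtype)[:, tf.newaxis, tf.newaxis, :]  # -> [batch, 1, 1, keypoints]
    y_true = y_true * mask   # broadcasts over row and col
    y_pred = y_pred * mask
    return tf.reduce_mean(tf.square(y_true - y_pred))

Note that masked-out keypoints still contribute zeros to the mean, so the scale of the loss depends on how many keypoints are masked.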
Upvotes: 0
Views: 260
Reputation: 37
This is what I ended up doing and it seems to work for now.
A [5, 100, 100, 10] # Actual
B [5, 100, 100, 10] # Predicted
C [5, 10] # Mask
Loss = A - B
Loss = Loss * Loss                    # element-wise squared error
Loss = tf.reduce_mean(Loss, [1, 2])   # average over rows/cols: [5, 100, 100, 10] -> [5, 10]
Loss = Loss * C                       # zero out masked keypoints
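If a single scalar is needed for training, the per-keypoint losses above can be reduced further. A minimal sketch; averaging over only the active keypoints is an assumption, not part of the snippet above:

import tensorflow as tf

def masked_keypoint_mse(A, B, C):
    # A, B: [batch, row, col, keypoints]; C: [batch, keypoints] mask of 0/1
    C = tf.cast(C, A.dtype)
    loss = tf.square(A - B)
    loss = tf.reduce_mean(loss, axis=[1, 2])   # -> [batch, keypoints]
    loss = loss * C                            # zero out masked keypoints
    # Average over active keypoints only; tf.maximum guards against an all-zero mask
    return tf.reduce_sum(loss) / tf.maximum(tf.reduce_sum(C), 1.0)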
Upvotes: 1