Reputation: 1936
I have a piece of code like this:
import numpy as np

a = Y[0]; b = Z[0]
print(a, b)
loss = 0
for i in range(len(a)):
    k = len(a) - i
    loss += (2**(k-1)) * np.abs(a[i] - b[i])
print(loss)
where Y and Z have dimensions 250 x 10, and each row is a 10-bit binary value. For example, print(a, b) prints this:
[1 0 0 0 0 0 0 0 1 0] [0 0 0 1 1 1 1 1 0 0]
Now I want to apply the two-line computation inside the for loop to each pair of corresponding rows of Y and Z, but I don't want to do something like this:
for j in range(Y.shape[0]):
    a = Y[j]; b = Z[j]
    loss = 0
    for i in range(len(a)):
        k = len(a) - i
        loss += (2**(k-1)) * np.abs(a[i] - b[i])
    print(loss)
I am essentially trying to write a custom loss function in Keras/TensorFlow, and that for-loop approach doesn't scale to large tensor operations. How do I do this with some sort of batched matrix operation instead of for loops?
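For reference, here is a minimal setup that reproduces the shapes (the random seed and generator choice are arbitrary):

import numpy as np

# Random 250 x 10 matrices of 0/1 bits, matching the data described above
rng = np.random.default_rng(0)
Y = rng.integers(0, 2, size=(250, 10))
Z = rng.integers(0, 2, size=(250, 10))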
Upvotes: 3
Views: 159
Reputation: 3722
You could do this:
# Per-bit weights 2**9, 2**8, ..., 2**0 (most significant bit first)
factor = 2**np.arange(Y.shape[1])[::-1]
# Broadcast the weights across all rows and sum along the bit axis
loss = np.sum(factor * np.abs(Y - Z), axis=-1)
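Since the question asks for a custom loss in Keras/TensorFlow, the same broadcasting pattern translates directly; here is a minimal sketch assuming TF 2.x and float tensors (bitwise_loss is an illustrative name, not an existing API):

import tensorflow as tf

def bitwise_loss(y_true, y_pred):
    # Per-bit weights 2**(n-1), ..., 2**0, most significant bit first
    n = tf.shape(y_true)[-1]
    factor = tf.pow(2.0, tf.cast(tf.range(n - 1, -1, -1), tf.float32))
    # Weighted absolute bit difference, summed along the last axis per sample
    return tf.reduce_sum(factor * tf.abs(y_true - y_pred), axis=-1)

Such a function can be passed directly as loss=bitwise_loss to model.compile.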
Upvotes: 1
Reputation: 1932
If only the inner loop needs to be vectorized with NumPy:
import numpy as np

for j in range(Y.shape[0]):
    a = Y[j]; b = Z[j]
    loss = 0
    """
    for i in range(len(a)):
        k = len(a) - i
        loss += (2**(k-1)) * np.abs(a[i] - b[i])
    """
    # Vector of k values: len(a), len(a)-1, ..., 1
    k = np.arange(len(a), 0, -1)
    loss = np.sum(np.multiply(2**(k-1), np.abs(a - b)))
    print(loss)
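Here np.arange(len(a), 0, -1) builds the whole vector of k values at once, so the weights 2**(k-1) and the weighted sum replace the inner Python loop with single array operations.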
EDIT
To push the outer loop into NumPy as well, use the following approach:
import numpy as np

# This function computes the loss for one concatenated row pair
def get_loss(row, sz):
    k = np.arange(sz, 0, -1)
    return np.sum(np.multiply(2**(k-1), np.abs(row[:sz] - row[sz:])))

# Sample input matrices
A = np.random.random((5, 10))
B = np.random.random((5, 10))

# Concatenate the input matrices so each row holds one (A, B) pair
AB = np.concatenate((A, B), axis=1)

# Apply the function to each row pair
result = np.apply_along_axis(get_loss, 1, AB, A.shape[1])

# result is a 1D array of the losses
print(result.shape)
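A quick sanity check that this matches the question's original double loop (loop_loss below is a hypothetical reference helper, not part of the answer):

# Reference implementation: the question's original inner loop, per row
def loop_loss(a, b):
    loss = 0
    for i in range(len(a)):
        k = len(a) - i
        loss += (2**(k-1)) * np.abs(a[i] - b[i])
    return loss

expected = np.array([loop_loss(A[j], B[j]) for j in range(A.shape[0])])
assert np.allclose(result, expected)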
Upvotes: 1