Reputation: 4679
I have 2 tensors
a = torch.tensor([1,2])
b = torch.tensor([[[10,20],
[30,40]],
[[1,2],
[3,4]]])
and I would like to combine them in such a way that
a ? b = tensor([[[10,20],
[30,40]],
[[ 2, 4],
[ 6, 8]]])
(and then sum over the 0th dimension, in the end I want to do a weighted sum)
I've tried:
""" no idea how to interpret that """
a @ b
tensor([[ 70, 100],
[ 7, 10]])
b @ a
tensor([[ 50, 110],
[ 5, 11]])
for i in range(b.size()[0]): # works but I don't think this will work with autograd
b[i] *= a[i]
a * b # multiplies right side by 2
tensor([[[10, 40],
[30, 80]],
[[ 1, 4],
[ 3, 8]]])
a.unsqueeze(1) * b # multiplies bottom side by 2
tensor([[[10, 20],
[60, 80]],
[[ 1, 2],
[ 6, 8]]])
a.unsqueeze(2) * b # dimension out of range
Upvotes: 0
Views: 779
Reputation: 8699
You can also try the code below:
a = torch.tensor([1,2])
b = torch.tensor([[[10,20],
[30,40]],
[[1,2],
[3,4]]])
print((a.view(-1, 1)*torch.flatten(b, 1)).view(b.shape))
output:
tensor([[[10, 20],
[30, 40]],
[[ 2, 4],
[ 6, 8]]])
Here, we are basically carrying out the below steps:

1. Use view(-1, 1) to reshape a into a 2d tensor of size [a.shape[0], 1], i.e. [2, 1] in the above case.
2. Use torch.flatten() to flatten tensor b starting from the first dimension (i.e. start_dim=1; end_dim=-1 by default). The resultant size is [2, 4].
3. Multiply the two and view the result back into the shape of b, i.e. [2, 2, 2].

Upvotes: 1
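Since the question ultimately wants to sum over the 0th dimension for a weighted sum, a sketch of the full pipeline built on the flatten trick above (same a and b as in the question; the variable names scaled and weighted are just for illustration):

```python
import torch

a = torch.tensor([1, 2])
b = torch.tensor([[[10, 20],
                   [30, 40]],
                  [[1, 2],
                   [3, 4]]])

# Scale each 2x2 slice of b by the matching entry of a:
# a becomes [2, 1], flattened b is [2, 4], the product broadcasts
# row-wise, and view() restores the original [2, 2, 2] shape.
scaled = (a.view(-1, 1) * torch.flatten(b, 1)).view(b.shape)

# The weighted sum the question asks for: sum over dim 0.
weighted = scaled.sum(dim=0)
print(weighted)  # tensor([[12, 24], [36, 48]])
```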
Reputation: 177
Interesting -- I tried a few different broadcasting tricks and didn't see any obvious wins, so here's the simple version:
b[0] *= a[0]
b[1] *= a[1]
c = b
Upvotes: 0
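For what it's worth, an out-of-place broadcast seems to do the same job without the in-place writes the question worried about under autograd -- a sketch, assuming float tensors so gradients can flow:

```python
import torch

a = torch.tensor([1., 2.], requires_grad=True)
b = torch.tensor([[[10., 20.],
                   [30., 40.]],
                  [[1., 2.],
                   [3., 4.]]])

# Out-of-place equivalent of b[i] *= a[i]: add two trailing
# singleton dims to a so it broadcasts over each 2x2 slice of b.
c = a[:, None, None] * b

# Gradients flow through c, unlike the in-place version.
c.sum().backward()
print(a.grad)  # tensor([100., 10.])
```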