I need the tensor mode-n product. The definition of the tensor mode-n product can be seen here: https://www.alexejgossmann.com/tensor_decomposition_tucker/
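For reference, the definition at that link, in the usual notation as I understand it, is

$$(\mathcal{X} \times_n M)_{i_1 \cdots i_{n-1}\, j\, i_{n+1} \cdots i_N} = \sum_{i_n = 1}^{I_n} x_{i_1 i_2 \cdots i_N}\, m_{j i_n}$$

where X is an I_1 × ⋯ × I_N tensor and M is a J × I_n matrix, so the result replaces the mode-n dimension I_n with J.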
I found Python code, and I would like to convert it into Julia.
import numpy as np

def mode_n_product(x, m, mode):
    x = np.asarray(x)
    m = np.asarray(m)
    if mode <= 0 or mode % 1 != 0:
        raise ValueError('`mode` must be a positive integer')
    if x.ndim < mode:
        raise ValueError('Invalid shape of X for mode = {}: {}'.format(mode, x.shape))
    if m.ndim != 2:
        raise ValueError('Invalid shape of M: {}'.format(m.shape))
    # Move the mode-th axis last, contract it against the columns of m,
    # then move the new axis back to position mode - 1.
    return np.swapaxes(np.swapaxes(x, mode - 1, -1).dot(m.T), mode - 1, -1)
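Here is my rough attempt at a literal port, in case that helps; it is untested and uses only Base Julia, mirroring the permute-multiply-permute approach above:

function mode_n_product(x::AbstractArray, m::AbstractMatrix, mode::Integer)
    mode >= 1 || throw(ArgumentError("`mode` must be a positive integer"))
    ndims(x) >= mode || throw(ArgumentError("invalid shape of x for mode = $mode: $(size(x))"))
    size(m, 2) == size(x, mode) || throw(DimensionMismatch("size(m, 2) must equal size(x, $mode)"))
    # Swap the mode-th dimension to the end, mirroring np.swapaxes(x, mode - 1, -1).
    perm = collect(1:ndims(x))
    perm[mode], perm[end] = perm[end], perm[mode]
    xp = permutedims(x, perm)
    # Contract the (now last) dimension with the columns of m via one matrix multiply.
    sz = size(xp)
    y = reshape(reshape(xp, :, sz[end]) * transpose(m), sz[1:end-1]..., size(m, 1))
    # Swap back; perm is its own inverse, since it is a single transposition.
    return permutedims(y, perm)
end

With X = rand(5, 4, 3) and A = rand(2, 5), mode_n_product(X, A, 1) should return a 2×4×3 array.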
I have found another answer using TensorToolbox.jl:

using TensorToolbox
X = rand(5, 4, 3);
A = rand(2, 5);
ttm(X, A, 1)  # multiply X by the matrix A along mode 1; size(A, 2) must equal size(X, 1)
One way is to use TensorOperations:
using TensorOperations
@tensor y[i1, i2, i3, out, i5] := x[i1, i2, i3, s, i5] * a[out, s]
This is literally the formula given at your link to define the mode-n product, except that I changed the name of the summed index to s; you can use any index names you like, they are just markers. The sum over s is implicit, because s does not appear on the left.
There is nothing very special about putting the index out back in the same place. Like your Python code, @tensor permutes the dimensions of x in order to use ordinary matrix multiplication, and then permutes again to give y the requested order. The fewer permutations needed, the faster this will be.
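For the 3-dimensional array in your question, the mode-1 version of the same pattern would be (a quick sketch, assuming you want to contract the first index of x with the second index of a):

using TensorOperations
x = rand(5, 4, 3);
a = rand(2, 5);
@tensor y[out, i2, i3] := x[s, i2, i3] * a[out, s];
size(y)  # (2, 4, 3)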
Alternatively, you can try using LoopVectorization, Tullio; @tullio y[i1, i2, ... with the same notation. Instead of permuting in order to call a library matrix multiplication function, this writes a pure-Julia version which works with the array as it arrives.
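Spelled out for the same five-index example, that would read something like the sketch below (LoopVectorization is an optional extra that speeds up the loops Tullio generates; Tullio also works without it):

using Tullio
@tullio y[i1, i2, i3, out, i5] := x[i1, i2, i3, s, i5] * a[out, s]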