Reputation: 986
x is an N by M matrix.
y is a 1 by L vector.
I want to compute the "outer product" between x and y; let's call it z:
z[n,m,l] = x[n,m] * y[l]
I could probably do this using einsum:
np.einsum("ij,k->ijk", x, y.ravel())
or use np.outer and reshape afterwards:
np.outer(x, y).reshape((x.shape[0], x.shape[1], y.size))
But I'm wondering whether there is a way to do this with np.outer alone, or something simpler and more memory efficient.
Is there a way?
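For concreteness, here is a naive loop version of what I'm after (just a sketch; outer3 is a made-up name, and it assumes y can be raveled to 1d):

import numpy as np

def outer3(x, y):
    # Reference sketch of z[n,m,l] = x[n,m] * y[l].
    # outer3 is a hypothetical helper; y may be (1, L) or (L,).
    y = np.ravel(y)
    z = np.empty(x.shape + y.shape)
    for n in range(x.shape[0]):
        for m in range(x.shape[1]):
            for l in range(y.shape[0]):
                z[n, m, l] = x[n, m] * y[l]
    return z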
Upvotes: 1
Views: 812
Reputation: 231550
The code for outer is:
multiply(a.ravel()[:, newaxis], b.ravel()[newaxis, :], out)
As its docs say, it flattens its inputs (i.e. applies ravel). If the arrays are already 1d, that expression could be written as
a[:,None] * b[None,:]
a[:,None] * b    # broadcasting automatically prepends the None to b
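A quick check of that equivalence for 1d inputs (a and b here are just example arrays of my own):

import numpy as np

a = np.arange(3)
b = np.arange(4)
# np.outer and the explicit broadcast give the same (3, 4) result
np.array_equal(np.outer(a, b), a[:, None] * b)
# True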
We could apply broadcasting rules to your (n,m)*(1,l):
In [2]: x = np.arange(12).reshape(3,4); y = np.array([[1,2]])
In [3]: x.shape, y.shape
Out[3]: ((3, 4), (1, 2))
You want an (n,m,l) result, which an (n,m,1) * (1,1,l) broadcast achieves. We need to add a trailing dimension to x; the extra leading 1 on y is supplied automatically by broadcasting:
In [4]: z = x[...,None]*y
In [5]: z.shape
Out[5]: (3, 4, 2)
In [6]: z
Out[6]:
array([[[ 0,  0],
        [ 1,  2],
        [ 2,  4],
        [ 3,  6]],

       [[ 4,  8],
        [ 5, 10],
        [ 6, 12],
        [ 7, 14]],

       [[ 8, 16],
        [ 9, 18],
        [10, 20],
        [11, 22]]])
Using einsum:
In [8]: np.einsum('nm,kl->nml', x, y).shape
Out[8]: (3, 4, 2)
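Both give identical values, not just shapes; a quick check of my own with the same x and y:

np.array_equal(x[..., None] * y, np.einsum('nm,kl->nml', x, y))
# True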
The fact that the answer you accepted uses np.multiply.outer:
In [9]: np.multiply.outer(x,y).shape
Out[9]: (3, 4, 1, 2)
suggests your y isn't really (1,l) but rather (l,); with my (1,l) y it produces the extra unit dimension shown above. Adjusting for either is easy.
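For example (my own sketch, same x and y as above), raveling y yields the (n,m,l) shape either way:

np.multiply.outer(x, np.ravel(y)).shape
# (3, 4, 2)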
I don't think there's much difference in memory efficiency among these. In this small example, In [4] is fastest, but not by much.
Upvotes: 0
Reputation: 53089
It's one of those numpy "can't know unless you happen to know" bits: np.outer flattens multidimensional inputs, while np.multiply.outer doesn't:
import numpy as np

m, n, l = 3, 4, 5
x = np.arange(m*n).reshape(m, n)   # (3, 4)
y = np.arange(l)                   # (5,)
np.multiply.outer(x, y).shape
# (3, 4, 5)
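For contrast, np.outer on the same inputs flattens x first, so you would need the reshape the question mentioned; a quick sketch:

np.outer(x, y).shape
# (12, 5)   x was raveled to 1d first
np.outer(x, y).reshape(m, n, l).shape
# (3, 4, 5)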
Upvotes: 4