Kirk Broadhurst

Reputation: 28718

Can I calculate element-wise product using an np.matrix?

I know I can do matrix multiplication using numpy arrays by using the .dot syntax. The regular * multiplication does element-wise multiplication.

import numpy as np

a = np.array([[1, 2], [3, 4]])
print('matrix multiplication', a.dot(a))
print('element-wise multiplication', a * a)

> matrix multiplication [[ 7 10]  [15 22]] 
> element-wise multiplication [[ 1  4]  [ 9 16]] 

This works fine, but it's the opposite of all matrix operations I've ever learnt (i.e. the "dot product" is typically element-wise, and the regular product is typically a full matrix multiplication.)

So I'm investigating np.matrix. The nice thing is that matrix multiplication uses the * operator, but I'm struggling to understand how to do element-wise multiplication.

m = np.matrix(a)
print('matrix multiplication', m * m)
print('more matrix multiplication?', m.dot(m))

> matrix multiplication [[ 7 10]  [15 22]] 
> more matrix multiplication?  [[ 7 10]  [15 22]]

I understand what's happening - there is no separate .dot implementation for np.matrix, so it falls through to the base np.array behaviour. But does this mean that there's no way to calculate an element-wise product using np.matrix?

Is this just another argument for avoiding np.matrix and instead sticking with np.array?

Upvotes: 0

Views: 2307

Answers (2)

hpaulj

Reputation: 231385

np.dot is consistent with the dot product (also called the scalar product) of vectors

In [125]: np.arange(10).dot(np.arange(1,11))
Out[125]: 330

But np.dot is generalized to work with 2 (and higher) dimensional arrays.
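
As a quick illustration (my own example, not from the answer), np.dot also accepts mixed 1-d/2-d inputs, summing over the last axis of the 2-d array:

>>> import numpy as np
>>> a = np.array([[1, 2], [3, 4]])
>>> np.dot(a, np.array([1, 2]))
array([ 5, 11])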

MATLAB was built, from the start, on 2d matrices, and the matrix product was seen as the most common and basic multiplication. So the .* notation was used for element-wise multiplication. The same . prefix appears in other element-wise operators such as ./ and .^.

The basic structure in numpy is the n-d array. Since such arrays can have 0, 1, 2 or more dimensions, the math operators were designed to work element-wise. np.dot was provided to handle the matrix product. There's a variation called np.tensordot, and np.einsum uses Einstein summation notation (popular in physics). The newer @ operator invokes the np.matmul function:

In [131]: a.dot(a)
Out[131]: 
array([[ 7, 10],
       [15, 22]])
In [134]: np.einsum('ij,jk->ik',a,a)
Out[134]: 
array([[ 7, 10],
       [15, 22]])
In [135]: a@a
Out[135]: 
array([[ 7, 10],
       [15, 22]])
In [136]: np.matmul(a,a)
Out[136]: 
array([[ 7, 10],
       [15, 22]])
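
The np.tensordot variation mentioned above gives the same result for 2-d arrays when told to contract one axis (a small check of my own; axes=1 sums the last axis of the first argument against the first axis of the second):

>>> np.tensordot(a, a, axes=1)
array([[ 7, 10],
       [15, 22]])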

np.matrix is an ndarray subclass, and was added to make numpy more familiar to wayward MATLAB users. Like old versions of MATLAB it can only be 2d, so the results of matrix calculations will always be 2d (or scalar). Its use is generally discouraged, though I'm sure it will be around for a long time. (I use the sparse matrix classes more than np.matrix.)
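
A short illustration of that always-2-d behaviour (my example, not hpaulj's), compared against the plain array a from the question:

>>> m = np.matrix([[1, 2], [3, 4]])
>>> m[0]              # indexing a row stays 2-d
matrix([[1, 2]])
>>> a[0]              # the ndarray equivalent drops to 1-d
array([1, 2])
>>> m.sum(axis=1)     # reductions also stay 2-d
matrix([[3],
        [7]])
>>> a.sum(axis=1)
array([3, 7])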

With the addition of the @ operator there's one less reason to use np.matrix.

In [149]: m=np.matrix(a)
In [150]: m*m
Out[150]: 
matrix([[ 7, 10],
        [15, 22]])
In [151]: m@m
Out[151]: 
matrix([[ 7, 10],
        [15, 22]])
In [152]: m*m*m
Out[152]: 
matrix([[ 37,  54],
        [ 81, 118]])
In [153]: a@a@a
Out[153]: 
array([[ 37,  54],
       [ 81, 118]])

Upvotes: 2

BrenBarn

Reputation: 251383

You can get elementwise multiplication with the multiply function:

>>> np.multiply(m, m)
matrix([[ 1,  4],
        [ 9, 16]])

The values are the same for np.multiply(a, a), although that returns a plain array rather than a matrix.
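
As a quick check of my own, using the array a from the question:

>>> np.multiply(a, a)
array([[ 1,  4],
       [ 9, 16]])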

The name dot is indeed somewhat misleading, but the documentation for np.dot clearly says: "For 2-D arrays it is equivalent to matrix multiplication". Strictly speaking, the dot product is not defined for matrices; the closest matrix analogue is the Frobenius inner product, which is the sum of the element-wise products.
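
If what you actually want is a scalar, dot-like product of two matrices, the Frobenius inner product is easy to compute by hand (a small sketch of my own, using the array a from the question, not part of the original answer):

>>> print((a * a).sum())        # sum of element-wise products
30
>>> print(np.trace(a.T @ a))    # equivalent trace formulation
30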

Upvotes: 2
