Reputation: 1063
Suppose you have n square matrices A1, ..., An. Is there any way to multiply these matrices in a neat way? As far as I know, dot in numpy accepts only two arguments. One obvious way is to define a function that calls itself recursively to accumulate the result. Is there a better way to get it done?
Upvotes: 59
Views: 97271
Reputation: 27
This works for two matrices (run from VS Code):

import numpy as np

def matrix_multiply(A, B):
    print(f"Matrix A:\n{A}\n")  # Print the matrix contents
    print(f"Matrix B:\n{B}\n")
    if A.shape[1] == B.shape[0]:  # Check that the matrices can be multiplied
        C = np.matmul(A, B)  # Use matmul to multiply the matrices
        return C  # Return the resulting matrix
    else:
        return "Sorry, cannot multiply A and B."  # Error catching

# Use numpy to generate a test dataset
np.random.seed(27)
A = np.random.randint(1, 10, size=(5, 4))
B = np.random.randint(1, 10, size=(4, 2))

# Testing the function
result = matrix_multiply(A, B)  # Call matrix_multiply to find the answer
print(result)

# References:
# https://geekflare.com/multiply-matrices-in-python/#geekflare-toc-use-python-nested-list-comprehension-to-multiply-matrices
# https://www.anaconda.com/download
Upvotes: 0
Reputation: 17847
Resurrecting an old question with an update:
As of November 13, 2014 there is now a np.linalg.multi_dot function which does exactly what you want. It also has the benefit of optimizing the call order, though that isn't necessary in your case: since all your matrices are square, every parenthesization costs the same.
Note that this is available starting with numpy version 1.10.
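A minimal sketch of the call (the matrix names and shapes here are made up for illustration; the mismatched inner dimensions are just to show where the order optimization pays off):

```python
import numpy as np

# A chain of conformable matrices with different shapes
A = np.random.rand(10, 100)
B = np.random.rand(100, 5)
C = np.random.rand(5, 50)

# multi_dot takes the whole chain at once and picks
# the cheapest parenthesization automatically
result = np.linalg.multi_dot([A, B, C])

# Equivalent (up to floating-point rounding) to chained dot calls
expected = A.dot(B).dot(C)
print(np.allclose(result, expected))  # True
```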
Upvotes: 54
Reputation: 4434
Another way to achieve this would be using einsum, which implements the Einstein summation convention for NumPy.
To very briefly explain this convention with respect to this problem: when you write down your multiple matrix product as one big sum of products, you get something like:
P_im = sum_j sum_k sum_l A1_ij A2_jk A3_kl A4_lm
where P is the result of your product and A1, A2, A3, and A4 are the input matrices. Note that you sum over exactly those indices that appear twice in the summand, namely j, k, and l. As a sum with this property often appears in physics, vector calculus, and probably some other fields, there is a NumPy tool for it, namely einsum.
In the above example, you can use it to calculate your matrix product as follows:
P = np.einsum("ij,jk,kl,lm", A1, A2, A3, A4)
Here, the first argument tells the function which indices to apply to the argument matrices and then all doubly appearing indices are summed over, yielding the desired result.
Note that the computational efficiency depends on several factors, so you are probably best off just testing it against the alternatives.
Upvotes: 4
Reputation: 25833
This might be a relatively recent feature, but I like:
A.dot(B).dot(C)
or if you had a long chain you could do (in Python 3, import reduce from functools first):
reduce(numpy.dot, [A1, A2, ..., An])
Update:
There is more info about reduce here. Here is an example that might help.
>>> from functools import reduce  # needed on Python 3
>>> A = [np.random.random((5, 5)) for i in range(4)]
>>> product1 = A[0].dot(A[1]).dot(A[2]).dot(A[3])
>>> product2 = reduce(np.dot, A)
>>> numpy.all(product1 == product2)
True
Update 2016:
As of Python 3.5, there is a new matrix multiplication operator, @ (PEP 465):
R = A @ B @ C
Upvotes: 89
Reputation: 6797
from functools import reduce  # needed on Python 3
import numpy as np

A_list = [np.random.randn(100, 100) for i in range(10)]

# Left-fold by hand, starting from the identity
B = np.eye(A_list[0].shape[0])
for A in A_list:
    B = np.dot(B, A)

# Same left-fold via reduce
C = reduce(np.dot, A_list)

# assert(B == C) would fail on arrays; compare elementwise instead
assert np.allclose(B, C)
Upvotes: 2
Reputation: 10375
If all the matrices are known a priori, then you should use an optimization scheme for matrix chain multiplication. See this Wikipedia article.
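For illustration, here is a minimal sketch of the classic O(n^3) dynamic program for that problem (the function name and dimension list are made up for this example; np.linalg.multi_dot applies the same idea for you):

```python
def matrix_chain_cost(dims):
    """Minimum number of scalar multiplications needed to compute a
    chain of matrices whose shapes are dims[0] x dims[1],
    dims[1] x dims[2], and so on (classic matrix chain ordering DP)."""
    n = len(dims) - 1  # number of matrices in the chain
    # cost[i][j] = cheapest way to multiply matrices i..j inclusive
    cost = [[0] * n for _ in range(n)]
    for length in range(2, n + 1):          # length of the sub-chain
        for i in range(n - length + 1):
            j = i + length - 1
            # Try every split point k between i and j
            cost[i][j] = min(
                cost[i][k] + cost[k + 1][j]
                + dims[i] * dims[k + 1] * dims[j + 1]
                for k in range(i, j)
            )
    return cost[0][n - 1]

# Shapes 10x30, 30x5, 5x60: (AB)C costs 4500, A(BC) costs 27000
print(matrix_chain_cost([10, 30, 5, 60]))  # 4500
```

The table can also be extended to record the best split point k, which recovers the optimal parenthesization itself rather than just its cost.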
Upvotes: 5