Reputation: 1333
I have code where I need to perform many matrix multiplications. The code is meant to work with 2D matrices of arbitrary dimension n, which in principle could be very large, making the program very slow. So far I have always used np.dot to carry out the multiplications, as in the following example:
def getV(csi, e, e2, k):
    ktrans = k.transpose()
    v = np.dot(csi, ktrans)
    v = np.dot(v, e)
    v = np.dot(v, k)
    v = np.dot(v, csi)
    v = np.dot(v, ktrans)
    e2trans = e2.transpose()
    v = np.dot(v, e2trans)
    v = np.dot(v, k)
    traceV = 2*v.trace()
    return traceV
where the output should be twice the trace of the product:
csi*ktrans*e*k*csi*ktrans*e2trans*k
(they are all matrices multiplied together). I am sure there is a faster way to compute such a long product, possibly in a single call. Can someone explain how? I have tried, but it seems that np.dot always takes just two matrices at a time.
Upvotes: 1
Views: 452
Reputation:
Because of the properties of the trace, this computation can be rewritten to use fewer multiplications. The trace is invariant under cyclic permutation, so trace(csi * k.T * e * k * csi * k.T * e2.T * k) = trace(k * csi * k.T * e * k * csi * k.T * e2.T), and trace(A * B.T) equals the sum of the elementwise product of A and B. Together these reduce the number of matrix multiplications from 7 to 4:
def getV(csi, k, e, e2):
    # temp = k @ csi @ k.T appears twice in the cyclically permuted product
    temp = k.dot(csi).dot(k.T)
    # trace(M @ e2.T) == (M * e2).sum(), so no final matrix product is needed
    trace_ = (temp.dot(e).dot(temp) * e2).sum()
    return 2 * trace_
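To sanity-check the rewrite, here is a quick comparison against the original chained np.dot version on random square matrices (the sizes and helper names here are just for illustration):

import numpy as np

def getV_original(csi, e, e2, k):
    # straightforward chain of np.dot calls: 7 matrix multiplications
    ktrans = k.transpose()
    v = np.dot(csi, ktrans)
    v = np.dot(v, e)
    v = np.dot(v, k)
    v = np.dot(v, csi)
    v = np.dot(v, ktrans)
    v = np.dot(v, e2.transpose())
    v = np.dot(v, k)
    return 2 * v.trace()

def getV_fast(csi, k, e, e2):
    # trace-based rewrite: 4 matrix multiplications
    temp = k.dot(csi).dot(k.T)
    return 2 * (temp.dot(e).dot(temp) * e2).sum()

n = 200
rng = np.random.default_rng(0)
csi, e, e2, k = (rng.standard_normal((n, n)) for _ in range(4))
print(np.allclose(getV_original(csi, e, e2, k), getV_fast(csi, k, e, e2)))  # True

Incidentally, np.linalg.multi_dot does accept the whole chain in one call and chooses an efficient evaluation order, but for same-sized square matrices it cannot beat the trace identity above, which removes multiplications outright.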
Depending on your current setup, you could also try installing a different BLAS library or computing this on the graphics card instead of the CPU.
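For example, np.show_config() tells you which BLAS/LAPACK NumPy was built against, and a GPU version could look roughly like the sketch below. This assumes CuPy is installed and a CUDA-capable GPU is available; it just reuses the same trace-based formula on device arrays.

import numpy as np
np.show_config()  # prints the BLAS/LAPACK libraries NumPy is linked against

import cupy as cp  # assumes the cupy package and a CUDA GPU

def getV_gpu(csi, k, e, e2):
    # move the arrays to the device and apply the same trace-based formula
    csi, k, e, e2 = (cp.asarray(a) for a in (csi, k, e, e2))
    temp = k.dot(csi).dot(k.T)
    return float(2 * (temp.dot(e).dot(temp) * e2).sum())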
Upvotes: 3