hundekopfjaeger

Reputation: 21

Difference in scalar product with matlab and python

I have a problem. I have two arrays, each of size 82248x20, and if I do the following in MATLAB

A=X'*Y

it gives me 6.152847328855238e-18 for the second value. If I do it in Python with any of the following

import numpy as np
import scipy.io

test = scipy.io.loadmat('wohin.mat')
X = test['X']
Y = test['Y']
A = np.transpose(X) @ Y
A = np.dot(np.transpose(X), Y)
A = np.matmul(np.transpose(X), Y)

I get the value 1.9233746539892849e-16 for the second value, and if I do the calculation with

t = 0
for i in range(82248):
    t = t + np.transpose(Y)[0, i] * X[i, 1]

I get 3.3664996263355106e-15 for the second value of row one. So where is my misunderstanding, or what is the difference between the three methods? The last one perhaps has some rounding errors, but shouldn't the other two give me the same result?

Mat file with the matrices is here

Upvotes: 2

Views: 76

Answers (1)

jodag

Reputation: 22204

The two matrices X and Y are identical, and their columns form what appears to be an orthonormal basis. Therefore you should expect transpose(X)*Y to be (approximately) an identity matrix: all the off-diagonal elements should be zero, and they differ from zero only due to rounding errors.

That said, the differences you observe simply mean that the various implementations of matrix multiplication differ in details such as the order in which partial sums are accumulated, and that order changes the rounding error in the final result.

Example (MATLAB):

>> sum(X(:,1).*Y(:,2))
ans =
   3.366499626335511e-15
>> sum(flipud(X(:,1)).*flipud(Y(:,2)))
ans =
   3.366880519846534e-15

In this example we manually take the inner product of two long orthogonal vectors. Mathematically, flipping both vectors shouldn't change the result; numerically, the summation order changes, so we get slightly different values.
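The same effect can be reproduced in NumPy. This is just a sketch: it does not load the original wohin.mat, but instead builds a stand-in pair of orthonormal columns via a QR decomposition, so the exact values will differ from yours.

```python
import numpy as np

# Stand-in data: columns of Q from a QR decomposition are orthonormal,
# mimicking the structure of the columns of X and Y in the question.
rng = np.random.default_rng(0)
M = rng.standard_normal((82248, 2))
Q, _ = np.linalg.qr(M)
x, y = Q[:, 0], Q[:, 1]

# The same inner product computed three ways:
s_forward = np.sum(x * y)                # summed in index order
s_reversed = np.sum(x[::-1] * y[::-1])   # same terms, reversed order
s_blas = float(x @ y)                    # BLAS dot product (its own order)

# All three are zero up to rounding, but generally not bit-identical,
# because each accumulates the partial sums in a different order.
print(s_forward, s_reversed, s_blas)
```

All three values are tiny (machine-precision scale), which is the answer's point: none of the methods is "wrong"; they just round differently.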

Upvotes: 1

Related Questions