Reputation: 176
My goal is to find the closest segment (in an array of segments) to a single point. Taking the dot product between arrays of 2D coordinates works, but using 3D coordinates gives the following error:
*ValueError: matmul: Input operand 1 has a mismatch in its core dimension 0, with gufunc signature (n?,k),(k,m?)->(n?,m?) (size 2 is different from 3)*
A = np.array([[1,1,1],[2,2,2]])
B = np.array([[3,3,3], [4,4,4]])
dp = np.dot(A,B)
dp
should return 2 values: the dot product of [1,1,1]@[3,3,3] and of [2,2,2]@[4,4,4].
Edit: Thanks everyone.
Here is the final solution to find the closest line segment to a single point.
Any optimization is welcome.
import numpy as np
import time

# find the closest segment to a single point
then = time.time()

# random line segments (start and end points)
l1 = np.random.rand(1000000, 3)*10
l2 = np.random.rand(1000000, 3)*10

# single point
p = np.array([5, 5, 5])

# shift each segment so l1 is at the origin
line = l2 - l1
pv = p - l1

# squared length of each segment
len_sq = np.sum(line**2, axis=1)  # or: np.einsum('ij,ij->i', line, line)

# row-wise dot product of 3D vectors with einsum
dot = np.einsum('ij,ij->i', line, pv)  # or: np.sum(line*pv, axis=1)

# fraction of the segment at which the projection falls
param = dot / len_sq

# param < 0: projected point = l1; param > 1: projected point = l2
clamped_param = np.clip(param, 0, 1)

# move that fraction along each segment from l1 to get the projected point
pp = l1 + clamped_param[:, None]*line

# distance vector between the single point and each projected point
pp_p = pp - p

# index of the smallest squared distance between point and projected point
index_of_minimum_dist = np.sum(pp_p**2, axis=1).argmin()
print(index_of_minimum_dist)
print("FINISHED IN:", time.time() - then)
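A small sanity check (my own addition, not part of the original post): the vectorized projection can be compared against a straightforward per-segment loop on a reduced input.

```python
import numpy as np

rng = np.random.default_rng(0)
l1 = rng.random((100, 3)) * 10
l2 = rng.random((100, 3)) * 10
p = np.array([5.0, 5.0, 5.0])

# vectorized version (same steps as the solution above)
line = l2 - l1
pv = p - l1
len_sq = np.einsum('ij,ij->i', line, line)
dot = np.einsum('ij,ij->i', line, pv)
param = np.clip(dot / len_sq, 0, 1)
pp = l1 + param[:, None] * line
best = np.sum((pp - p)**2, axis=1).argmin()

# reference: clamped projection computed one segment at a time
def closest_dist_sq(a, b, point):
    d = b - a
    t = np.clip(np.dot(d, point - a) / np.dot(d, d), 0, 1)
    return np.sum((a + t * d - point)**2)

ref = min(range(len(l1)), key=lambda i: closest_dist_sq(l1[i], l2[i], p))
assert best == ref
```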
Upvotes: 2
Views: 5382
Reputation: 231665
In [265]: A = np.array([[1,1,1],[2,2,2]])
...: B = np.array([[3,3,3], [4,4,4]])
Element-wise multiplication followed by a sum works fine:
In [266]: np.sum(A*B, axis=1)
Out[266]: array([ 9, 24])
einsum
also makes expressing this easy:
In [267]: np.einsum('ij,ij->i',A,B)
Out[267]: array([ 9, 24])
dot
with 2d arrays (here (2,3) shaped) performs matrix multiplication, the classic across-rows, down-columns product. In einsum
notation this is 'ij,jk->ik'.
In [268]: np.dot(A,B)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-268-189f80e2c351> in <module>
----> 1 np.dot(A,B)
<__array_function__ internals> in dot(*args, **kwargs)
ValueError: shapes (2,3) and (2,3) not aligned: 3 (dim 1) != 2 (dim 0)
With a transpose, the dimensions match, (2,3) with (3,2), but the result is (2,2):
In [269]: np.dot(A,B.T)
Out[269]:
array([[ 9, 12],
[18, 24]])
The desired values are on the diagonal.
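For illustration, those diagonal values can be pulled out directly, though this computes all n² pairwise products just to keep n of them, so it is wasteful for large arrays:

```python
import numpy as np

A = np.array([[1, 1, 1], [2, 2, 2]])
B = np.array([[3, 3, 3], [4, 4, 4]])

# full (2,2) matrix product, then keep only the diagonal
diag = np.diag(A @ B.T)
print(diag)  # [ 9 24]
```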
One way to think of the problem is that we want a batch of 1d products. matmul/@
was added to perform batch matrix multiplication (which dot
can't do). But the arrays have to be expanded to 3d, so the batch dimension is the leading one (and the 3 falls on the last and second-to-last dimensions respectively):
In [270]: A[:,None,:]@B[:,:,None] # (2,1,3) with (2,3,1)
Out[270]:
array([[[ 9]],
[[24]]])
But the result is (2,1,1) shaped. The right numbers are there, but we have to squeeze out the extra dimensions.
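Removing those extra dimensions is one squeeze call; a minimal sketch:

```python
import numpy as np

A = np.array([[1, 1, 1], [2, 2, 2]])
B = np.array([[3, 3, 3], [4, 4, 4]])

# (2,1,3) @ (2,3,1) -> (2,1,1), then squeeze down to (2,)
out = (A[:, None, :] @ B[:, :, None]).squeeze()
print(out)  # [ 9 24]
```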
Overall, then, the first two solutions are simplest: the element-wise product plus sum, or its einsum
equivalent.
Upvotes: 1
Reputation: 12417
Do you mean this:
np.einsum('ij,ij->i',A,B)
output:
[ 9 24]
However, if you want the dot product of every row in A with every row in B, you should do:
A@B.T
output:
[[ 9 12]
[18 24]]
Upvotes: 2
Reputation: 685
np.dot computes a plain dot product only for 1d vectors. When passed 2d arrays it attempts matrix multiplication, which fails here because the shapes (2,3) and (2,3) are not aligned.
On a vector it will work like you expected:
np.dot(A[0,:],B[0,:])
np.dot(A[1,:],B[1,:])
To do it in one go:
np.sum(A*B,axis=1)
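A quick self-contained check (my own sketch, not from the answer) that the one-shot version matches the per-row np.dot calls:

```python
import numpy as np

A = np.array([[1, 1, 1], [2, 2, 2]])
B = np.array([[3, 3, 3], [4, 4, 4]])

# row-wise dot products in one vectorized expression
rowwise = np.sum(A * B, axis=1)
assert rowwise[0] == np.dot(A[0, :], B[0, :])  # 9
assert rowwise[1] == np.dot(A[1, :], B[1, :])  # 24
```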
Upvotes: 5
Reputation: 334
The dot product in numpy is not designed to be applied row-wise across 2d arrays. It's easy enough to write a small wrapper around it, for example:
def array_dot(A, B):
    return [A[i]@B[i] for i in range(A.shape[0])]
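A self-contained usage sketch (the wrapper is repeated so the snippet runs on its own; note it returns a Python list of NumPy scalars, and the explicit loop will be slow for large arrays):

```python
import numpy as np

def array_dot(A, B):
    # row-by-row dot product via an explicit Python loop
    return [A[i] @ B[i] for i in range(A.shape[0])]

A = np.array([[1, 1, 1], [2, 2, 2]])
B = np.array([[3, 3, 3], [4, 4, 4]])

result = array_dot(A, B)
assert [int(x) for x in result] == [9, 24]
```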
Upvotes: 1