JDiMatteo

Reputation: 13150

Python Optimized Most Cosine Similar Vector

I have about 30,000 vectors and each vector has about 300 elements.

For another vector (with same number elements), how can I efficiently find the most (cosine) similar vector?

This following is one implementation using a python loop:

from time import time
import numpy as np

vectors = np.load("np_array_of_about_30000_vectors.npy")
target = np.load("single_vector.npy")
print vectors.shape, vectors.dtype  # (35196, 312) float32
print target.shape, target.dtype  # (312,) float32

start_time = time()
max_similarity = -1.0  # initialize; cosine similarity is bounded below by -1
for i, candidate in enumerate(vectors):
    similarity = np.dot(candidate, target)/(np.linalg.norm(candidate)*np.linalg.norm(target))
    if similarity > max_similarity: 
        max_similarity = similarity 
        max_index = i
print "done with loop in %s seconds" % (time() - start_time)  # 0.466356039047 seconds
print "Most similar vector to target is index %s with %s" % (max_index, max_similarity)  #  index 2399 with 0.772758982696

The following, with the Python loop removed, is about 44x faster, but it isn't the same computation:

print "starting max dot"
start_time = time()
print(np.max(np.dot(vectors, target)))
print "done with max dot in %s seconds" % (time() - start_time)  # 0.0105748176575 seconds

Is there a way to get the speedup of having numpy do the iteration without losing the max-index logic and the division by the product of norms? For optimizing calculations like this, would it make sense to just do them in C?
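
For reference, a minimal sketch of the fully vectorized version being asked about, assuming the vectors and target arrays loaded above:

import numpy as np

# All dot products and all candidate norms in one shot, then argmax.
# Dividing by the target's norm does not change the argmax (it is a constant
# factor), but it is needed to report the actual cosine similarity.
dots = np.dot(vectors, target)                    # shape (35196,)
norms = np.linalg.norm(vectors, axis=1)           # per-candidate norms
similarities = dots / (norms * np.linalg.norm(target))
max_index = np.argmax(similarities)
max_similarity = similarities[max_index]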

Upvotes: 6

Views: 7270

Answers (3)

milan.vancl

Reputation: 31

According to my measurements, cosine_similarity from sklearn is the most optimized implementation.

import time
import numpy
from scipy.spatial.distance import cdist
from sklearn.metrics.pairwise import cosine_similarity

target = numpy.random.rand(100,300)
vectors = numpy.random.rand(10000,300)

start = time.time()
most_similar_sklearn = cosine_similarity(target, vectors)
print("Sklearn cosine_similarity: {} s".format(time.time()-start))
start = time.time()
most_similar_scipy = 1-cdist(target, vectors, 'cosine')
print("Scipy cdist: {} s".format(time.time()-start))
equals = numpy.allclose(most_similar_sklearn, most_similar_scipy)
print("Equal results: {}".format(equals))

Sklearn cosine_similarity: 0.05303549766540527 s
Scipy cdist: 0.44914913177490234 s
Equal results: True

You can get the same results using just numpy with matrix multiplication, since cosine similarity is defined as the dot product divided by the product of the norms. However, it requires some reshaping so that the matmul is feasible:

import time
import numpy
from sklearn.metrics.pairwise import cosine_similarity

target = numpy.random.rand(100,300)
vectors = numpy.random.rand(10000,300)

most_similar_sklearn = cosine_similarity(target, vectors)

start = time.time()

# Reshape so the norms can be combined via matmul into a (100, 10000) matrix of norm products.
t_ext = target.reshape((100, 300, 1))
v_ext = vectors.T.reshape((1, 300, 10000))
t_norm = numpy.linalg.norm(t_ext, axis=1)   # (100, 1) target norms
v_norm = numpy.linalg.norm(v_ext, axis=1)   # (1, 10000) candidate norms
norm = t_norm @ v_norm                      # (100, 10000) products of norms
dat = target @ vectors.T                    # (100, 10000) dot products
most_similar_numpy = dat / norm

print("Numpy matmul: {} s".format(time.time()-start))
equals = numpy.allclose(most_similar_sklearn, most_similar_numpy)
print("Equal results: {}".format(equals))

Numpy matmul: 0.055016279220581055 s
Equal results: True
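
The reshaping above is only needed to line the norms up; a sketch of an equivalent formulation that uses numpy.outer for the norm product instead (same assumed target and vectors shapes as above):

import time
import numpy

target = numpy.random.rand(100, 300)
vectors = numpy.random.rand(10000, 300)

start = time.time()
dat = target @ vectors.T                    # (100, 10000) dot products
# Outer product of the row norms gives the matching (100, 10000) denominator.
norm = numpy.outer(numpy.linalg.norm(target, axis=1),
                   numpy.linalg.norm(vectors, axis=1))
most_similar_numpy = dat / norm
print("Numpy matmul with outer norms: {} s".format(time.time() - start))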

Upvotes: 2

Deepak Saini

Reputation: 2910

You have the right idea about avoiding the loop to get performance. You can use argmin to get the index of the minimum distance.

That said, I would switch the distance calculation to scipy's cdist as well. That way you can compute distances to multiple targets in a single call (see the sketch after the snippet below) and choose from several distance metrics, if need be.

import numpy as np
from scipy.spatial import distance

distances = distance.cdist([target], vectors, "cosine")[0]
min_index = np.argmin(distances)
min_distance = distances[min_index]
max_similarity = 1 - min_distance
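
To illustrate the multiple-targets point, a short sketch with a hypothetical targets array (not from the question) of several query vectors; one cdist call scores all of them against all candidates:

import numpy as np
from scipy.spatial import distance

# Hypothetical data for illustration: 5 query vectors, 30000 candidates, 300 dims.
targets = np.random.rand(5, 300)
vectors = np.random.rand(30000, 300)

# (5, 30000) matrix of cosine distances in a single call.
distances = distance.cdist(targets, vectors, "cosine")

# Per query: index of the most similar candidate and its cosine similarity.
min_indices = np.argmin(distances, axis=1)
max_similarities = 1 - distances[np.arange(len(targets)), min_indices]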

HTH.

Upvotes: 6

pangyuteng

Reputation: 1839

Edit: Hats off to @Deepak. cdist is the fastest if you do need the actual similarity values.

from scipy.spatial import distance

start_time = time()
distances = distance.cdist([target], vectors, "cosine")[0]
min_index = np.argmin(distances)
min_distance = distances[min_index]
print("done with loop in %s seconds" % (time() - start_time))
max_similarity = 1 - min_distance
print("Most similar vector to target is index %s with %s" % (min_index, max_similarity))

done with loop in 0.013602018356323242 seconds

Most similar vector to target is index 11001 with 0.2250217098612361


from time import time
import numpy as np

vectors = np.random.normal(0,100,(35196,300))
target = np.random.normal(0,100,(300))

start_time = time()
myvals = np.dot(vectors, target)
max_index = np.argmax(myvals)
max_similarity = myvals[max_index]
print("done with max dot in %s seconds" % (time() - start_time) )
print("Most similar vector to target is index %s with %s" % (max_index, max_similarity))

done with max dot in 0.009701013565063477 seconds

Most similar vector to target is index 12187 with 645549.917200941
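
Note that the raw dot product picks a different index (12187 vs 11001) because the candidate vectors are not unit length. If the candidates are normalized once up front, the fast dot-product argmax does agree with the cosine result; a sketch under that assumption, reusing the vectors, target, and imports from the snippet above:

# Normalize the candidates once; a dot product against the target is then
# proportional to cosine similarity, so the argmax is the same.
unit_vectors = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)

start_time = time()
myvals = np.dot(unit_vectors, target)
max_index = np.argmax(myvals)
max_similarity = myvals[max_index] / np.linalg.norm(target)
print("done with normalized dot in %s seconds" % (time() - start_time))
print("Most similar vector to target is index %s with %s" % (max_index, max_similarity))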

max_similarity = -1.0  # cosine similarity is bounded below by -1
start_time = time()
for i, candidate in enumerate(vectors):
    similarity = np.dot(candidate, target)/(np.linalg.norm(candidate)*np.linalg.norm(target))
    if similarity > max_similarity: 
        max_similarity = similarity 
        max_index = i
print("done with loop in %s seconds" % (time() - start_time))
print("Most similar vector to target is index %s with %s" % (max_index, max_similarity))

done with loop in 0.49567198753356934 seconds

Most similar vector to target is index 11001 with 0.2250217098612361

def my_func(candidate,target):
    return np.dot(candidate, target)/(np.linalg.norm(candidate)*np.linalg.norm(target))
start_time = time()
out = np.apply_along_axis(my_func, 1, vectors,target)
print("done with loop in %s seconds" % (time() - start_time))
max_index = np.argmax(out)
print("Most similar vector to target is index %s with %s" % (max_index, out[max_index]))

done with loop in 0.7495708465576172 seconds

Most similar vector to target is index 11001 with 0.2250217098612361

start_time = time()
vnorm = np.linalg.norm(vectors, axis=1)
tnorm = np.linalg.norm(target)
out = np.matmul(vectors, target) / (vnorm * tnorm)
print("done with loop in %s seconds" % (time() - start_time))
max_index = np.argmax(out)
print("Most similar vector to target is index %s with %s" % (max_index, out[max_index]))

done with loop in 0.04306602478027344 seconds

Most similar vector to target is index 11001 with 0.2250217098612361

Upvotes: 4
