Reputation: 21662
I need to rewrite this code using numexpr; it computes the squared Euclidean distances between every row of a matrix data [rows x cols] and a vector vec [1 x cols]:
d = ((data - vec) ** 2).sum(axis=1)
How can this be done? Also, is there perhaps an even faster method? The complication is that I use HDF5, so the data matrix is read from it in batches; one of my attempts below fails with the error "objects are not aligned".
# naive NumPy solution: can it be parallelized?
def test_bruteforce_knn():
    h5f = tables.open_file(fileName)
    t0 = time.time()
    d = np.empty((rows * batches,))
    for i in range(batches):
        d[i*rows:(i+1)*rows] = ((h5f.root.carray[i*rows:(i+1)*rows] - vec) ** 2).sum(axis=1)
    print(time.time() - t0)
    ndx = d.argsort()
    print(ndx[:k])
    h5f.close()
# using some tricks (doesn't work: "objects are not aligned" error)
def test_bruteforce_knn():
    h5f = tables.open_file(fileName)
    t0 = time.time()
    d = np.empty((rows * batches,))
    for i in range(batches):
        d[i*rows:(i+1)*rows] = (np.einsum('ij,ij->i', h5f.root.carray[i*rows:(i+1)*rows],
                                          h5f.root.carray[i*rows:(i+1)*rows])
                                + np.dot(vec, vec)
                                - 2 * np.dot(h5f.root.carray[i*rows:(i+1)*rows], vec))
    print(time.time() - t0)
    ndx = d.argsort()
    print(ndx[:k])
    h5f.close()
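As a side note, my best guess at a minimal reproduction of that error (assuming vec really has shape (1, cols) as described above): flattening vec seems to make the dot products line up.
import numpy as np

chunk = np.random.randn(5, 40)  # stand-in for one batch of h5f.root.carray
vec = np.random.randn(1, 40)    # [1 x cols]

# np.dot(chunk, vec) tries (5, 40) x (1, 40): the inner dimensions do not
# match, which older NumPy reports as "objects are not aligned";
# np.dot(vec, vec) fails the same way. Flattening vec to 1-D fixes both:
v = vec.ravel()
d = np.einsum('ij,ij->i', chunk, chunk) + np.dot(v, v) - 2 * np.dot(chunk, v)
print(d.shape)  # (5,)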
Using numexpr: it seems numexpr doesn't understand h5f.root.carray[i*rows:(i+1)*rows]; must it be reassigned to a plain variable first?
import numexpr as ne

def test_bruteforce_knn():
    h5f = tables.open_file(fileName)
    t0 = time.time()
    d = np.empty((rows * batches,))
    for i in range(batches):
        d[i*rows:(i+1)*rows] = ne.evaluate("sum((h5f.root.carray[i*rows:(i+1)*rows] - vec) ** 2, axis=1)")
    print(time.time() - t0)
    ndx = d.argsort()
    print(ndx[:k])
    h5f.close()
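Presumably the slice has to be bound to a plain local name first; a sketch of that variant (untested; as far as I know, ne.evaluate() resolves bare variable names from the calling frame and cannot parse attribute access or slicing inside the expression string):
import numexpr as ne

def test_bruteforce_knn():
    h5f = tables.open_file(fileName)
    t0 = time.time()
    d = np.empty((rows * batches,))
    for i in range(batches):
        # read the HDF5 slice into a plain local array so numexpr can see it
        chunk = h5f.root.carray[i*rows:(i+1)*rows]
        d[i*rows:(i+1)*rows] = ne.evaluate("sum((chunk - vec) ** 2, axis=1)")
    print(time.time() - t0)
    ndx = d.argsort()
    print(ndx[:k])
    h5f.close()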
Upvotes: 1
Views: 843
Reputation: 363787
There's a potentially fast way (for very large arrays) using just NumPy, which is used in scikit-learn:
def squared_row_norms(X):
    # From http://stackoverflow.com/q/19094441/166749
    return np.einsum('ij,ij->i', X, X)

def squared_euclidean_distances(data, vec):
    data2 = squared_row_norms(data)
    vec2 = squared_row_norms(vec)
    d = np.dot(data, vec.T).ravel()
    d *= -2
    d += data2
    d += vec2
    return d
This is based on the identity (x - y)² = x² + y² - 2xy, which holds for vectors too, with the squared norm in place of the square and the dot product in place of the product.
Test:
>>> data = np.random.randn(10, 40)
>>> vec = np.random.randn(1, 40)
>>> ((data - vec) ** 2).sum(axis=1)
array([ 96.75712686, 69.45894306, 100.71998244, 80.97797154,
84.8832107 , 82.28910021, 67.48309433, 81.94813371,
64.68162331, 77.43265692])
>>> squared_euclidean_distances(data, vec)
array([ 96.75712686, 69.45894306, 100.71998244, 80.97797154,
84.8832107 , 82.28910021, 67.48309433, 81.94813371,
64.68162331, 77.43265692])
>>> from sklearn.metrics.pairwise import euclidean_distances
>>> euclidean_distances(data, vec, squared=True).ravel()
array([ 96.75712686, 69.45894306, 100.71998244, 80.97797154,
84.8832107 , 82.28910021, 67.48309433, 81.94813371,
64.68162331, 77.43265692])
Profile:
>>> data = np.random.randn(1000, 40)
>>> vec = np.random.randn(1, 40)
>>> %timeit ((data - vec)**2).sum(axis=1)
10000 loops, best of 3: 114 us per loop
>>> %timeit squared_euclidean_distances(data, vec)
10000 loops, best of 3: 52.5 us per loop
Using numexpr is also possible, but it doesn't seem to give any speedup for 1000 points (and at 10000, it isn't much better):
>>> %timeit ne.evaluate("sum((data - vec) ** 2, axis=1)")
10000 loops, best of 3: 142 us per loop
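For the HDF5 case in the question, a sketch of how this might plug into the batched loop (hypothetical adaptation, reusing fileName, rows, batches, vec and k as defined there):
import tables
import numpy as np

def test_bruteforce_knn():
    h5f = tables.open_file(fileName)
    d = np.empty((rows * batches,))
    for i in range(batches):
        # read one batch into memory, then apply the dot-product trick above
        chunk = h5f.root.carray[i*rows:(i+1)*rows]
        d[i*rows:(i+1)*rows] = squared_euclidean_distances(chunk, vec)
    # squared distances give the same ordering as true distances,
    # so there's no need to take a square root before sorting
    ndx = d.argsort()
    print(ndx[:k])
    h5f.close()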
Upvotes: 4