Reputation: 1233
I have a 2-dimensional array:
MyArray = array([[6588252.24, 1933573.3, 212.79, 0, 0],
                 [6588253.79, 1933602.89, 212.66, 0, 0],
                 ...])
The first two elements of each row are the X and Y coordinates of the point.
For every element in the array, I would like the quickest way to find its single nearest neighbor within a radius of X units, in 2D space. Let's say for this example X = 6.
I have solved the problem by comparing every element to every other element, but this takes around 15 minutes when the list is 22k points long. We hope to eventually run this on lists of about 30 million points.
I have read about K-d trees and understand the basic concept, but have had trouble understanding how to script them.
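For reference, the brute-force comparison I describe above is roughly along these lines (a minimal sketch, not my exact code):

import numpy as np

def brute_force_nearest(points, radius=6.0):
    # For every point, scan every other point and keep the closest one within the radius.
    n = len(points)
    nearest = np.full(n, -1)          # -1 means no neighbour found within the radius
    for i in range(n):
        best = radius
        for j in range(n):
            if i == j:
                continue
            d = np.hypot(points[i, 0] - points[j, 0], points[i, 1] - points[j, 1])
            if d < best:
                best = d
                nearest[i] = j
    return nearest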
Upvotes: 33
Views: 38173
Reputation: 1904
Use sklearn.neighbors
import numpy as np
from sklearn.neighbors import NearestNeighbors

# example dataset: 10 points on the unit circle
coords_vect = np.vstack([np.sin(range(10)), np.cos(range(10))]).T

# fit a 3-nearest-neighbour model and query it with the same points
knn = NearestNeighbors(n_neighbors=3)
knn.fit(coords_vect)
distance_mat, neighbours_mat = knn.kneighbors(coords_vect)
The code above finds the nearest neighbors in a simple example dataset of 10 points located on the unit circle. The following explains the results for this dataset.
Results explained:
neighbours_mat = array([[0, 6, 7],
[1, 7, 8],
[2, 8, 9],
[3, 9, 4],
[4, 3, 5],
[5, 6, 4],
[6, 0, 5],
[7, 1, 0],
[8, 2, 1],
[9, 3, 2]], dtype=int64)
The values of the result matrix neighbours_mat are indices of the elements (rows) in the input vector coords_vect. In our example, reading the first row of neighbours_mat, the point at index 0 of coords_vect is closest to itself (index 0), then to the point at index 6, and then to the point at index 7; this can be verified with the plot of coords_vect below. The second row of neighbours_mat indicates that the point at index 1 is closest to itself, then to the point at index 7, then to the point at index 8, and so on.
Notes: the first column in neighbours_mat is the node we measure distances from, the second column is its nearest neighbor, and the third column is the second-nearest neighbor. You can get more neighbours by increasing n_neighbors in the NearestNeighbors(n_neighbors=3) initialization. distance_mat holds the distances of each node from its neighbors; notice that every node has distance 0 to itself, therefore the first column is always zeros:
distance_mat = array([[0. , 0.28224002, 0.70156646],
[0. , 0.28224002, 0.70156646],
[0. , 0.28224002, 0.70156646],
[0. , 0.28224002, 0.95885108],
[0. , 0.95885108, 0.95885108],
[0. , 0.95885108, 0.95885108],
[0. , 0.28224002, 0.95885108],
[0. , 0.28224002, 0.70156646],
[0. , 0.28224002, 0.70156646],
[0. , 0.28224002, 0.70156646]])
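Since column 0 is always the point itself, the actual nearest neighbour of each point sits in column 1. A small sketch using the variables above (the 6-unit radius from the question is applied here as a plain mask; it is not a parameter of kneighbors):

nearest_idx = neighbours_mat[:, 1]    # index of each point's nearest neighbour
nearest_dist = distance_mat[:, 1]     # distance to that neighbour

within_radius = nearest_dist <= 6     # keep only neighbours within X = 6 units
print(nearest_idx[within_radius], nearest_dist[within_radius])

NearestNeighbors also has a radius_neighbors method that returns all neighbours within a given radius directly.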
Plotting the points:
import matplotlib.pyplot as plt

x_vect, y_vect = np.sin(range(10)), np.cos(range(10))
plt.figure()
plt.scatter(x_vect, y_vect)
for x, y, label in zip(x_vect, y_vect, range(len(x_vect))):
    plt.text(x, y, str(label))
plt.show()
[Plot: the coordinates of the input vector coords_vect, with each point labeled by its index]
Upvotes: 2
Reputation: 1233
Thanks to John Vinyard for suggesting scipy. After some good research and testing, here is the solution to this question:
Prerequisites: install NumPy and SciPy.
Import the SciPy and NumPy modules.
Make a copy of the 5-column array that includes just the X and Y values.
Create an instance of a cKDTree as such:
YourTreeName = scipy.spatial.cKDTree(YourArray, leafsize=100)
#Play with the leafsize to get the fastest result for your dataset
Query the cKDTree for the nearest neighbor within 6 units as such:
for item in YourArray:
    TheResult = YourTreeName.query(item, k=1, distance_upper_bound=6)
For each item in YourArray, TheResult will be a tuple of the distance between the two points and the index of the nearest point in YourArray.
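Putting those steps together, here is a hedged end-to-end sketch (variable names are illustrative, and MyArray is the N x 5 array from the question). Note that when a queried point is itself in the tree, k=1 returns that same point at distance 0, so this sketch uses k=2 and takes the second column as the real nearest neighbour:

import numpy as np
from scipy import spatial

xy = MyArray[:, :2]                       # keep only the X and Y columns

tree = spatial.cKDTree(xy, leafsize=100)

# Query all points at once; k=2 because each point's closest match is itself
distances, indices = tree.query(xy, k=2, distance_upper_bound=6)

nearest_dist = distances[:, 1]            # distance to the true nearest neighbour (inf if none within 6 units)
nearest_idx = indices[:, 1]               # its row index in MyArray (equals len(xy) if none within 6 units)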
Upvotes: 39