abbasly

Reputation: 31

Deriving an adjacency matrix wrt distance

Suppose I have n points and a square numpy matrix whose (i, j)'th entry holds the distance between point i and point j. How can I derive an adjacency matrix such that there is an "edge" between points a and b if, for every other point c, max(dist(c, a), dist(c, b)) > dist(a, b)? In other words, there is no other point c that is closer to both a and b than a and b are to each other. I could write this in numpy with some for loops, but I wonder if there is an easier/faster way to do it.
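For reference, the loop-based version described above can be sketched as follows (function name and test points are my own, chosen for illustration):

```python
import numpy as np

def rng_edges_loops(D):
    """Brute-force baseline. D is an (n, n) symmetric distance matrix.

    There is an edge (a, b) iff every other point c satisfies
    max(D[c, a], D[c, b]) > D[a, b].
    """
    n = D.shape[0]
    adj = np.zeros((n, n), dtype=bool)
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            # Check every potential "witness" point c other than a and b.
            if all(max(D[c, a], D[c, b]) > D[a, b]
                   for c in range(n) if c != a and c != b):
                adj[a, b] = True
    return adj
```

For three collinear points at positions 0, 1, 2 this produces edges (0, 1) and (1, 2) but not (0, 2), since point 1 is closer to both endpoints than they are to each other.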

Upvotes: 1

Views: 282

Answers (1)

Frank Yellin

Reputation: 11297

This is really hard to do without a concrete example, so I may be getting this slightly wrong.

So you have an nxn matrix (presumably symmetric with a diagonal of 0) representing the distances. Let's call this matrix A.

Then A[:,None,:] is an nx1xn array; broadcast to nxnxn, its [i, j, k] entry is the distance from the i'th point to the k'th point. Likewise A[None,:,:], broadcast to nxnxn, has [i, j, k] entry equal to the distance from the j'th point to the k'th point.

So B = np.maximum(A[:,None,:], A[None,:,:]) is an array such that B[i, j, k] is the larger of the distance from i to k and the distance from j to k. Take the minimum of B along the last axis and you've got the value for the best possible witness k. One caveat: you must first mask out k == i and k == j (for example by setting those entries to np.inf), because B[i, j, i] equals the distance from i to j itself and would trivially defeat the strict inequality. Edges are then the pairs for which np.min(B, axis=2) > A.

Again, try this out on your own computer. I may have slight details wrong.
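Putting the steps above together (the function name is mine; the masking with np.inf is one way to exclude k == i and k == j):

```python
import numpy as np

def rng_edges_vectorized(D):
    """Vectorized version. D is an (n, n) symmetric distance matrix."""
    n = D.shape[0]
    # B[i, j, k] = max(dist(i, k), dist(j, k))
    B = np.maximum(D[:, None, :], D[None, :, :])
    # Mask out k == i and k == j: a point cannot witness against its own edge.
    idx = np.arange(n)
    B[idx, :, idx] = np.inf
    B[:, idx, idx] = np.inf
    # Edge iff every remaining witness k has max(...) strictly greater.
    adj = np.min(B, axis=2) > D
    np.fill_diagonal(adj, False)
    return adj
```

A quick sanity check against the brute-force loop version on random points should show the two agree.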

Upvotes: 1
