Reputation: 2332
Given a 2D point p, I'm trying to calculate the smallest distance between that point and a functional curve, i.e., find the point on the curve which gives me the smallest distance to p, and then calculate that distance. The example function that I'm using is
f(x) = 2*sin(x)
My distance function for the distance between some point p and a provided function is
import numpy as np

def dist(p, x, func):
    # Squared Euclidean distance between p and the curve point (x, func(x))
    x = np.append(x, func(x))
    return sum((i - j)**2 for i, j in zip(x, p))
It takes as input the point p, a position x on the function, and the function handle func. Note this is a squared Euclidean distance (minimizing the squared Euclidean distance yields the same minimizer as minimizing the Euclidean distance).
The crucial part of this is that I want to be able to provide bounds for my function, so really I'm finding the closest distance to a function segment. For this example, my bounds are
bounds = [0, 2*np.pi]
I'm using the scipy.optimize.minimize function to minimize my distance function, subject to those bounds; a minimal sketch of the call is shown below, and a result of this process is shown in the contour plot that follows.
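As a concrete sketch, assuming the dist function above; the query point p and the initial guess x0 here are just example values, not from my actual setup:

import numpy as np
from scipy.optimize import minimize

f = lambda x: 2 * np.sin(x)
p = np.array([1.0, 3.0])    # example query point
x0 = np.pi                  # example initial guess

# minimize expects one (min, max) pair per variable
res = minimize(lambda x: dist(p, x, f), x0=x0, bounds=[(0, 2*np.pi)])
print(res.x, res.fun)       # minimizer and squared distance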
This is a contour plot showing the distance from the sin function. Notice how there appears to be a discontinuity in the contours. For convenience, I've plotted a few points around that discontinuity and the "closest" points on the curve that they map to.
What's actually happening here is that the scipy function is finding a local minimum (given some initial guess), but not a global one, and that is causing the discontinuity. I know that reliably finding the global minimum of an arbitrary function is impossible in general, but I'm looking for a more robust way to find the global minimum here.
What are possible methods for finding the global minimum in this setting? Any suggestions about the best way to go about this, or pointers to useful functions that may tackle this problem, would be great!
Upvotes: 3
Views: 4401
Reputation: 5521
As suggested in a comment, you could try a global optimization algorithm such as scipy.optimize.differential_evolution. However, in this case, where you have a well-defined and analytically tractable objective function, you can employ a semi-analytical approach, taking advantage of the first-order necessary conditions for a minimum.
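For reference, a minimal sketch of the differential_evolution alternative might look like the following, reusing the distance function d defined below (the query point is a made-up example):

from scipy.optimize import differential_evolution

p = np.array([1.0, 3.0])   # example query point
# differential_evolution passes a length-1 array, hence x[0]
res = differential_evolution(lambda x: d(x[0], p), bounds=[(0, 2*np.pi)])
print(res.x, res.fun)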
In the following, the first function is the distance metric and the second function is its derivative w.r.t. x, which should be zero if a minimum occurs at some 0 < x < 2*np.pi.
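For completeness, the derivative follows directly from the chain rule, together with the identity 2*sin(x)*cos(x) = sin(2*x):

d(x, p)  = (p[0] - x)**2 + (p[1] - 2*sin(x))**2
d'(x, p) = -2*(p[0] - x) - 4*cos(x)*(p[1] - 2*sin(x))
         = -2*p[0] + 2*x - 4*p[1]*cos(x) + 4*sin(2*x)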
import numpy as np

def d(x, p):
    # Squared distance between p and the curve point (x, 2*sin(x))
    return np.sum((p - np.array([x, 2*np.sin(x)]))**2)

def diff_d(x, p):
    # Derivative of d w.r.t. x; zero at interior stationary points
    return -2*p[0] + 2*x - 4*p[1]*np.cos(x) + 4*np.sin(2*x)
Now, given a point p, the only potential minimizers of d(x, p) are the roots of diff_d(x, p) (if any), as well as the boundary points x = 0 and x = 2*np.pi. It turns out that diff_d may have more than one root. Since the derivative is a continuous function, the pychebfun library offers a very efficient method for finding all of its roots, avoiding cumbersome approaches based on the scipy root-finding algorithms.
The following function provides the minimum of d(x, p) for a given point p:
import pychebfun

def min_dist(p):
    # Chebyshev approximation of the derivative over the bounded domain
    f_cheb = pychebfun.Chebfun.from_function(lambda x: diff_d(x, p), domain=(0, 2*np.pi))
    # Candidates: interior stationary points plus the two boundary points
    potential_minimizers = np.r_[0, f_cheb.roots(), 2*np.pi]
    # Evaluate the distance at every candidate and keep the smallest
    return np.min([d(x, p) for x in potential_minimizers])
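As a quick usage example (the query point is made up; note the returned value is the squared distance, so take np.sqrt for the actual distance):

p = np.array([1.0, 3.0])
print(np.sqrt(min_dist(p)))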
Here is the result:
Upvotes: 7