makansij

Reputation: 9865

How to use `scipy.optimize.leastsq` to optimize in the joint least squares direction?

I want to be able to move along a gradient in the joint least squares direction.

I thought I could do this using scipy.optimize.leastsq (http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.leastsq.html). (Perhaps I'm wrong, maybe there's an easier way to do this?).

I'm having difficulty understanding what to use, and how to move in the joint least squares direction, while still increasing the parameters.

What I need to do is input something like this:

[1,0]

And have it move along the least squares direction, meaning it increases either or both of the values 1 and 0, but does so such that the sum of the squared values stays as small as possible.

This would mean [1,0] would increase to [1, <something barely greater than 0>], and eventually would reach [1,1], at which point both values would increase at the same rate.
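For example, a quick numeric check of that first step (a minimal sketch, assuming an illustrative step size of 0.01) looks like this:

# Minimal check of the first step from [1, 0]: increasing the smaller
# component keeps the sum of squares lower than increasing the larger one.
import numpy as np

x = np.array([1.0, 0.0])
step = 0.01  # hypothetical step size, just for illustration

print(np.sum((x + [step, 0.0])**2))  # ~1.0201 -- increase the 1
print(np.sum((x + [0.0, step])**2))  # ~1.0001 -- increase the 0 (smaller)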

How would I program this? It seems to me like scipy.optimize.leastsq would be of use here, but I cannot figure out how to use it.

Thank you.

Upvotes: 0

Views: 308

Answers (1)

Ramon Crehuet

Reputation: 3997

I don't think you need scipy.optimize.leastsq because your problem can be solved analytically. At any point, the gradient of the function np.sum(x**2), where x is an array, is 2*x. So, if you want the smallest increase in the sum of squares, you have to increase the component of x with the smallest gradient, which you can find with np.argmin. Here is a simple solution:

import numpy as np

def g(x):
    # gradient of np.sum(x**2)
    return 2 * x

x = np.array([1., 0.])
for _ in range(200):
    eps = np.zeros_like(x)
    index = np.argmin(g(x))  # component whose increase grows the sum of squares the least
    eps[index] = 0.01  # or whatever step size you prefer
    x += eps
    print(x)

When multiple components have the same value, np.argmin returns the first occurrence, so you will encounter some oscillation, which you can minimize by reducing eps.
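If that oscillation matters, one possible workaround (a sketch, not part of the original answer) is to split the step across all components that are tied for the smallest gradient:

import numpy as np

def g(x):
    # gradient of np.sum(x**2)
    return 2 * x

x = np.array([1.0, 0.0])
step = 0.01
for _ in range(200):
    grad = g(x)
    # components tied (within floating-point tolerance) for the smallest gradient
    mask = np.isclose(grad, grad.min())
    eps = np.zeros_like(x)
    eps[mask] = step / mask.sum()  # split the step among the tied components
    x += eps
print(x)

Once x reaches [1, 1], both components grow at the same rate instead of taking turns, which matches the behaviour described in the question.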

Upvotes: 1
