jpmuc

Reputation: 1154

Formulate residual for Levenberg-Marquardt

I want to minimize a cost function of the form

f(qv, t) = qv^T (A + B) qv + tv^T C tv + delta tv + epsilon (Q^T W) tv + lam * (1 - qv^T qv)^2

with the Levenberg-Marquardt method via the scipy.optimize.least_squares function. But I do not see how to formulate it in terms of residuals so that I can use that method. As it is, I get the error message "Method 'lm' doesn't work when the number of residuals is less than the number of variables."

My cost function is defined as follows:

def canonical_cost(qv, t, A, B, C, delta, epsilon, lam):
    assert(type(qv) is np.ndarray and len(qv) == 4)
    # assert(type(t) is np.ndarray and len(t) == 3)

    q = Quaternion(*qv)
    # Column vectors: qv is 4x1, tv is 4x1 (translation with a 0 prepended)
    qv, tv = qv.reshape(-1, 1), np.vstack(([0], t.reshape(-1, 1)))

    f1 = qv.T @ (A + B) @ qv
    f2 = tv.T @ C @ tv + delta @ tv + epsilon @ (q.Q.T @ q.W) @ tv
    qnorm = (1 - qv.T @ qv)**2  # penalty keeping q close to a unit quaternion
    return np.squeeze(f1 + f2 + lam * qnorm)

And I try to optimize with,

def cost(x):
    qv, t = x[:4], x[4:]
    return canonical_cost(qv, t, A, B, C, delta, epsilon, lam)

result = opt.least_squares(cost, initial_conditions, method='lm',
                           **kwargs)

Thank you

Upvotes: 0

Views: 3182

Answers (1)

user3126996

Reputation: 19

As I understand it, the LM algorithm minimizes the sum of squares of a residual vector. The function you pass to least_squares must therefore return a vector whose elements are the individual residuals, not the already-summed scalar cost. The requirement that this residual vector be at least as long as the number of variables also makes sense: it means the number of unknowns cannot exceed the number of equations, otherwise the least-squares problem is underdetermined.
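A minimal sketch of the point above, using a toy positive-definite quadratic as a stand-in for canonical_cost (the matrix A_toy and the 7 unknowns mirroring the 4 quaternion plus 3 translation parameters are assumptions for illustration, not the asker's actual problem). Passing a scalar cost to least_squares with method='lm' reproduces the error, while scipy.optimize.minimize accepts a scalar cost directly:

```python
import numpy as np
from scipy.optimize import least_squares, minimize

# Toy stand-in for canonical_cost: a positive-definite quadratic
# in 7 unknowns. A_toy is invented purely for this demonstration.
A_toy = np.diag(np.arange(1.0, 8.0))

def scalar_cost(x):
    # Returns a single number, not a vector of residuals.
    return x @ A_toy @ x

x0 = np.ones(7)

# method='lm' rejects a scalar cost: 1 residual < 7 variables.
try:
    least_squares(scalar_cost, x0, method='lm')
except ValueError as err:
    print(err)

# A scalar cost is exactly what scipy.optimize.minimize expects.
res = minimize(scalar_cost, x0)
print(res.fun)  # close to 0, attained near x = 0
```

If the cost really does decompose into nonnegative terms, another option is to return the square roots of those terms as a residual vector; but with only a handful of terms and 7 unknowns, method='lm' would still reject it, whereas method='trf' (the default) has no such restriction.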

Upvotes: 0
