user1380653

Reputation: 63

Multithreaded calls to the objective function of scipy.optimize.leastsq

I'm using scipy.optimize.leastsq in conjunction with a simulator. leastsq calls a user-defined objective function and passes an input vector to it. In turn, the objective function returns an error vector. leastsq optimizes the input vector in such a way that the sum of the squares of the error vector is minimized.

In my case the objective function will run a whole simulation each time it is called. The employed simulator is single-threaded and needs several minutes for each run. I'd therefore like to run multiple instances of the simulator at once. However, calls to the objective function are performed serially.

How can I get leastsq to perform multiple calls to the objective function at once?
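For concreteness, my current serial setup looks roughly like this (run_simulation, measured, and x0 stand in for my simulator, the measured data, and the initial guess):

import scipy.optimize

def objective(params):
    # each call runs one full, several-minute simulation
    predicted = run_simulation(params)
    return predicted - measured   # error vector; leastsq minimizes its sum of squares

best_params, ier = scipy.optimize.leastsq(objective, x0)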

Upvotes: 5

Views: 10063

Answers (5)

Oren

Reputation: 5309

Have you tried scipy.optimize.least_squares? It is a much better option, and when I use it to optimize a function it uses all the available threads, so it does exactly what you asked for.

Upvotes: -2

xioxox

Reputation: 2670

There's a good opportunity to speed up leastsq by supplying your own function to calculate the derivatives (the Dfun parameter), provided you have several parameters. If no such function is supplied, leastsq iterates over each parameter to estimate the derivatives on every call, which is time consuming. This appears to take the majority of the time in the fitting.

You can supply your own Dfun function that calculates the derivatives for each parameter using a multiprocessing.Pool to do the work. These derivatives can be computed independently, so they parallelise trivially.

Here is a rough example showing how to do this:

import numpy as np
import multiprocessing
import scipy.optimize

def calcmod(params):
    """Return the model (func stands in for your simulator)."""
    return func(params)

def delta(params):
    """Difference between model and data (y is the measured data)."""
    return calcmod(params) - y

pool = multiprocessing.Pool(4)

def Dfun(params):
    """Calculate forward-difference derivatives for each parameter using the pool."""
    zeropred = calcmod(params)

    # build one perturbed parameter vector per parameter
    step = 1e-4
    derivparams = []
    for i in range(len(params)):
        copy = np.array(params)
        copy[i] += step
        derivparams.append(copy)

    # evaluate the model at all perturbed points in parallel
    results = pool.map(calcmod, derivparams)
    derivs = [(r - zeropred) / step for r in results]
    return derivs

retn = scipy.optimize.leastsq(delta, inputparams, gtol=0.01,
                              Dfun=Dfun, col_deriv=1)
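Note that the pool is created at module level before leastsq is called. On platforms where multiprocessing spawns rather than forks worker processes (e.g. Windows), the pool creation and the leastsq call should sit under an if __name__ == "__main__" guard so the workers can re-import the module cleanly.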

Upvotes: 10

pv.

Reputation: 35125

The algorithm used by leastsq, Levenberg-Marquardt, needs to know the value of the objective function at the current point before determining the next point. In short, there is no straightforward way to parallelize such a serial algorithm.

You can, however, parallelize your objective function in some cases. This can be done if it is of the form:

def objective_f(params):
    r = np.zeros([200], float)
    for j in range(200):
        r[j] = run_simulation(j, params)
    return r

def run_simulation(j, params):
    r1 = ...  # compute the j-th entry of the result
    return r1

Here, you can clearly parallelize the loop over j, for instance using the multiprocessing module. Something like this (untested):

def objective_f(params):
    r = np.zeros([200], float)
    def parameters():
        for j in range(200):
            yield j, params
    pool = multiprocessing.Pool()
    # starmap unpacks each (j, params) tuple into run_simulation's two arguments
    r[:] = pool.starmap(run_simulation, parameters())
    return r

Another opportunity for parallelization occurs if you have to fit multiple data sets --- this is an (embarrassingly) parallel problem, and the different data sets can be fitted in parallel, as in the sketch below.
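A minimal sketch of that multi-dataset case (model, datasets, and x0 are hypothetical placeholders for your simulator, your data, and the initial guess):

import multiprocessing
import scipy.optimize

def fit_one(dataset):
    """Run one full (serial) leastsq fit for a single data set."""
    x, y = dataset
    def residuals(params):
        return model(params, x) - y   # model is a placeholder for your simulator
    best, ier = scipy.optimize.leastsq(residuals, x0)
    return best

if __name__ == "__main__":
    pool = multiprocessing.Pool()
    # one independent fit per data set, run in parallel worker processes
    fitted = pool.map(fit_one, datasets)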

If this does not help, you can look into the discussion of parallelizing the LM algorithm in the literature, for instance http://dl.acm.org/citation.cfm?id=1542338 . The main optimization suggested in this paper seems to be parallelization of the numerical computation of the Jacobian; you can do this by supplying your own parallelized Jacobian function to leastsq. The paper's remaining suggestion, speculatively parallelizing the Levenberg-Marquardt search steps, is more difficult to implement and requires changes to the LM algorithm.

I'm not aware of Python (or other language) libraries implementing optimization algorithms targeted for parallel computation, although there may be some. If you manage to implement/find one of them, please advertise this on the Scipy users mailing list --- there is certainly interest in one of these!

Upvotes: 8

ktdrv

Reputation: 3673

NumPy/SciPy functions are usually optimized for multithreading. Did you look at your CPU utilization to confirm that only one core is being used while the simulation is being run? Otherwise you have nothing to gain from running multiple instances.

If it is, in fact, single-threaded, then your best option is to employ the multiprocessing module. It runs several instances of the Python interpreter, so you can make several simultaneous calls to SciPy.

Upvotes: -2

user574435

Reputation: 143

Does this help? http://docs.python.org/library/multiprocessing.html

I've always found Pool to be the simplest way to do multiprocessing in Python, as in the sketch below.
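A minimal illustration (simulate and param_sets are placeholders for your simulator and its inputs):

import multiprocessing

def simulate(params):
    """Placeholder for one single-threaded simulator run."""
    ...

if __name__ == "__main__":
    with multiprocessing.Pool(processes=4) as pool:
        # run up to four simulator instances at once
        results = pool.map(simulate, param_sets)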

Upvotes: -2
