Marcus R Marincek

Reputation: 11

Python and Scipy Optimization implementation

I wrote some code and I need help implementing an optimization method, maybe with scipy. As you can see below, I have a variable "pD" that I need to vary until "abs(mFmin[i][j] - mReg[i][j])" drops below 1 (the exit condition of the while loop). mFmin, mReg and all the other calculations inside this while loop depend on "pD".

I wrote a crude piece of code, just for testing, that varies "pD" while comparing mFmin and mReg, but it's too slow, and whether or not I raise the error tolerance, that little code performs poorly.

I'm looking for an optimization routine in the scipy library, but I can't see how to fit one into my code. I think it's simple to solve, but I have nobody to exchange ideas with...

Note: pD is a matrix

Below, I attached the main part of the code:

for i in range(0,int(x)):
    for j in range(0,n):
        curso[i] = i*passo
        curso[0] = 0
        pD[i][j] = pZref
        mFmin[i][j] = 0
        mReg[i][j] = gref

# my doubt starts here
        while abs(mFmin[i][j] - mReg[i][j]) > 1:
            if mFmin[i][j] < mReg[i][j]:
                pD[i][j] = pD[i][j] + 0.0001
            else:
                pD[i][j] = pD[i][j] - 0.0001

            pZaux[i][j] = pE_*sqrt((pow(pZref/pE_,2)-pow(pA/pE_,2))*pow(mFmin[i][j]/gref,2)+pow(pA/pE_,2))

            vD[i][j] = pE_*vE_/pD[i][j]
            if pD[i][j]/pE_ > RPcr:
                psiR[i][j] = sqrt(pow(pD[i][j]/pE_,2/kappa)-pow(pD[i][j]/pE_,(kappa+1)/kappa))
            else:
                psiR[i][j] = psicrR
            if pZaux[i][j]/pD[i][j] > RPcr:
                psiF[i][j] = sqrt((2*kappa/(kappa-1))*(pow(pZaux[i][j]/pD[i][j],2/kappa)-pow(pZaux[i][j]/pD[i][j],(kappa+1)/kappa)))
            else:
                psiF[i][j] = psicrF

            mFmin[i][j] = 3600*psiF[i][j]*kmin*(fmin[j]/1000000)*sqrt(pD[i][j]*100000/vD[i][j])
            mReg[i][j] = 3600*psiR[i][j]*alpha*(fV[i][j]/1000000)*sqrt(2*kappa/(kappa-1)*(pE_*100000/vE_))

Thanks for reading!

MRM

Upvotes: 0

Views: 384

Answers (3)

Ramon Crehuet

Reputation: 4017

I'd recommend importing numpy with:

import numpy as np

because, from your error, it seems that your sqrt comes from the math module and not from numpy. This could be the cause of the error. See TypeError: only length-1 arrays can be converted to Python scalars while trying to exponentially fit data
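
As a quick illustration of the difference (the array here is just a placeholder):

import math
import numpy as np

a = np.array([1.0, 4.0, 9.0])

# math.sqrt only accepts a single scalar, so passing an array raises
# "TypeError: only length-1 arrays can be converted to Python scalars"
# math.sqrt(a)

# np.sqrt works element-wise on the whole array
print(np.sqrt(a))  # [1. 2. 3.]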

Some more recommendations:

  • There is no need to specify 0 in slice ranges: psiR[0:,0:]==psiR[:,:]==psiR
  • create x0 with x0 = np.ones(n) * pZref if pZref is a scalar (see the sketch after this list).
  • You should read some numpy manual to get familiar with these constructions.
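
For instance, a small sketch of those constructions (n and pZref are placeholder values here):

import numpy as np

n = 5
pZref = 2.5                      # placeholder scalar

x0 = np.ones(n) * pZref          # array of n copies of pZref

psiR = np.zeros((4, n))
# the 0 in a slice is redundant: these all refer to the same elements
print(np.array_equal(psiR[0:, 0:], psiR[:, :]), np.array_equal(psiR[:, :], psiR))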

Upvotes: 0

Ramon Crehuet

Reputation: 4017

Andrey Sobolev suggests using numpy because, if pD were a numpy array, you would access its elements as pD[i,j] instead of pD[i][j], which is faster and simpler.

If I got it right, each [i,j] optimisation is independent of the other values of the array, right? Then, you just have to perform i*j optimisations (if the results were coupled, things would get more complicated...)

This problem may be slow for two reasons: because each optimization is slow, or because i and j are very large. In the latter case you can probably accelerate it a lot using Numba, as for loops are slow in Python.
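
As a rough sketch of the Numba idea (the function body here is a stand-in for your own loop, not your actual formulas):

import numpy as np
from numba import njit

@njit
def fill(a):
    # Numba compiles the plain nested Python loops to machine code,
    # so element-wise work like this runs much faster
    out = np.empty_like(a)
    for i in range(a.shape[0]):
        for j in range(a.shape[1]):
            out[i, j] = 2.0 * a[i, j]
    return out

print(fill(np.ones((3, 3))))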

As already suggested, use some method in scipy.optimize to do each of the optimizations. Be careful with the initial guess, especially if the function has more than one minimum.
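
A minimal sketch of that per-element approach, assuming you can wrap the body of your while loop in a residual function; the residual below is only a toy stand-in for recomputing mFmin[i][j] - mReg[i][j] from a trial value of pD[i][j]:

import numpy as np
from scipy.optimize import minimize_scalar

# toy stand-in: in your code this would recompute mFmin[i][j] and mReg[i][j]
# for a trial value p of pD[i][j] and return their difference
target = np.array([[1.2, 0.8], [1.0, 1.5]])
def residual(p, i, j):
    return p - target[i, j]

pD = np.zeros_like(target)
for i in range(pD.shape[0]):
    for j in range(pD.shape[1]):
        # each element is an independent 1-D problem, so solve them one by one
        res = minimize_scalar(lambda p: residual(p, i, j) ** 2,
                              bounds=(0.0, 10.0), method='bounded')
        pD[i, j] = res.x

print(pD)  # close to target, i.e. where each residual vanishes

If you can bracket a sign change of the residual, scipy.optimize.brentq would also work and is usually faster for 1-D root finding.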

Upvotes: 0

Balint Domokos
Balint Domokos

Reputation: 1021

You could take a look at the optimize package: http://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html

As a simple example, let's say you are looking for the minimum of (x-3)**2. You define a function that takes the input and returns the function value. You pass this function to minimize, along with an initial guess x0.

import numpy as np
from scipy.optimize import minimize
def fn(x):
    return (3-x)**2
x0 = 0
res = minimize(fn, x0, method='nelder-mead', options={ 'disp': True})

This returns approximately 3.0, as expected (the result is in res.x). You can also define fn to take an input vector x; in that case specify x0 as a vector of the same dimension, giving the initial point.

In the example, the 'nelder-mead' method is a simple algorithm which might have a long running time. If you know the gradient of your function to be minimized, you can use more advanced algorithms, e.g. BFGS, and pass the gradient function as well, as described in the doc.
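
For instance, a small sketch of the gradient version for the same toy problem (the jac argument of minimize takes the gradient function):

import numpy as np
from scipy.optimize import minimize

def fn(x):
    return (3 - x[0]) ** 2

def grad(x):
    # analytic gradient of fn
    return np.array([-2.0 * (3 - x[0])])

x0 = np.array([0.0])
res = minimize(fn, x0, method='BFGS', jac=grad, options={'disp': True})
print(res.x)  # approximately [3.]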

Upvotes: 3
