Reputation: 2234
I am running an optimization with scipy.optimize.minimize
sig_init = 2
b_init = np.array([0.2,0.01,0.5,-0.02])
params_init = np.array([b_init, sig_init])
mle_args = (y,x)
results = opt.minimize(crit, params_init, args=(mle_args))
The problem is that I need to set a bound on sig_init, but opt.minimize() requires bounds for each of the input parameters, and one of my inputs is a numpy array. How can I specify the bounds given that one of my inputs is a numpy array?
Upvotes: 3
Views: 4539
Reputation: 91
First of all, scipy.optimize.minimize expects a flat array as its second argument x0 (documentation), which means the function it optimizes must also take a flat array (plus optional additional arguments). Therefore, it is my understanding that you would have to give it something like:
b_init = [0.2, 0.01, 0.5, -0.02]
sig_init = [2]
params_init = np.array(b_init + sig_init)
for the optimization to work. Then you will have to give bounds for each scalar in your array. A rudimentary example, if you wanted [-1, 1] bounds on sig and no bounds on b:
bounds = [(-np.inf, np.inf) for _ in b_init] + [(-1, 1)]
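Putting it together, here is a minimal runnable sketch. Since the question does not show crit, y, or x, I substitute a toy Gaussian negative log-likelihood and synthetic data; the unpacking of the flat parameter vector and the bounds list are the parts that carry over to your problem. Note that sig here is bounded below by a small positive number rather than [-1, 1], which is the more typical constraint for a scale parameter:

```python
import numpy as np
import scipy.optimize as opt

# Synthetic data standing in for the asker's y and x (illustrative only)
rng = np.random.default_rng(0)
x = rng.normal(size=(100, 4))
y = x @ np.array([0.2, 0.01, 0.5, -0.02]) + rng.normal(scale=0.1, size=100)

def crit(params, y, x):
    # unpack the flat parameter vector: first 4 entries are b, last is sig
    b = params[:-1]
    sig = params[-1]
    resid = y - x @ b
    # Gaussian negative log-likelihood as a stand-in objective
    return (0.5 * len(y) * np.log(2 * np.pi * sig**2)
            + 0.5 * np.sum(resid**2) / sig**2)

b_init = [0.2, 0.01, 0.5, -0.02]
sig_init = [2.0]
params_init = np.array(b_init + sig_init)  # one flat array of length 5

# (None, None) means unbounded; sig must stay strictly positive
bounds = [(None, None)] * len(b_init) + [(1e-6, None)]

results = opt.minimize(crit, params_init, args=(y, x),
                       bounds=bounds, method='L-BFGS-B')
print(results.x)
```

Only some methods (e.g. 'L-BFGS-B', 'TNC', 'SLSQP') support the bounds argument, so it is safest to pass method explicitly.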
Upvotes: 1