python - scipy: minimize vs. minimize_scalar; return F versus return F**2; shouldn't make a difference?
I found a behavior I cannot explain. Did I miss something?
I have an implicit function:
    def my_cost_fun(x, a, b, c):
        # x is a scalar; the other variables are provided as numpy arrays
        f = some_fun(x, a, b, c) - x
        return f

I find the root of the function using:
    optimize.fsolve(my_cost_fun, 0.2, args=(a, b, c))
    optimize.brentq(my_cost_fun, -0.2, 0.2, args=(a, b, c))
or minimize the function using:
    optimize.minimize(my_cost_fun, 0.2, args=(a, b, c), method='L-BFGS-B', bounds=((0, a),))
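For concreteness, here is a minimal, self-contained sketch of this setup. The question does not give some_fun or the values of a, b and c, so the function and the scalar constants below are hypothetical stand-ins chosen only so that the snippet runs:

    import numpy as np
    from scipy import optimize

    # Hypothetical stand-ins: any smooth some_fun with a fixed point
    # between -0.2 and 0.2 will do for illustration.
    a, b, c = 0.5, 1.0, 0.05

    def some_fun(x, a, b, c):
        return a * np.sin(b * x) + c

    def my_cost_fun(x, a, b, c):
        # residual of the fixed-point equation some_fun(x) == x
        return some_fun(x, a, b, c) - x

    # root finders look for my_cost_fun(x) == 0
    print(optimize.fsolve(my_cost_fun, 0.2, args=(a, b, c)))        # ~0.0998
    print(optimize.brentq(my_cost_fun, -0.2, 0.2, args=(a, b, c)))  # ~0.0998

    # the minimizer looks for the smallest value of my_cost_fun, not for a root
    res = optimize.minimize(my_cost_fun, 0.2, args=(a, b, c),
                            method='L-BFGS-B', bounds=((0, a),))
    print(res.x)  # not the root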
The strange thing is:
If I use return f:
- the brentq method and fsolve give the same result; %timeit measures a fastest loop of ~250 µs
- L-BFGS-B (as well as SLSQP and TNC) does not change x0 at all and returns a wrong result
If I use return f**2:
- fsolve returns the right solution but converges slowly; 1.2 ms for the fastest loop
- L-BFGS-B returns the right solution but converges slowly: 1.5 ms for the fastest loop (see the timing sketch below)
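As a side note on reproducing such per-call figures outside IPython (where %timeit lives): the standard timeit module gives comparable numbers. The snippet below reuses the hypothetical stand-in setup sketched above, so the absolute timings will differ from the ones quoted in the question:

    import timeit

    n = 1000
    t_brentq = timeit.timeit(
        lambda: optimize.brentq(my_cost_fun, -0.2, 0.2, args=(a, b, c)),
        number=n) / n
    t_lbfgsb = timeit.timeit(
        lambda: optimize.minimize(lambda x: my_cost_fun(x, a, b, c) ** 2, 0.2,
                                  method='L-BFGS-B', bounds=((0, a),)),
        number=n) / n
    print(f"brentq on f:      {t_brentq * 1e6:.0f} µs per call")
    print(f"L-BFGS-B on f**2: {t_lbfgsb * 1e6:.0f} µs per call")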
Can anyone explain why?
As mentioned in the comments:
Here is one possible explanation of why L-BFGS-B is not working when you use return f: if the value of f can be negative, optimize.minimize will try to find the most negative value it can. minimize isn't finding a root, it's finding a minimum. If you return f**2 instead, then since f**2 is always non-negative for real-valued f, the minima of f**2 occur at f = 0, i.e. the minima are the roots.
This doesn't explain the timing issue, but that may be of secondary concern. I would still be curious to study the timing for your particular some_fun(x, a, b, c) if you get a chance to post its definition.
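A short demonstration of that explanation, again using the hypothetical stand-in some_fun from the sketch above (not the asker's actual function): minimizing f itself just drives x to wherever f is most negative within the bounds, while minimizing f**2 lands on the root that brentq and fsolve find.

    def squared_cost(x, a, b, c):
        # f**2 is non-negative, so its minimum sits exactly at the root f == 0
        return my_cost_fun(x, a, b, c) ** 2

    res_f = optimize.minimize(my_cost_fun, 0.2, args=(a, b, c),
                              method='L-BFGS-B', bounds=((0, a),))
    res_f2 = optimize.minimize(squared_cost, 0.2, args=(a, b, c),
                               method='L-BFGS-B', bounds=((0, a),))

    print(res_f.x)   # ends up at the bound x = a, where f is most negative
    print(res_f2.x)  # ~0.0998, matching the brentq/fsolve root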