Numerical derivative for vector valued constraints #104
```diff
@@ -35,6 +35,8 @@
 import cyipopt

+from scipy.optimize._numdiff import approx_derivative
+

 class IpoptProblemWrapper(object):
     """Class used to map an scipy minimize definition to a cyipopt problem.
```

Review comment (on the added blank line): Leave this blank line here. PEP8 convention is two blank lines between imports and other code.
```diff
@@ -99,7 +101,7 @@ def __init__(self, fun, args=(), kwargs=None, jac=None, hess=None,
             con_args = con.get('args', [])
             con_kwargs = con.get('kwargs', [])
             if con_jac is None:
-                con_jac = lambda x0, *args, **kwargs: approx_fprime(x0, con_fun, eps, *args, **kwargs)
+                con_jac = lambda x, *args, **kwargs: approx_derivative(con_fun, x, method='2-point', args=args, kwargs=kwargs)
             self._constraint_funs.append(con_fun)
             self._constraint_jacs.append(con_jac)
             self._constraint_args.append(con_args)
```
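For context (an illustrative sketch, not part of the diff): `approx_derivative` returns the full Jacobian of a vector-valued function, which is exactly what a multi-component constraint needs, whereas `approx_fprime` historically expected a scalar-valued function. Note that `scipy.optimize._numdiff` is a private SciPy module.

```python
import numpy as np
from scipy.optimize._numdiff import approx_derivative  # private scipy module

# Vector-valued constraint g: R^2 -> R^2 (the same one as the example script).
def con_fun(x):
    return np.array([2*x[0] + x[1] - 1, x[0]**2 - 0.1])

x = np.array([0.5, 0.0])
jac = approx_derivative(con_fun, x, method='2-point')
# Analytic Jacobian at x = [0.5, 0] is [[2, 1], [2*0.5, 0]] = [[2, 1], [1, 0]]
print(jac)
```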
New file (example script):

```diff
@@ -0,0 +1,16 @@
+import numpy as np
+from scipy.optimize import rosen, rosen_der
+from cyipopt import minimize_ipopt
+
+x0 = np.array([0.5, 0])
+
+bounds = [np.array([0, 1]), np.array([-0.5, 2.0])]
+
+
+eq_cons = {'type': 'eq',
+           'fun' : lambda x: np.array([2*x[0] + x[1] - 1, x[0]**2 - 0.1])
+           }
+
+res = minimize_ipopt(rosen, x0, jac=rosen_der, bounds=bounds, constraints=[eq_cons])
+
+print(res)
```
Review comment: It would be ideal to add a unit test that checks this example, or a similar one.

Review comment: Agreed. See here for how we've written other tests that are scipy-optional. The test can simply be a copy (with the small required changes) of any of the three tests in that module currently marked with
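A sketch of such a test, following the reviewer's suggestion (the test name and the `pytest.importorskip` guards are assumptions about how the scipy-optional tests in that module are structured, not the actual test code):

```python
import numpy as np
import pytest


def eq_con_fun(x):
    # Vector-valued equality constraint from the example script.
    return np.array([2*x[0] + x[1] - 1, x[0]**2 - 0.1])


def test_minimize_ipopt_vector_valued_constraint():
    # Skip cleanly when the optional dependencies are unavailable
    # (hypothetical guard, mirroring the scipy-optional tests).
    sp_opt = pytest.importorskip('scipy.optimize')
    minimize_ipopt = pytest.importorskip('cyipopt').minimize_ipopt

    x0 = np.array([0.5, 0])
    bounds = [np.array([0, 1]), np.array([-0.5, 2.0])]
    res = minimize_ipopt(sp_opt.rosen, x0, jac=sp_opt.rosen_der,
                         bounds=bounds,
                         constraints=[{'type': 'eq', 'fun': eq_con_fun}])
    # Both components of the equality constraint should be (near) zero.
    np.testing.assert_allclose(eq_con_fun(res.x), 0, atol=1e-6)
```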
Review comment: I worry a little about whether this function should be used, as `_numdiff` is not a "public" module.
Review comment: I agree with @moorepants here in principle. That said, using finite differencing to get an accurate derivative approximation is complex, as an error analysis is required to choose an optimal step size that balances truncation and subtractive-cancellation error. Tapping into `scipy.optimize` does seem like the best way to do it without reimplementing it ourselves or introducing an additional dependency.
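The truncation-versus-cancellation trade-off mentioned above can be seen directly (an illustration, not code from the PR): for a forward difference, truncation error shrinks with the step h while rounding/cancellation error grows as h shrinks, so the total error is minimized near h of order sqrt(machine epsilon).

```python
import numpy as np

# Forward-difference derivative of f(x) = exp(x) at x = 1 (exact value is e).
f, x, exact = np.exp, 1.0, np.exp(1.0)
errors = {}
for h in (1e-2, 1e-8, 1e-14):
    errors[h] = abs((f(x + h) - f(x)) / h - exact)
    print(f"h = {h:.0e}  error = {errors[h]:.2e}")
# A moderate step (~1e-8, near sqrt(eps)) beats both a large step
# (truncation-dominated) and a tiny step (cancellation-dominated).
```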
Review comment: There are several problems I encountered. As described here, scipy decided to use `approx_derivative` instead of `approx_fprime`. This has several disadvantages I want to discuss. As you can see from the failed pipeline, this is not the best function for approximating a numerical derivative. When testing on my system I had scipy==1.4.1 installed; in that version the change was not incorporated. When upgrading to scipy==1.6.1, the tests fail too.

Led by these observations, I've used the implementation given in optpy. If you are willing to review that, I will make a new pull request. But we have to be aware that the results of the test cases depend on the actual scipy version when no Jacobian function is given!
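One way to decouple the test results from the SciPy version (a sketch under stated assumptions: it uses SciPy's own `minimize` with SLSQP as a stand-in, since `minimize_ipopt` mirrors this interface): supplying the analytic constraint Jacobian via the `'jac'` key bypasses finite differencing entirely, so the result no longer depends on which numdiff implementation a given SciPy version ships.

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

eq_cons = {'type': 'eq',
           'fun': lambda x: np.array([2*x[0] + x[1] - 1, x[0]**2 - 0.1]),
           # Analytic Jacobian of the constraint: no finite differencing,
           # hence no dependence on the SciPy version's numdiff backend.
           'jac': lambda x: np.array([[2.0, 1.0], [2*x[0], 0.0]])}

res = minimize(rosen, np.array([0.5, 0.0]), jac=rosen_der, method='SLSQP',
               constraints=[eq_cons])
print(res.x)
```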