Numerical derivative for vector valued constraints #104
base: master
Conversation
…ed constraint functions without numerical derivative
res = minimize_ipopt(rosen, x0, jac=rosen_der, bounds=bounds, constraints=[eq_cons])
print(res)
It would be ideal to add a unit test that checks this example, or a similar one.
Agreed. See here for how we've written other tests that are scipy-optional. The test can simply be a copy (with the small required changes) of any of the three tests in that module currently marked with @pytest.mark.skipif("scipy" not in sys.modules).
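A minimal sketch of what such a test could look like, reusing the example introduced in this pull request; the test name, the skip reason string, and the exact assertions are assumptions rather than existing code:

```python
import sys

import numpy as np
import pytest

import cyipopt

try:
    # Mirrors the optional-scipy pattern of the existing test module: if
    # scipy is missing, the test below is skipped.
    from scipy.optimize import rosen, rosen_der
except ImportError:
    pass


@pytest.mark.skipif("scipy" not in sys.modules,
                    reason="Test only valid if scipy is available.")
def test_minimize_ipopt_jac_eq_constraints_if_scipy():
    x0 = np.array([0.5, 0.75])
    bounds = [np.array([0, 1]), np.array([-0.5, 2.0])]
    expected_res = 0.25 * np.ones_like(x0)
    # Equality constraint supplied without an analytical Jacobian.
    eq_cons = {"fun": lambda x: x - expected_res, "type": "eq"}
    res = cyipopt.minimize_ipopt(rosen, x0, jac=rosen_der, bounds=bounds,
                                 constraints=[eq_cons])
    assert isinstance(res, dict)
    assert res.get("status") == 0
    assert res.get("success") is True
    np.testing.assert_allclose(res.get("x"), expected_res)
```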
cyipopt/scipy_interface.py
Outdated
@@ -35,6 +35,8 @@
import cyipopt

from scipy.optimize._numdiff import approx_derivative
I worry a little about whether this function should be used, as _numdiff is not a "public" module.
I agree with @moorepants here in principle. However, using finite differencing to get an accurate derivative approximation is complex: an error analysis is required to choose an optimal step size that balances truncation and subtractive cancellation error. Tapping into scipy.optimize does seem like the best way to do it without reimplementing it ourselves or introducing an additional dependency.
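For illustration, a rough sketch of how the scipy helper handles the vector-valued case; the wrapper name and the "2-point" scheme are assumptions for the example, not the code in this pull request:

```python
import numpy as np
from scipy.optimize._numdiff import approx_derivative


def constraint_jacobian(fun, x0):
    # approx_derivative returns an (m, n) array for an m-valued function of
    # n variables, i.e. rows are constraints and columns are variables,
    # which is the layout Ipopt expects for a constraint Jacobian.
    return approx_derivative(fun, x0, method="2-point")


# Example using the equality constraint discussed in this thread.
expected_res = 0.25 * np.ones(2)


def con(x):
    return x - expected_res


print(constraint_jacobian(con, np.array([0.5, 0.75])))  # approximately eye(2)
```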
There are several problems I encountered. As can be seen here, scipy decided to use approx_derivative instead of approx_fprime.
This has several disadvantages I want to discuss. As you can see from the failed pipeline, this is not the best function for approximating a numerical derivative. When testing it on my system I had scipy==1.4.1 installed; in that version the change was not yet incorporated. After upgrading to scipy==1.6.1 the tests fail too.
Led by these observations I've used the implementation given in optpy. If you are willing to review that, I will make a new pull request.
But we have to be aware that the results of the test cases depend on the actual scipy version when no Jacobian function is given!
I think this is a great feature and contribution, thanks. My opinion is that if we are going to supply a numerical approximation on the user's behalf, then we need to do a good job of making sure that it is decently accurate. From my perspective that means going beyond just using a naive finite differencing scheme. Just my opinion though, so a joint consensus from all contributors would be good here.
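As one illustration of what going beyond a naive scheme could mean, a central difference is second-order accurate in the step size, compared with first order for a forward difference; the sketch below uses an assumed fixed step size and is not the implementation proposed here:

```python
import numpy as np


def central_difference_jacobian(fun, x0, epsilon=1e-6):
    # jac[:, i] = (f(x + h * e_i) - f(x - h * e_i)) / (2 * h), with
    # truncation error O(h**2) instead of O(h) for a forward difference.
    x0 = np.asarray(x0, dtype=float)
    f0 = np.atleast_1d(fun(x0))
    jac = np.zeros((f0.size, x0.size))
    for i in range(x0.size):
        step = np.zeros_like(x0)
        step[i] = epsilon
        jac[:, i] = (np.atleast_1d(fun(x0 + step))
                     - np.atleast_1d(fun(x0 - step))) / (2.0 * epsilon)
    return jac


def con(x):
    return x - 0.25


print(central_difference_jacobian(con, [0.5, 0.75]))  # approximately eye(2)
```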
cyipopt/scipy_interface.py
Outdated
@@ -35,7 +35,6 @@
import cyipopt

Leave this blank line here. PEP8 convention is two blank lines between imports and other code.
@@ -63,10 +63,28 @@ def test_minimize_ipopt_nojac_constraints_if_scipy():
    x0 = [1.3, 0.7, 0.8, 1.9, 1.2]
    constr = {"fun": lambda x: rosen(x) - 1.0, "type": "ineq"}
    res = cyipopt.minimize_ipopt(rosen, x0, constraints=constr)
    print(res)
Shouldn't have print statements within tests.
    assert isinstance(res, dict)
    assert np.isclose(res.get("fun"), 1.0)
    assert res.get("status") == 0
    assert res.get("success") is True
    expected_res = np.array([1.001867, 0.99434067, 1.05070075, 1.17906312,
                             1.38103001])
    np.testing.assert_allclose(res.get("x"), expected_res)
Two blank lines between functions as per PEP8 convention.
    assert isinstance(res, dict)
    assert res.get("status") == 0
    assert res.get("success") is True
    np.testing.assert_allclose(res.get("x"), expected_res)
Suggested change:
    np.testing.assert_allclose(res.get("x"), expected_res)
    np.testing.assert_allclose(res.get("x"), expected_res)
Blank line at end of file.
    x0 = np.array([0.5, 0.75])
    bounds = [np.array([0, 1]), np.array([-0.5, 2.0])]
    expected_res = 0.25 * np.ones_like(x0)
    eq_cons = {'fun' : lambda x: x - expected_res, 'type': 'eq'}
Suggested change:
    eq_cons = {'fun' : lambda x: x - expected_res, 'type': 'eq'}
    eq_cons = {"fun": lambda x: x - expected_res, "type": "eq"}
Reformat to align with rest of module.
cyipopt/utils.py
Outdated
@@ -1,13 +1,13 @@
"""Module with utilities for use within CyIpopt.

Currently contains functions to aid with deprecation within CyIpopt.
Currently contains functions to aid with deprecation within CyIpopt and
comoutation of numerical Jacobians.
Suggested change:
comoutation of numerical Jacobians.
computation of numerical Jacobians.
Fix typo.
cyipopt/utils.py
Outdated
        jac = np.zeros([len(x0), len(np.atleast_1d(results[0]))])
        for i in range(len(x0)):
            jac[i] = (results[i + 1] - results[0]) / self.epsilon
        return jac.transpose()
Suggested change:
        return jac.transpose()
        return jac.transpose()
Blank line at end of file.
cyipopt/utils.py
Outdated
        if not key in self.value_cache:
            value = self._func(x, *args, **kwargs)
            if np.any(np.isnan(value)):
                print("Warning! nan function value encountered at {0}".format(x))
Ideally should raise a warning rather than just printing a warning-related message.
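A minimal sketch of the suggested change; the helper name and the choice of RuntimeWarning are assumptions, not code from this pull request:

```python
import warnings

import numpy as np


def check_for_nan(value, x):
    # Emit a real warning (which callers can filter or turn into an error)
    # instead of printing to stdout.
    if np.any(np.isnan(value)):
        warnings.warn("nan function value encountered at {0}".format(x),
                      RuntimeWarning)


check_for_nan(np.array([1.0, np.nan]), np.array([0.5, 0.75]))  # warns
```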
Is there an advantage to using scipy's numerical derivative over Ipopt's own finite-difference approximation?
It is possible that scipy has some options for numerical derivatives that Ipopt doesn't, which could be exposed. Other than that we'd just need to make sure the derivative estimate setting for ipopt gets set (if that is necessary). Either option is fine in my opinion, as long as it works and is tested.
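For comparison, the Ipopt-side alternative alluded to above might look roughly like this; whether the jacobian_approximation option needs to be set explicitly, and whether minimize_ipopt forwards it unchanged through its options argument, are assumptions to verify rather than tested behaviour:

```python
from scipy.optimize import rosen

import cyipopt

x0 = [1.3, 0.7, 0.8, 1.9, 1.2]
constr = {"fun": lambda x: rosen(x) - 1.0, "type": "ineq"}

# Ask Ipopt itself to finite-difference the constraint Jacobian rather than
# relying on a scipy-based approximation.
res = cyipopt.minimize_ipopt(
    rosen, x0, constraints=constr,
    options={"jacobian_approximation": "finite-difference-values"})
print(res.get("x"))
```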
Added a function call to scipy's numerical derivative for vector valued constraint functions without a numerical derivative