[Feature Request] Objectives RoI #1172
Comments
Also, would it be possible to specify objective weights for ParEGO, to tell it which objective one cares about more? I.e., in the example above, we might care twice as much about accuracy as about inference speed, and it would be cool to order incumbents and to do sampling according to this weighting scheme. So, after normalization (which, if I remember correctly, involves dividing by the standard deviation), applying some supplementary weights would be cool. Not sure if Hypermapper's implementation supports this.

LE: Would it work if, e.g., accuracy was just reported twice?
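To make the weighting idea concrete, here is a minimal numpy sketch of how supplementary user weights could be folded into a ParEGO-style augmented Tchebycheff scalarization. This is not SMAC's actual API; the function name, signature, and the way the weights are combined are all assumptions on my part.

```python
import numpy as np

def weighted_parego_scalarization(costs, sampled_weights, user_weights, rho=0.05):
    """Sketch: fold user-supplied objective preferences into a ParEGO-style
    augmented Tchebycheff scalarization.

    costs           : (n_points, n_objectives) normalized objective values
    sampled_weights : (n_objectives,) random weight vector drawn per iteration
    user_weights    : (n_objectives,) relative importance, e.g. [2.0, 1.0]
                      to care twice as much about the first objective
    """
    # Combine the random ParEGO weights with the user's preferences, renormalize.
    w = np.asarray(sampled_weights, float) * np.asarray(user_weights, float)
    w = w / w.sum()

    weighted = np.asarray(costs, float) * w  # element-wise weighting per objective
    # Augmented Tchebycheff: worst weighted objective plus a small sum term.
    return weighted.max(axis=1) + rho * weighted.sum(axis=1)

# Usage: two configs, normalized (error, inference_time) costs.
costs = np.array([[0.2, 0.8], [0.5, 0.1]])
lam = np.random.default_rng(0).dirichlet(np.ones(2))
print(weighted_parego_scalarization(costs, lam, np.array([2.0, 1.0])))
```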
Hi Bogdan Budescu,

Regarding both of the raised ideas, […]

Best,
Hi Lukas,

Thanks for your reply, and sorry for my belated response.

I know you're a small research team, and I understand. You guys are doing a hell of a job. Your hard work is greatly appreciated, and I imagine your priorities are probably more in line with publishing papers about new algorithms than with maintaining this library beyond what you need for your research.

We've already had a good collaboration on PR automl/ConfigSpace#397 for Issue automl/ConfigSpace#396.

Not sure exactly when I will get a budget for it, but I would absolutely love to contribute. Currently, however, I'm involved with PR #1178 for Issue #1170, and would absolutely need a bit of support with that, because I can see things have been stalling for a while. Once that's done, I can move on to something else, perhaps this Objective RoI task or #1169.

The status on that is that I launched a new 48h session on 64 CPUs, similar to the one from which I collected the CPU load data used to generate the plot in the description of the issue. Once that is done, we can see whether the code I wrote actually improves CPU load (and, hopefully, the costs of the final Pareto-optimal configs for my particular use case in the given time budget). But actually we don't need to wait for my session to finish; perhaps we can think of a better way to test it.

Thanks,
Bogdan
It would be useful to be able to specify to the optimizer a bounding box that defines a region of the objective space outside of which you don't care about results.
E.g., if you try to find optimal metaparameters for a neural net, optimizing a tradeoff between accuracy and inference time, it would be cool to tell the optimizer not to waste time searching for networks with an inference time above one second, no matter how good their accuracy might be, nor for networks with accuracy below 50%, no matter how fast they are.
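For illustration, a crude user-side workaround that can be done today is to clamp the reported costs to the RoI boundary inside one's own target function, so the model sees no benefit in exploring outside the box. This is not an existing SMAC feature; the helper name, cost keys, and bounds below are purely illustrative assumptions.

```python
# Illustrative RoI bounds for the accuracy / inference-time example above.
MAX_INFERENCE_TIME = 1.0  # seconds; we do not care about slower networks
MIN_ACCURACY = 0.5        # we do not care about less accurate networks

def clamp_to_roi(accuracy: float, inference_time: float) -> dict[str, float]:
    """Map raw measurements to the costs reported to the optimizer."""
    if inference_time > MAX_INFERENCE_TIME or accuracy < MIN_ACCURACY:
        # Any point outside the RoI is reported as no better than the RoI
        # corner on either axis, so it is dominated by everything inside the
        # box and the surrogate gains nothing from exploring that region.
        accuracy = min(accuracy, MIN_ACCURACY)
        inference_time = max(inference_time, MAX_INFERENCE_TIME)
    # The optimizer minimizes, so report error rate instead of accuracy.
    return {"error": 1.0 - accuracy, "inference_time": inference_time}

# E.g. a very accurate but slow network is reported as (error=0.5, time=2.0 s)
# and therefore never ends up on the Pareto front:
print(clamp_to_roi(accuracy=0.95, inference_time=2.0))
```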
Hypermapper has this feature: example, API docs; check out the parameters `scalarization_method`, `weight_sampling`, and `bounding_box_limits`. I'm not intimately familiar with the method, but, as far as I can understand from a quick skim, they seem to achieve this by modifying the random weight sampling used during ParEGO (see the sketch below). I think their implementation is based on this paper.
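To illustrate that reading: this is not Hypermapper's actual code, just a sketch of biased weight sampling for a ParEGO-style scalarization, and the Dirichlet parametrization is an assumption on my part.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_scalarization_weights(preference: np.ndarray) -> np.ndarray:
    # preference = np.ones(n_objectives) samples weight vectors uniformly from
    # the simplex; larger entries concentrate the draws on the preferred
    # objectives, which biases which trade-offs the scalarization rewards.
    return rng.dirichlet(preference)

# Care roughly twice as much about the first objective as about the second:
for _ in range(3):
    print(sample_scalarization_weights(np.array([2.0, 1.0])))
```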
LE: I think the original implementation, as per the paper linked above, was done in dragonfly.