biteopt is an optimizer I found out about only very recently, after digging through the source code for Tom7's lowestcase/uppestcase project. I might not have found it otherwise: my search terms have never surfaced it, and it's even absent from the pretty comprehensive list in your humpday blog post. Anyway, there's quite a hefty readme if you'd like to read about it.
In some preliminary experimentation on humpday.objectives.allobjectives, it seems to perform so-so with a small number of evaluations (n=80) and pretty well with a large number (n=550): roughly speaking, it ranks worse than dlib on the short runs and similar to bobyqa on the long ones. I suspect the weaker performance at low evaluation counts has to do with the highly probabilistic nature of the method, but that's just a guess, as I'm still learning how it works.
It has a (slightly outdated) barebones Python interface of the same name on PyPI (source) and can be invoked like so:
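A rough sketch of a call; note that the exact signature here (the iters, depth, and attempts keyword names in particular) is my assumption about the PyPI wrapper rather than something verified against its source:

```python
import biteopt

def rosenbrock(x):
    # toy 2-d objective
    return (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2

bounds = [(-5.0, 5.0), (-5.0, 5.0)]

# hypothetical call: as far as I can tell it returns only the best point found,
# not its function value; keyword names here are assumptions
x_best = biteopt.minimize(rosenbrock, bounds, iters=550, depth=1, attempts=1)
print(x_best, rosenbrock(x_best))
```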
The interface does not return the optimal function value, so you'll either need to track it within an objective wrapper (see the sketch below) or perform one additional evaluation at the end. The number of attempts defaults to 10, but that puts it at a significant disadvantage relative to the other optimizers in humpday for <1000 evals; I'd try 1 or 2. The depth could be set to 1 or 4, but 1 seems to perform better for me.
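For the wrapper approach, something like this is what I mean (again, the minimize call signature is an assumption, as above):

```python
import biteopt

class TrackingObjective:
    """Wraps an objective and remembers the best (value, point) seen so far,
    so no extra evaluation is needed after the optimizer returns."""
    def __init__(self, f):
        self.f = f
        self.best_val = float("inf")
        self.best_x = None

    def __call__(self, x):
        val = self.f(x)
        if val < self.best_val:
            self.best_val, self.best_x = val, list(x)
        return val

def sphere(x):
    # toy objective
    return sum(xi ** 2 for xi in x)

tracked = TrackingObjective(sphere)
# hypothetical call, same assumed signature as the sketch above
biteopt.minimize(tracked, [(-5.0, 5.0)] * 2, iters=550, depth=1, attempts=1)
print(tracked.best_x, tracked.best_val)
```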
Even if you decide to pass on adding biteopt to humpday, I'd like to hear any thoughts you have on it!