(I don't believe this specific feature has been requested before, although there have been earlier issues about handling long-running benchmarks.)
The feature
An option on `BenchmarkGroup`, something like `BenchmarkGroup::hard_time_limit(d: Duration)`, which, if set, limits the total expected execution time of each individual benchmark to roughly `d`; if the required number of samples is not reached within this time, no output or analysis should be produced.
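To make the proposal concrete, here is a hypothetical usage sketch. `hard_time_limit` is the method proposed in this issue and does not exist in criterion today (so the call is commented out); the sort workload is just a stand-in:

```rust
use criterion::{criterion_group, criterion_main, BatchSize, BenchmarkId, Criterion};

fn bench_algorithms(c: &mut Criterion) {
    let mut group = c.benchmark_group("sort");

    // Proposed API (not part of criterion today): if the required number of
    // samples cannot be collected within 30 seconds, skip the benchmark and
    // produce no analysis or output.
    // group.hard_time_limit(std::time::Duration::from_secs(30));

    for size in [1_000usize, 100_000, 10_000_000] {
        group.bench_with_input(BenchmarkId::new("sort_unstable", size), &size, |b, &n| {
            let data: Vec<u64> = (0..n as u64).rev().collect();
            b.iter_batched(
                || data.clone(),
                |mut v| v.sort_unstable(), // stand-in workload
                BatchSize::LargeInput,
            );
        });
    }
    group.finish();
}

criterion_group!(benches, bench_algorithms);
criterion_main!(benches);
```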
Motivation
I sometimes benchmark different algorithms for a task across a wide range of input sizes; the algorithms have different asymptotic time complexities in practice (e.g. O(n^2) vs. O(n (log n)^2) vs. O(n)). The algorithms that perform well on large inputs may be slow on small inputs, while the best algorithms for small inputs may scale terribly to large inputs -- were I to run them there, they might take ~100x longer than the others. It would be nice to have a way to automatically skip benchmarks that take too long.
Also: I sometimes run benchmarks on different computers, which (depending on the target architecture and the available CPU features) can produce factor-of-10 performance differences; benchmark size ranges tuned for one computer may not work well on another.
Alternatives
- Manually tune the range of input sizes on which benchmarks are run, updating the range whenever there are major algorithm performance changes. (I am currently doing this.)
- Write my own semi-custom benchmark controller that integrates with `criterion`. (Might do this eventually.)
Possible implementation ideas:
- Run each individual benchmark in its own process, and kill it if it runs over time.
- Set a time limit, and after each test iteration check whether the current time has exceeded the limit. (Or: check whether the extrapolated completion time will exceed the limit; a sketch of this is below.)
Ideally there would be some way to indicate a "timeout" result for a benchmark in the plots and `.json` output files.
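A minimal sketch of the second idea, assuming a simplified measurement loop. This is not criterion's internals; `run_with_limit` and its return convention are made up for illustration:

```rust
use std::time::{Duration, Instant};

/// Run `routine` until `samples` timing measurements are collected, but give
/// up early if the extrapolated total run time would exceed `limit`.
/// Returns `None` on timeout so the caller can record a "timeout" result
/// instead of producing analysis from too few samples.
fn run_with_limit<F: FnMut()>(
    mut routine: F,
    samples: usize,
    limit: Duration,
) -> Option<Vec<Duration>> {
    let start = Instant::now();
    let mut measurements = Vec::with_capacity(samples);
    for done in 1..=samples {
        let t = Instant::now();
        routine();
        measurements.push(t.elapsed());

        // Extrapolate total time from the mean time per sample so far; this
        // also catches the simpler case where the limit is already exceeded.
        let projected = start.elapsed() * samples as u32 / done as u32;
        if projected > limit {
            return None;
        }
    }
    Some(measurements)
}
```

The per-process variant would instead spawn each benchmark as a child process and kill it at a deadline, which additionally guards against a single iteration that never returns.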