
Feature request: hard timeouts / automatically skipping very slow benchmarks #838

Open
mstoeckl opened this issue Feb 16, 2025 · 0 comments


mstoeckl commented Feb 16, 2025

(I don't believe this specific feature has been requested before, although there have been earlier issues about handling long-running benchmarks.)

The feature

An option on BenchmarkGroup, e.g. BenchmarkGroup::hard_time_limit(d: Duration), that, when set, limits the total expected execution time of each individual benchmark to roughly d; if the required number of samples cannot be collected within this time, no output or analysis should be produced for that benchmark.
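For illustration, a hypothetical usage sketch. hard_time_limit does not exist in Criterion today (so this would not compile against the current crate); the name and signature are exactly the proposal above, and everything else is ordinary criterion usage with a made-up O(n^2) workload.

```rust
use std::time::Duration;
use criterion::{criterion_group, criterion_main, BenchmarkId, Criterion};

// Deliberately O(n^2) stand-in for an algorithm that is slow on large inputs.
fn quadratic_work(n: usize) -> u64 {
    let mut acc = 0u64;
    for i in 0..n {
        for j in 0..n {
            acc = acc.wrapping_add((i ^ j) as u64);
        }
    }
    acc
}

fn bench_scaling(c: &mut Criterion) {
    let mut group = c.benchmark_group("scaling");
    // Proposed API (not part of Criterion today): cap each benchmark in this
    // group at roughly 5 seconds of total runtime, and emit no analysis or
    // output for benchmarks that cannot finish within that budget.
    group.hard_time_limit(Duration::from_secs(5));

    for n in [100usize, 10_000, 1_000_000] {
        group.bench_with_input(BenchmarkId::from_parameter(n), &n, |b, &n| {
            b.iter(|| quadratic_work(n)); // may blow past the limit for large n
        });
    }
    group.finish();
}

criterion_group!(benches, bench_scaling);
criterion_main!(benches);
```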

Motivation

I sometimes benchmark different algorithms for a task across a wide range of input sizes; in practice the algorithms have different asymptotic time complexities (e.g. O(n^2) vs. O(n (log n)^2) vs. O(n)). The algorithms that perform well on large inputs may be slow on small inputs, while the best algorithms for small inputs may scale terribly to large inputs; were I to run them on large inputs, they might take ~100x longer than the others. It would be nice to have a way to automatically skip benchmarks that take too long.

Also: I sometimes run benchmarks on different computers, which (depending on the target architecture and the CPU features available) can produce factor-of-10 performance differences; a benchmark size range tuned for one computer may not work well on another.

Alternatives

Manually tune the range of input sizes on which benchmarks are run, updating the range whenever there are major algorithm performance changes. (I am currently doing this.)

Write my own semi-custom benchmark controller that integrates with criterion. (Might do this eventually.)

Possible implementation ideas:

  • Run each individual benchmark in its own process, and kill it if it runs over time.
  • Set a time limit, and after each test iteration check whether the current time has exceeded it (or whether the extrapolated completion time will exceed it); see the sketch after this list.
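
As a rough illustration of the second idea, here is a minimal standalone sketch of a sampling loop with a hard limit. This is not Criterion's actual measurement loop; the function name, signature, and structure are assumptions made purely for illustration.

```rust
use std::time::{Duration, Instant};

/// Run `f` repeatedly, collecting up to `target_samples` timings, but abort
/// early if the elapsed time (or the extrapolated total) exceeds `limit`.
/// Returns `None` on timeout so the caller can skip analysis and output.
fn sample_with_limit<F: FnMut()>(
    mut f: F,
    target_samples: u32,
    limit: Duration,
) -> Option<Vec<Duration>> {
    let start = Instant::now();
    let mut samples = Vec::with_capacity(target_samples as usize);

    for collected in 1..=target_samples {
        let iter_start = Instant::now();
        f();
        samples.push(iter_start.elapsed());

        let elapsed = start.elapsed();
        if elapsed > limit {
            return None; // hard limit already exceeded
        }
        // Extrapolate: if the average cost per sample so far implies the full
        // run would blow past the limit, bail out early instead of waiting.
        let projected = elapsed / collected * target_samples;
        if projected > limit {
            return None;
        }
    }
    Some(samples)
}

fn main() {
    // A fast closure completes; a sufficiently slow one would return None.
    let result = sample_with_limit(
        || { std::hint::black_box(1 + 1); },
        100,
        Duration::from_secs(1),
    );
    println!("collected {:?} samples", result.map(|s| s.len()));
}
```

When `None` is returned, the caller could record a "timeout" outcome instead of running the usual analysis, which ties into the point below about representing timeouts in the output.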

Ideally there would be some way to indicate a "timeout" result for a benchmark in the plots and .json output files.
