Change benchmark template to be more impartial. #215
Conversation
By defining methods `fast` and `slow`, we lock the benchmark into whatever the results were on a particular Ruby at a particular time. Since the benchmarks are run on multiple versions of Ruby and on multiple Ruby interpreters, which variant is faster is subject to change.
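For context, the pattern under discussion looks roughly like this, a minimal sketch using Ruby's stdlib `Benchmark` rather than the repo's actual template; the variant bodies are made-up examples:

```ruby
require "benchmark"

# The naming pattern under discussion: the method names bake in an
# expected ranking before any measurement happens.
def slow
  (1..100).inject(0) { |acc, n| acc + n }
end

def fast
  (1..100).sum
end

Benchmark.bm(6) do |x|
  x.report("slow:") { 10_000.times { slow } }
  x.report("fast:") { 10_000.times { fast } }
end
```

On a different Ruby version or interpreter, the method named `slow` may well win, at which point the names actively mislead.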
I think that from a more holistic view, […]

I say holistic because there are some pieces missing: […]

Personally, I think the bigger solution is to increase the automated testing until it can provide data from all the different environments and not only show outliers but also flag times where […]

@nirvdrum I'd love to hear your thoughts or rebuttals in response. I'm much more interested in the betterment of this project than in any specific suggestion I have.
I forgot to mention this is a follow-up to an issue I opened long ago: #110. I've looked at many benchmarks over the years and this is the only project that attempts to name the variants in terms of winners. I don't think this is an area we need to try to innovate in. CI currently tests against 13 different Ruby versions, and I expect we'll see more configurations to test YJIT or MJIT. They do not all perform the same way. It's confusing when you run into a benchmark where something named […]

I'm also concerned this biases the benchmarks before they're even written, because the name needs to be chosen before the benchmark is run. I'm sure someone studious enough is going to go back and rename things once results are available, but having to rename at all introduces a new potential source of errors. It just feels inverted to me.

The goal of a benchmark is to evaluate the performance of an approach and possibly compare it to a baseline to establish a performance differential. You're testing hypotheses, not trying to document something you already know.

Many of the benchmarks in this project compare multiple approaches to performing the same logical task. Eliminating that difference is a performance target for each of the Ruby implementations. Only in very few cases do the benchmarks here highlight a fundamental issue that will hold for all time. This repo is a good source for identifying improvements for Ruby implementations, and we should expect the performance profile to change over time.

Renaming all existing benchmarks is a large undertaking. That's why I never really made progress on #110, but it'd be helpful to break the pattern for newly added benchmarks, IMHO.
I forgot to address one of your open questions: there are indeed benchmarks that then add "faster" and "fastest" names. That further confuses the relationships when the performance profile changes. Grepping through the code quickly, we have: […]
I suppose that hits on another issue with the README. It gives a pattern for only two variants. When there are three or more, it leads the author to assume comparative forms of "fast" and "slow" should be used. That ends up meaning "slow" isn't always the slowest option and "fast" isn't always the fastest. While I don't love the "a", "b", "c"... convention, at least it's reasonably clear what you should do when adding a new variant. Moreover, if modifying an existing benchmark, this naming convention doesn't require renaming all of the others to adapt to the new performance relationship.
I agree. I think the name should simply reflect what the method does. Whether it's faster or slower is not a property of the method but of the Ruby optimizations, the Ruby implementation, the platform, etc., so it's just misleading to name a method […]
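A minimal sketch of that idea, again with stdlib `Benchmark`; the variant names here are hypothetical examples, not taken from the repo:

```ruby
require "benchmark"

ARRAY = (1..100).to_a

# Each variant is named for what it does, not for its expected ranking,
# so no rename is needed when a new Ruby changes which one wins.
def sum_with_inject
  ARRAY.inject(0) { |acc, n| acc + n }
end

def sum_with_each
  total = 0
  ARRAY.each { |n| total += n }
  total
end

Benchmark.bm(16) do |x|
  x.report("sum_with_inject") { 10_000.times { sum_with_inject } }
  x.report("sum_with_each")   { 10_000.times { sum_with_each } }
end
```

Adding a third variant (say, `sum_with_sum`) slots in without disturbing the existing names, which is exactly what the "fast"/"slow" scheme can't offer.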
> By defining methods `fast` and `slow`, we lock the benchmark into whatever the results were on a particular Ruby at a particular time. Since the benchmarks are run on multiple versions of Ruby and on multiple Ruby interpreters, which variant is faster is subject to change.

I don't love the names "a" and "b", so I'm very much open to alternative naming conventions, so long as the method name either describes the variant being benchmarked or has an impartial name that just indicates it's another variant.