Should pyperf detect when CPython is using a JIT? #683

Comments
I'm not sure it's relevant to CPython's JIT (@brandtbucher should probably weigh in there), but it seems like this code is intended to do more repetitions in the same process to increase the likelihood of code warming up. As an experiment, it's probably worth turning this on for a JIT build and seeing what happens to the numbers. My broader concern is whether this introduces more uncontrolled variables between JIT and non-JIT runs. A big part of what we want to answer is "is CPython faster with the JIT enabled than without?", and if the code being run is different, I worry that would muddy the answer (even if it were mathematically compensated for).
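For anyone wanting to try that experiment without patching pyperf, the JIT-style geometry can be forced explicitly through Runner's keyword arguments. A minimal sketch, assuming the 6-process/10-value split pyperf hardcodes for JIT implementations (the benchmark body is made up):

```python
import pyperf

# Force the geometry pyperf would pick for a JIT implementation
# (6 processes x 10 values) instead of leaving it to autodetection.
runner = pyperf.Runner(processes=6, values=10)
runner.timeit(
    "dict_comprehension",
    stmt="{i: str(i) for i in range(1_000)}",
)
```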
Yeah, let's not do this (at least not until the JIT is on in most builds and we can do this for every "CPython" run). I've never been a huge fan of the tendency to let JITs "warm up" before running benchmarks, since it compares one implementation's peak performance against another's "average" performance. Pyperf already does a bit of warmup for us anyway to populate caches and such, so I'm not sure we have much to gain by just increasing how much warmup we allow ourselves when measuring these things.
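For reference, that existing warmup is already tunable per run; a minimal sketch (I believe pyperf defaults to one warmup value per worker process, but check the docs for your version):

```python
import pyperf

# Run 5 throwaway warmup values in each worker process before the
# values that are actually recorded, instead of the default.
runner = pyperf.Runner(warmups=5)
runner.timeit("sum_range", stmt="sum(range(10_000))")
```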
I might be interested in just seeing if there's a perf difference running CPython under both modes, with the JIT enabled. We work pretty hard to avoid an expensive warmup period, so it could be validating to see that they're both similar.
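A sketch of how the two result files from such an experiment could be compared from Python (the file names are hypothetical; `python -m pyperf compare_to` does the same job from the command line):

```python
import pyperf

# Load two result files produced by the same JIT-enabled build, one
# per process/value geometry (hypothetical file names).
default_geom = pyperf.BenchmarkSuite.load("default_geometry.json")
jit_geom = pyperf.BenchmarkSuite.load("jit_geometry.json")

for name in default_geom.get_benchmark_names():
    a = default_geom.get_benchmark(name).mean()
    b = jit_geom.get_benchmark(name).mean()
    print(f"{name}: {a:.6g}s -> {b:.6g}s ({a / b:.2f}x)")
```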
IMO, "warmup" periods are a kind of cheating; a way for heavyweight JITs, like Graal or LLVM based compilers, to claim better performance than they really have. A single iteration of the whole benchmark as a warmup makes sense as it warms up though. |
Poking around in pyperf, I see that it has some hardcoded options for whether a particular implementation has a JIT or not: _utils.py:192-200
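Roughly, that detection boils down to checking the implementation name; a paraphrased sketch of the referenced lines (the exact code may differ between pyperf versions):

```python
import sys


def python_implementation():
    # "cpython", "pypy", "graalpy", ... (PEP 421)
    return sys.implementation.name.lower()


def python_has_jit():
    # The JIT-capable implementations are hardcoded; a CPython build
    # compiled with the experimental JIT is not detected here.
    name = python_implementation()
    if name == "pypy":
        # PyPy records at translation time whether the JIT was built in.
        return sys.pypy_translation_info["translation.jit"]
    if name in ("graalpython", "graalpy"):
        return True
    return False
```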
The upshot is that implementations with a JIT are run with fewer total processes, but with more values extracted per process:
_runner.py:100-114
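The effect of those lines is roughly the following (a paraphrase of that range; the specific numbers may change between pyperf versions):

```python
# Paraphrased default geometry from pyperf's Runner: total values
# collected = processes * values_per_process (60 either way).
if python_has_jit():
    # JIT: fewer, longer-lived processes so the JIT has time to
    # compile the hot code within each process.
    processes = 6
    values_per_process = 10
else:
    # No JIT: many short-lived processes to average out per-process
    # noise (ASLR, hash randomization, allocator state).
    processes = 20
    values_per_process = 3
```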
I imagine this is mostly relevant only if one wants to compare across implementations, but I am curious what the effect of running with fewer processes/more values would be on measured JIT performance versus base CPython. Or whether this is even relevant to CPython's JIT.