In the zipp project (part of CPython's stdlib), I've employed big_O to test the complexity of zipp.CompleteDirs._implied_dirs. The tests fail intermittently, particularly on slower platforms (macOS, Windows, PyPy), but even on Ubuntu in CI. They pass fairly reliably locally. I'm fairly confident the implementation is linear, but the test sometimes reports linearithmic or even cubic.
Do you have any advice on how I might tune the test to be more reliable? I wonder if garbage collection is at play, or whether the problem is simply variability from running on shared compute resources.
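For context, the failing assertion follows roughly this shape (a sketch against big_o's documented API; `max` and the random-integer generator are stand-ins for the actual zipp setup, which is elided here):

```python
import big_o

def test_implied_dirs_complexity():
    # Stand-in workload: `max` over a list of n integers is linear in n,
    # mirroring the linear behavior expected of _implied_dirs.
    best, _ = big_o.big_o(
        max,
        lambda n: big_o.datagen.integers(n, 0, 10000),
    )
    # Fails intermittently: a single slow run can make the fit prefer
    # Linearithmic or Cubic over Linear.
    assert isinstance(best, big_o.complexities.Linear)
```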
Hi @jaraco, normally you should be able to make the tests more robust by increasing n_measures, n_repeats, and n_timings.
Looking at the zipp code, it seems those arguments are left at their default values, which means 10 measurements, 1 repeat, and 1 timing per repeat.
Increasing the number of measurements between min_n and max_n would improve the accuracy of the complexity estimate, but robustness usually comes from a higher n_repeats: big_o takes the minimum across the repeats, ensuring that no single repetition is skewed by temporary system load, initial caching, or similar transient effects.
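Concretely, something along these lines, where the parameter values are illustrative rather than a tested recommendation:

```python
import big_o

# Illustrative tuning, not a benchmarked recommendation: more measurement
# points, plus repeats and timings so big_o can take the minimum and
# discard runs perturbed by transient load. `max` is again a stand-in
# for the real function under test.
best, _ = big_o.big_o(
    max,
    lambda n: big_o.datagen.integers(n, 0, 10000),
    min_n=100,
    max_n=100_000,
    n_measures=20,  # more points between min_n and max_n -> better fit
    n_repeats=5,    # big_o keeps the minimum across repeats
    n_timings=3,    # extra timings per repeat
)
print(best)  # ideally a stable Linear fit, even on noisy CI machines
```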