
Seeking advice on making test more repeatable #59

Open
jaraco opened this issue Jul 13, 2023 · 1 comment

Comments


jaraco commented Jul 13, 2023

In the zipp project (part of CPython's stdlib), I've employed big_O to test the complexity of `zipp.CompleteDirs._implied_dirs`. The tests fail intermittently, particularly on slower platforms (macOS, Windows, PyPy), but occasionally even on Ubuntu in CI; locally they succeed fairly reliably. I'm fairly confident the implementation is linear, but sometimes the test reports it as linearithmic or even cubic.

I'm wondering if you have any advice on how I might tune the test to be more reliable. Perhaps garbage collection is at play, or perhaps the problem is simply the variability of shared compute resources.
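One way to check the garbage-collection hypothesis is to pause the collector around each measurement (which is what `timeit` does by default). A minimal stdlib sketch, with `sorted` as a stand-in workload rather than the actual zipp function:

```python
import gc
import time


def measure(func, *args):
    """Time one call of func with the garbage collector paused, so a
    collection cycle landing mid-measurement cannot inflate the result."""
    was_enabled = gc.isenabled()
    gc.disable()
    try:
        start = time.perf_counter()
        func(*args)
        return time.perf_counter() - start
    finally:
        # Restore the collector only if it was running before.
        if was_enabled:
            gc.enable()


elapsed = measure(sorted, list(range(100_000)))
```

If the scatter disappears with the collector paused, GC pressure is the likely culprit; if it persists, load on the shared CI runners is the more plausible explanation.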


pberkes commented Jul 21, 2023

Hi @jaraco, normally you should be able to make the tests more robust by increasing n_measurements, n_repeats, and n_timings.

Looking at the zipp code, it seems those arguments are left at their default values: 10 measurements, 1 repeat, and 1 timing per repeat.

Increasing the number of measurements between n_min and n_max would improve the accuracy of the complexity estimate, but robustness usually comes from a higher n_repeats: big_O takes the minimum of the repeats, ensuring that each measurement is not inflated by transient system load, initial caching, or similar.
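The minimum-over-repeats idea can be sketched with the stdlib's own `timeit.repeat`; the `timeit` documentation likewise notes that the minimum of repeated runs is the most informative statistic. A sketch with `sorted` as a stand-in workload:

```python
import timeit

# A transient load spike can only make a run slower, never faster, so the
# minimum over several repeats is the least-contaminated estimate of cost.
timings = timeit.repeat("sorted(range(10_000))", number=10, repeat=5)
best = min(timings)
```

Raising `n_repeats` in big_O applies this same filter to every input size before the complexity classes are fit.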

Let me know if increasing these values helps!
