
Consider switching from combinatorial unit tests to hypothesis testing #23

Open

sgbaird opened this issue Feb 8, 2024 · 1 comment

sgbaird (Owner) commented Feb 8, 2024

The number of individual scripts, notebooks, and tests is getting unwieldy ("1296 changed files", ~135k additions/deletions). I may even need to consider storing the files elsewhere, or leaving them untracked and generating them on the fly during CI.

Probably via: https://hypothesis.readthedocs.io/en/latest/

Recommended by @mseifrid in the context of honegumi.
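For context, an exhaustive combinatorial grid grows multiplicatively with each option axis. A minimal stdlib sketch (the axis names and choices below are hypothetical placeholders, not honegumi's actual grid; note that four axes of six options gives 6**4 = 1296, the same scale as the changed-files count above) of the full grid versus a seeded random sample, which is the kind of selection a property-based tool like Hypothesis would automate:

```python
import itertools
import random

# Hypothetical option axes (placeholders, not the real honegumi options).
# Four axes of six choices each -> 6**4 = 1296 combinations.
axes = {
    "objective": ["a1", "a2", "a3", "a4", "a5", "a6"],
    "model": ["b1", "b2", "b3", "b4", "b5", "b6"],
    "constraint": ["c1", "c2", "c3", "c4", "c5", "c6"],
    "backend": ["d1", "d2", "d3", "d4", "d5", "d6"],
}

full_grid = list(itertools.product(*axes.values()))
assert len(full_grid) == 6 ** 4  # 1296 combinations

# Instead of testing every combination, test a fixed-size random subset.
rng = random.Random(0)  # seeded so CI runs are reproducible
sampled = rng.sample(full_grid, k=50)
```

Hypothesis goes further than plain random sampling (shrinking failing examples, remembering past failures), but the cost model is the same: test time scales with the sample size, not the full grid.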

sgbaird (Owner, Author) commented Feb 25, 2024

Perhaps still generate all unit tests (i.e., all combinations), but only run a subset selected via the Hypothesis-style sampling. Alternatively, a single unit test could generate the test scripts on the fly and then run them dynamically, kind of a weird inception of unit tests.

For example, with 19 runners per Python version (2 cores per runner) and two Python versions, the unit tests take ~16 minutes to run on the linear constraint PR, which is nearly double the time of the previous PR.
