
Unit Test test_experiment_result tolerance check fails in main and varies based on development setup/platform. #29

Open
danielbarcklow opened this issue Jan 5, 2024 · 0 comments


@danielbarcklow
Contributor

This issue can be reproduced by removing the @pytest.mark.skip decorator from test_experiment_result in tests/unittests/test_experiment.py. It was reproduced both locally and in the GitHub status checks.

def test_experiment_result(config_file):

The values obtained from the test experiment differ from those stored in test_result.csv. It was observed that the score columns only pass with a tolerance of 1e-1 or 1e-2, and that the test results vary across development setups/platforms. Modifications are needed to reduce this variability so that test_experiment_result passes with an appropriately tight tolerance.
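To illustrate the tolerance behavior described above, here is a minimal sketch using only the standard library. The numeric values are hypothetical (not taken from test_result.csv) and are chosen so that the scores differ in the second decimal place, as the issue describes:

```python
import math

# Hypothetical score from test_result.csv and a score recomputed on a
# different platform; values are illustrative, not from the repo.
expected_score = 0.8312
computed_score = 0.8279  # differs by ~3e-3

# A tight tolerance fails, while the looser tolerances mentioned in the
# issue (1e-1, 1e-2) pass.
print(math.isclose(expected_score, computed_score, abs_tol=1e-3))  # False
print(math.isclose(expected_score, computed_score, abs_tol=1e-2))  # True
```

In a pytest test this same check is typically written as `assert computed_score == pytest.approx(expected_score, abs=1e-2)`; the fix the issue asks for would make the scores reproducible enough that a much tighter tolerance suffices.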
