Test suite tidy #319
Conversation
Codecov Report

Patch coverage:

Additional details and impacted files:

```
@@           Coverage Diff           @@
##            master      #319   +/- ##
=========================================
  Coverage   100.00%   100.00%
=========================================
  Files           33        33
  Lines         2807      2805      -2
=========================================
- Hits          2807      2805      -2
```

☔ View full report in Codecov by Sentry.
See some comments inline.
```diff
-@pytest.mark.xfail('astropy' not in sys.modules,
-                   raises=ImportError,
-                   reason="requires astropy package")
+@astropy_mark_xfail
```
Should this test be simply skipped rather than xfailed?
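For reference, the skip-based alternative being asked about would look something like the sketch below (illustrative only: the test name is hypothetical, not anesthetic's actual code):

```python
import sys

import pytest


# Skipping means the test body never runs when astropy is absent,
# so any import-error branch in the code under test is never exercised.
@pytest.mark.skipif('astropy' not in sys.modules,
                    reason="requires astropy package")
def test_astropy_feature():  # hypothetical test name
    assert True
```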
If we don't run it, then we miss 100% coverage -- one could write specific tests which check for failure, but I think this is neater.
Do we really miss 100% coverage? Normally our CI should be set up such that `extras: true` runs all tests without any skipping, which gives us the 100% coverage. `extras: false` tests the minimal anesthetic version and is allowed to have less than 100% coverage. That is why the coverage tests often initially show up as failed: the faster `extras: false` CI jobs finish first with insufficient coverage, and we have to wait for the first `extras: true` job to run through to get 100% coverage.
I've changed xfails to skips so you can see the problem at this permalink
Ah, ok, thanks. It is the try-except blocks in those tests, where we provide informative error messages if a package is not installed, e.g.:

```python
try:
    import getdist
except ModuleNotFoundError:
    raise ImportError("You need to install getdist to use to_getdist")
```

So in this `getdist` example the `xfail` with condition runs the actual test if `getdist` is installed (where codecov misses the `except` block) and checks for this `ImportError` if it is not (where codecov hits the `except` block).
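The conditional-xfail pattern described here can be sketched as follows (a minimal illustration under assumptions: `to_getdist` is a stand-in for the real converter, not anesthetic's actual implementation):

```python
import sys

import pytest


def to_getdist(samples):
    """Stand-in converter with an informative error if getdist is missing."""
    try:
        import getdist  # noqa: F401
    except ModuleNotFoundError:
        raise ImportError("You need to install getdist to use to_getdist")
    return samples  # the real code would build a getdist object here


# The conditional xfail runs the test either way: if getdist is importable
# the body passes normally; if not, the expected ImportError from the
# except branch is raised, so coverage records that branch as hit.
@pytest.mark.xfail('getdist' not in sys.modules,
                   raises=ImportError,
                   reason="requires getdist package")
def test_to_getdist():
    assert to_getdist([1, 2, 3]) == [1, 2, 3]
```

Either outcome exercises `to_getdist`, which is why the test is not simply skipped.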
More comments inline.
Thanks @williamjameshandley, please squash and merge.
Many thanks @lukashergt
Description
This PR begins a long road to upgrading the test suite with more systematic parametrisation. The first step, implemented in the first commit, is to move to skipping tests rather than using 'if' statements.
Fixes #110
Checklist:
- `flake8 anesthetic tests`
- `pydocstyle --convention=numpy anesthetic`
- `python -m pytest`