Support "expected failures" #15
Comments
This has my vote! For now, I just hack in #if 0 / #endif around works in progress.
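For what it's worth, that trick looks something like this in practice (the suite, test name, and `compute_feature_x()` here are made up, just to show where the guards go):

```cpp
#include <mettle.hpp>
using namespace mettle;

suite<> wip("work in progress", [](auto &_) {
  // The #if 0 / #endif hack described above: the preprocessor drops the whole
  // test, so it never compiles or runs until the guards are removed.
#if 0
  _.test("feature X produces the right answer", []() {
    expect(compute_feature_x(), equal_to(42));  // compute_feature_x() is hypothetical
  });
#endif
});
```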
I'm open to suggestions on how the API should look. So far, I have three possibilities:
One nice thing about (2) and (3) is that it'd be easy to decide at runtime whether to xfail a test in the suite builder (e.g. based on the current environment variables). I'm sort of leaning towards (2) right now, provided I can decide what to do with filters. Maybe add all-new options that say "xfail tests with these attributes" and "don't xfail tests with these other attributes"? Maybe nothing for now?
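Just to make option (2) concrete, here's a rough sketch of how it might be spelled, modeled on the existing skip attribute; `xfail` is hypothetical and not part of mettle's API today, so treat the details as guesses:

```cpp
#include <mettle.hpp>
using namespace mettle;

// Hypothetical: a user-defined bool_attr named "xfail". Today the runner
// wouldn't treat it specially; under option (2) it would mark the test as
// expected to fail, and filters could match on it like any other attribute.
bool_attr xfail("xfail");

suite<> ported("feature X on platform Y", [](auto &_) {
  _.test("does the thing", {xfail("not ported to platform Y yet")}, []() {
    expect(1 + 1, equal_to(2));  // stand-in body; the real test would exercise the unported feature
  });
});
```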
You can also use the approach shown in mettle/test/posix/test_subprocess.cpp, lines 83 to 88 (at 0952f37).
@darenw What are your opinions on what should happen if an xfailed test actually passes? Should that be treated as OK, or should it report an error?
I'd assume such a test is a work in progress, or in need of debugging (tomorrow, maybe), or is testing a part of my project that is itself a work in progress or in need of debugging, so whether the test passes or not is pure accident. If I had a specific reason to expect a test to always fail until something is fixed, that's just a regular ol' fail. But if it's a nuisance, there are two things I'd think of doing: 1) hack a logical negation into the test and hope to remember it's there later, or 2) disable or comment out the test for now and hope to remember to restore it later. You can guess what the problem is with my approach.

In an ideal world, I'd mark the test "temporarily disabled" and have it show up not red or green, but violet or brown or gray or something. Then it's visible, can't be mistaken for a pass, and can't hold up the assembly line, so to speak, by coming up as a fail. But I wouldn't be interested in verifying that the unmodified test always comes up "fail", in any case.
Hmm. On the one hand, the benefit of reporting an error if an xfailed test passes is that it alerts you to the fact that you fixed the test. However, that doesn't help if the test is intermittently failing (e.g. because of a race condition).
This is actually very close to how skipped tests work now. If you skip a test, it shows up as blue and is reported at the end alongside any failures. However, skipped tests don't cause the whole run to fail. For instance, see the OS X log from Travis. If I do support xfailed tests, I want them to be sufficiently different from skipped tests that there's a reason for both to exist. Maybe just having xfailed tests get run (and not caring about the result) is enough of a difference from skipped tests?
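For comparison, skipping a test today looks roughly like this (going from memory of the attribute syntax, so the details may be slightly off; the suite and test names are made up):

```cpp
#include <mettle.hpp>
using namespace mettle;

suite<> skipping("skip example", [](auto &_) {
  // A skipped test is shown in blue and listed at the end of the run, but its
  // body never executes and it doesn't fail the run. An xfailed test would
  // differ by actually running and ignoring (or inverting) the result.
  _.test("not ready on this platform", {skip("waiting on the feature X port")}, []() {
    expect(1 + 1, equal_to(2));
  });
});
```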
It'd be nice to be able to support expected failures so that it's easy to keep your tests green even when something isn't quite finished (e.g. you haven't ported feature X to platform Y).