The problem I'm addressing is that, when writing the tests, we do not know which of them we will later want to skip.
We run some legacy tests nightly. These run older versions of the CLI against the current version of stratisd to detect whether a regression has crept in. Sometimes, though, the tests are too precise and FAIL with the new version of stratisd even when we consider the stratis-cli behavior correct. This can happen if, for example, we change the exception raised from StratisCliEngineError to StratisCliUserError, that is, stratis-cli now pre-checks a condition that we previously expected the engine to report as an error. In either case stratis-cli exits with an error, and we consider that acceptable.
This change puts in place a framework for skipping any test we may later want to skip. For it to work, every test, current and future, must be annotated with the skipIf decorator; a sketch of the idea follows.
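
A minimal sketch of what such a framework might look like, assuming the set of tests to skip is supplied through an environment variable. The variable name `STRATIS_SKIP_TESTS`, the helper `_skip_condition`, and the test class and method names are illustrative assumptions, not the actual implementation:

```python
# Hypothetical sketch: skip any test whose name appears in an
# environment variable. STRATIS_SKIP_TESTS and _skip_condition are
# illustrative names, not the project's real ones.
import os
import unittest

_SKIPPED = frozenset(
    name for name in os.environ.get("STRATIS_SKIP_TESTS", "").split(",") if name
)


def _skip_condition(test_name):
    """
    Return True if this test was requested to be skipped, e.g. because
    a newer stratisd makes its legacy expectations obsolete.
    """
    return test_name in _SKIPPED


class CreateTestCase(unittest.TestCase):
    """
    Example legacy test. Every test is annotated with skipIf so that
    any of them can be skipped later without touching the test body.
    """

    @unittest.skipIf(
        _skip_condition("test_create_same_name"),
        "behavior changed in the stratisd version under test",
    )
    def test_create_same_name(self):
        # ... exercise stratis-cli and assert the expected error ...
        pass
```

Under these assumptions, a nightly run could then skip an obsolete test without any code change, for example with something like `STRATIS_SKIP_TESTS=test_create_same_name python -m unittest`.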