Add additional test checks #216
Conversation
I feel like AVC and test-inspector represent two distinct classes of checks, actually. Checking for AVCs is a property of the test, and the decision to either do that manually or default to making the execution environment do it depends on the test itself. test-inspector is more of a runtime choice. I mean, yeah, enabling this in the test can mean "it's checked to be non-destructive and I don't want to regress on that", but I'd say we need a way to enable/disable checks at runtime too.
Hmm, I wonder: isn't that runtime disabling more a detail of the check itself? Is it something we need to care about here?
Force-pushed from 65f5e8a to 2aae1c8.
I hope we'll be fine with non-parametrized checks and boolean enable/disable for a long time.
Looks fine, would benefit from a command-line override example.
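For illustration, such a runtime override might look like this (the ``--check`` option and its syntax are purely hypothetical, invented for this sketch; tmt's actual command line may end up looking quite different):

```
# Hypothetical syntax, for illustration only -- not an existing tmt option:
# disable the avc check for every test in this run
tmt run --all --check avc=off

# or force-enable it regardless of what the test metadata says
tmt run --all --check avc=on
```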
I am not completely sure about the granularity. Is it really necessary to track checks per individual test case? Especially for the AVC check. @milosmalik, what should the granularity of the AVC check opt-out feature be for your use case? Would it be enough to enable/disable the AVC check for the whole set of tests selected in a plan (L2 metadata), or would it be useful to be able to enable/disable checks per individual test case (L1 metadata)?
This option is ideal for our purposes:
The following option is too broad for our purposes:
@psss ok, so it seems we need it in L1, at least for …
I see, thanks for the feedback, @milosmalik. So it seems for the …
I would argue for a different word with clearer semantics; 'check' sounds too vague to me.
So after the discussion, it seems we will drop this idea because:
I would wait a bit, just to make sure more similar use cases do not pop up from other team members.
Hi, introducing an ABRT check would be nice, too. It could work simply by checking whether anything is present in /var/spool/abrt and/or by grepping for specific patterns in syslog (journal, /var/log/messages), and fail if anything is found. For proper functionality it would of course need ABRT up and running, but since it is preferable to run jobs with --abrt to have crashes reported to QE FAF, should any arise, it shouldn't be a problem. If ABRT is not installed, the check would do nothing and simply pass.
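The behavior proposed above could be sketched roughly like this (a hypothetical stand-alone helper, not tmt code; the spool path comes from the comment, while the function name and the pass/fail strings are assumptions for illustration):

```python
import os
import shutil

ABRT_SPOOL = "/var/spool/abrt"

def check_abrt(spool=ABRT_SPOOL):
    """Sketch of the proposed ABRT check.

    Pass when ABRT is not installed or no crash entries exist,
    fail when the spool directory contains anything.
    A fuller version would also grep the journal for crash patterns.
    """
    if shutil.which("abrt-cli") is None:
        # ABRT not installed: the check does nothing and simply passes
        return "pass"
    if os.path.isdir(spool) and os.listdir(spool):
        # Crash entries present in the spool directory
        return "fail"
    return "pass"
```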
@martinky82
In the wow command line. What it does is put an enable-abrt task at the beginning and a check-abrt task at the end of a job (enable-abrt being much more important in our workflow).
Here's how it works:
This has been stalled for too long. I believe this will need some more discussion before moving forward. Also, the proposed key name …
Failed to load packit config file:
For more info, please check out the documentation or contact the Packit team. |
Force-pushed from 9485aad to 18f6995.
Where is the actual AVC check implemented? I do not see such code in this PR.
This was just a kick-off of the specification.
Add additional checks definition to the spec files and extend the jsonschema with them.

Signed-off-by: Miroslav Vadkerti <[email protected]>
Force-pushed from 18f6995 to cf83440.
After yesterday's discussion, we agreed that the proposed solution should be more generic: a framework should be created in tmt where the user can use a prepared module, or add their own, to check various information on the system (AVC denials, out-of-memory, kernel panic, logs). The check will probably be applied after every test, but running it after each BeakerLib phase or after the whole test plan was also proposed.
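For illustration, per-test (L1) metadata enabling such checks might look like the following fragment (the ``check`` key name and check names follow the discussion above, but the exact syntax is an assumption, not the final spec):

```yaml
# main.fmf -- hypothetical L1 metadata
summary: Verify the frobnicator service
test: ./runtest.sh
check:
  - avc
  - kernel-panic
```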
results for the plan run. It has the same values as
-``TEST_RESULT``. Plan result is counted according to the
+``TEST_OUTCOME``. Plan result is counted according to the
I'm not sure how the changes in this file are related to adding the ``check`` L1 key.
not sure now, feel free to drop it @happz
I'd start hacking on this because it's a feature we'd have use for downstream, but I'd rather wait till #1679 gets resolved and its components merged. Supporting custom checks, together with custom export plugins, sounds like we do need to generalize plugin loading a bit: it was handling step plugins only until very recently, and here a possible third type of plugin has already spawned, so let's make it cleaner and less step-ish. I can imagine having a …
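The kind of generalized check plugin hinted at here might look roughly like the following sketch (class and method names are invented for illustration and do not reflect tmt's actual internals or plugin-loading machinery):

```python
from abc import ABC, abstractmethod

class CheckPlugin(ABC):
    """Hypothetical base class a custom check could derive from."""

    # Name used to enable the check in test metadata
    name: str = "check"

    @abstractmethod
    def after_test(self, test) -> str:
        """Inspect the environment after a test; return 'pass' or 'fail'."""

class AvcCheck(CheckPlugin):
    """Toy example: a check that would scan for SELinux denials."""

    name = "avc"

    def after_test(self, test) -> str:
        # A real implementation would search the audit log for AVC
        # denials; here we only illustrate the plugin shape.
        return "pass"

def run_checks(checks, test):
    """Run all enabled checks and collect their results by name."""
    return {check.name: check.after_test(test) for check in checks}
```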
@happz thanks, I changed the assignee to you
Can I somehow help to move this forward?
If there’s a tmt meeting tomorrow and you’d be able to join, please remind the developers of the discussion mentioned below.
None that I know of. I’m waiting for literally anyone to comment and help me address the blank spots.
This PR introduces additional test checks that make sense to run while running the tests.

The checks are by default controlled on the test level, but can be overridden in plans.

Signed-off-by: Miroslav Vadkerti [email protected]