CMP-3824: Added logic to cover 27967, 33782, 33711, 47346 from downstream -WIP#1071
taimurhafeez wants to merge 5 commits into ComplianceAsCode:master
Conversation
Hi @taimurhafeez. Thanks for your PR. I'm waiting for a ComplianceAsCode member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@taimurhafeez: This pull request references CMP-3824, which is a valid Jira issue. Warning: the referenced Jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.22.0" version, but no target version was set. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: taimurhafeez. The full list of commands accepted by this bot can be found here. The pull request process is described here. Approvers can indicate their approval by writing `/approve` in a comment.

🤖 To deploy this PR, run the following command:
/test e2e-aws-serial-arm
# Conflicts:
#	tests/e2e/framework/common.go
rhmdnd left a comment:

Some comments inline. Thanks!
if err != nil {
	t.Fatal(err)
	// If it's not NON-COMPLIANT, check if it's COMPLIANT (all rules already compliant)
	err2 := f.WaitForSuiteScansStatus(f.OperatorNamespace, suiteName, compv1alpha1.PhaseDone, compv1alpha1.ResultCompliant)
Looks like we need that PR that adds multiple results to WaitForSuiteScansStatus to avoid the double loop.
@Anna-Koudelkova is adding that in another testing PR, and I think you are too. I wonder if we should just pull that out so we can merge that and then rebase.
Otherwise, in the worst case we wait the full time out for a compliance result.
// This allows the test to work regardless of which rules are non-compliant in the environment
remediationList := &compv1alpha1.ComplianceRemediationList{}
listOpts := client.ListOptions{
	Namespace: f.OperatorNamespace,
Could add additional filtering here to only get remediations for the suite in question, but not a real big deal since this is serial and there should only be one suite running at a time.
listOpts := client.ListOptions{
Namespace: f.OperatorNamespace,
LabelSelector: labels.SelectorFromSet(map[string]string{compv1alpha1.SuiteLabel: suiteName}),
}
Completely up to you, it shouldn't affect the test.
t.Fatal(err)
}
appliedRemediations = append(appliedRemediations, updatedRem)
log.Printf("✓ Remediation %d/%d applied successfully: %s\n", i+1, len(ourRemediations), rem.Name)
We don't usually put emojis in test logging. Could refactor this out.
if checkResult.Status != compv1alpha1.CheckResultPass {
	t.Fatalf("Expected check %s to PASS after remediation, got %s", checkName, checkResult.Status)
}
log.Printf("✓ Check %s is PASS\n", checkName)

if mc.Name == rem.GetMcName() {
	return false, nil

for i, rem := range appliedRemediations {
Cleaning up the machine configs pools is required for node remediations, but what about any rules that are using API server remediations?
Are we ensuring the remediations we're applying are only using machine configs? I'm wondering if we need to implement something to make sure API server remediations also get unapplied before the next test case runs.
Actually - looking at the tailored profile, we're only including three RHCOS4 rules, which should be fine for this since those have to use machine configs for remediation.
Rationale: "Test file-based remediation",
},
{
	// Issue 47346: Array value remediation
	Rationale: "Test array value remediation with multiple NTP servers",
},
{
	// Issue 27967: Different remediation type (audit rules)

t.Fatal(err)
}

// Issue 33711: Verify ComplianceRemediation shows "Applied" status
if err != nil {
	t.Fatal(err)
}
// Issue 33711: Verify all checks pass after remediation (Applied → PASS transition)
@taimurhafeez: The following tests failed, say `/retest` to rerun all failed tests. Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
No description provided.