OCPBUGS-50709,OCPBUGS-62262: DownStream Merge [11-06-2025] #2846
Conversation
Signed-off-by: Nadia Pinaeva <[email protected]>
Signed-off-by: Yun Zhou <[email protected]>
Add support for configuring OVN/OVS file paths through CLI flags and
config file, allowing deployment flexibility across different systems.
New OvnAuthConfig fields (configurable via CLI/config):
- RunDir: OVN runtime directory (default: /var/run/ovn/)
- DbLocation: Database file location (default: /etc/ovn/ovn{n,s}b_db.db)
New OvsPathConfig struct with fields (configurable via CLI/config):
- RunDir: OVS runtime directory (default: /var/run/openvswitch/)
Updated all hardcoded path references to use centralized config
values.
Signed-off-by: jneo8 <[email protected]>
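As a rough illustration of the shape of this change, here is a minimal sketch of configurable path structs and CLI flag wiring; the struct, field, and flag names below are assumptions for illustration, not the exact identifiers in ovn-kubernetes:

```go
package config

import "flag"

// OvnAuthConfig and OvsPathConfig are illustrative stand-ins for the new
// configurable path fields described above.
type OvnAuthConfig struct {
	RunDir     string // OVN runtime directory
	DbLocation string // OVN database file location
}

type OvsPathConfig struct {
	RunDir string // OVS runtime directory
}

// RegisterPathFlags wires the defaults and lets callers override them from
// the command line; the flag names are hypothetical.
func RegisterPathFlags(fs *flag.FlagSet, ovn *OvnAuthConfig, ovs *OvsPathConfig) {
	fs.StringVar(&ovn.RunDir, "ovn-rundir", "/var/run/ovn/", "OVN runtime directory")
	fs.StringVar(&ovn.DbLocation, "ovn-db-location", "/etc/ovn/ovnnb_db.db", "OVN database file location")
	fs.StringVar(&ovs.RunDir, "ovs-rundir", "/var/run/openvswitch/", "OVS runtime directory")
}
```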
Replace Kubernetes-specific parameters (clientset, k8sNodeName) with a generic waitTimeoutFunc callback in RegisterOvnDBMetrics and RegisterOvnNorthdMetrics. This enables use in non-Kubernetes contexts such as a standalone OVN exporter. Signed-off-by: jneo8 <[email protected]>
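A minimal sketch of what such a callback-based signature can look like; only RegisterOvnDBMetrics/RegisterOvnNorthdMetrics and waitTimeoutFunc come from the change description, the rest is illustrative:

```go
package metrics

import (
	"context"
	"time"
)

// WaitTimeoutFunc abstracts "wait until the OVN databases are ready, or give
// up after the timeout". A Kubernetes caller can close over a clientset and
// node name; a standalone OVN exporter can supply a simple poll instead.
type WaitTimeoutFunc func(ctx context.Context, timeout time.Duration) error

// registerOvnDBMetricsSketch shows the decoupled shape: the metrics code only
// depends on the callback, not on Kubernetes types.
func registerOvnDBMetricsSketch(wait WaitTimeoutFunc) error {
	if err := wait(context.Background(), 5*time.Minute); err != nil {
		return err
	}
	// ...register the OVN DB collectors here...
	return nil
}
```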
Add empty string checks before parsing probe interval configuration fields to prevent errors when these fields are not set. In OVN 24.09+, the probe interval configs (ovn-bridge-remote-probe-interval, ovn-remote-probe-interval) may not be set, as they are disabled by default. Reference: https://www.ovn.org/en/releases/24.09/ Signed-off-by: jneo8 <[email protected]>
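A hedged sketch of the guard; the helper name and return shape are illustrative:

```go
package util

import (
	"fmt"
	"strconv"
)

// parseProbeInterval returns (interval, configured, error). In OVN 24.09+ the
// probe interval options may simply be unset, so an empty string means "not
// configured" rather than a parse error.
func parseProbeInterval(value string) (int, bool, error) {
	if value == "" {
		return 0, false, nil // not set; probing disabled by default
	}
	interval, err := strconv.Atoi(value)
	if err != nil {
		return 0, false, fmt.Errorf("invalid probe interval %q: %w", value, err)
	}
	return interval, true, nil
}
```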
Signed-off-by: Dumitru Ceara <[email protected]>
Signed-off-by: arkadeepsen <[email protected]>
Signed-off-by: arkadeepsen <[email protected]>
feat: get ovs/ovn file path from environment variable
Bump OVN to 25.09.
no nftables/iptables operations on DPU nodes
Signed-off-by: Yun Zhou <[email protected]>
Signed-off-by: Yun Zhou <[email protected]>
EgressFirewall objects were retaining managedFields entries for nodes that had been deleted. When a node was deleted, the cleanup logic would apply an empty status object using the deleted node as the field manager. This incorrectly signaled to the API server that the manager was now managing an empty status, leaving a stale entry in managedFields.

This change corrects the cleanup logic in cleanupStatus. The manager for the deleted node now applies an EgressFirewall configuration that completely omits the status field. This correctly signals that the manager is giving up ownership of the field to the server-side apply mechanism, causing the API server to remove the manager's entry from managedFields. This prevents the buildup of stale data in etcd for large clusters with frequent node churn.

The same logic is applied to the other resource types that use the status manager: ANP, APBRoute, EgressQoS, NetworkQoS, EgressService.

Signed-off-by: Riccardo Ravaioli <[email protected]>
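For illustration, a sketch of the server-side-apply pattern the commit describes, using the dynamic client; the helper name and parameters are assumptions, not the actual cleanupStatus code:

```go
package status

import (
	"context"
	"encoding/json"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/dynamic"
)

// relinquishStatus applies a document that omits the status field entirely,
// using the deleted node as the field manager. Server-side apply then removes
// that manager's entry from managedFields instead of leaving a stale
// {"f:status":{}} record behind.
func relinquishStatus(ctx context.Context, client dynamic.Interface, gvr schema.GroupVersionResource,
	apiVersion, kind, namespace, name, deletedNodeManager string) error {
	applyObj := map[string]interface{}{
		"apiVersion": apiVersion,
		"kind":       kind,
		"metadata": map[string]interface{}{
			"name":      name,
			"namespace": namespace,
		},
		// Note: no "status" key at all.
	}
	data, err := json.Marshal(applyObj)
	if err != nil {
		return err
	}
	_, err = client.Resource(gvr).Namespace(namespace).Patch(ctx, name, types.ApplyPatchType, data,
		metav1.PatchOptions{FieldManager: deletedNodeManager}, "status")
	return err
}
```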
[okep] Add note on local-port tunnel keys.
Resources with a condition-based status (EgressQoS, NetworkQoS) store the zone name in the condition Type field ("Ready-In-Zone-$zoneName"), but not in the message field. This caused cleanup to fail because GetZoneFromStatus() couldn't extract the zone name from the message.

Fix this by transforming the output of getMessages(): extract the zone from the condition Type and prepend it to the returned message as "$zoneName: message", matching the format used by message-based resources (EgressFirewalls, AdminPolicyBasedExternalRoutes).

This also fixes needsUpdate(), which now properly detects zone-specific changes, since it compares messages that include the zone name.
Signed-off-by: Riccardo Ravaioli <[email protected]>
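A small sketch of the transformation; the constant and helper names are illustrative:

```go
package status

import (
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// readyInZonePrefix matches condition types of the form "Ready-In-Zone-$zoneName".
const readyInZonePrefix = "Ready-In-Zone-"

// messagesWithZone extracts the zone from each condition Type and prepends it
// to the condition message as "$zoneName: message", mirroring the format used
// by message-based resources so zone parsing works uniformly.
func messagesWithZone(conditions []metav1.Condition) []string {
	var out []string
	for _, c := range conditions {
		if !strings.HasPrefix(c.Type, readyInZonePrefix) {
			continue
		}
		zone := strings.TrimPrefix(c.Type, readyInZonePrefix)
		out = append(out, fmt.Sprintf("%s: %s", zone, c.Message))
	}
	return out
}
```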
When zones are deleted, empty ApplyStatus patches are sent to remove status ownership. Due to a previous bug, these patches left behind stale managedFields entries with signature {"f:status":{}}.
This commit adds a one-time startup cleanup that detects and removes these stale entries by checking whether a managedFields entry has an empty status and belongs to a zone that no longer exists. The purpose is to distinguish managedFields entries that belong to deleted zones from those that belong to external clients (e.g. kubectl). The cleanup runs once, when the status manager starts and zones are first discovered.
Also added a unit test to verify the startup cleanup logic.
Signed-off-by: Riccardo Ravaioli <[email protected]>
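A sketch of the detection step described above; the helper name is hypothetical and the mapping from field manager to zone is simplified:

```go
package status

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/sets"
)

// staleZoneManagers returns the field managers whose managedFields entry owns
// only an empty status ({"f:status":{}}) and whose zone no longer exists.
// Entries from external clients (e.g. kubectl) don't carry that signature and
// are left untouched.
func staleZoneManagers(entries []metav1.ManagedFieldsEntry, existingZones sets.Set[string]) []string {
	var stale []string
	for _, e := range entries {
		if e.FieldsV1 == nil || string(e.FieldsV1.Raw) != `{"f:status":{}}` {
			continue
		}
		if existingZones.Has(e.Manager) {
			continue
		}
		stale = append(stale, e.Manager)
	}
	return stale
}
```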
add management port device allocator and accelerated primary UDN management port support
ANP/BANP don't use a typed status manager, so add an explicit startup cleanup to remove any stale managed fields that might be present from previous versions. Signed-off-by: Riccardo Ravaioli <[email protected]>
During an ovnkube-controller restart, pod add/remove events for EgressIP-served pods may occur before the factory.egressIPPod handler is registered in the watch factory. As a result, the EIP controller is never able to handle the pod delete, leaving a stale logical router policy (LRP) entry.

Scenario: the ovnkube-controller starts. The EIP controller processes the namespace add event (oc.WatchEgressIPNamespaces) and creates an LRP entry for the served pod. The pod is deleted. The factory.egressIPPod handler registration happens afterward via oc.WatchEgressIPPods, so the pod delete event is never processed by the EIP controller.

Fix (see the sketch below):
1. Start oc.WatchEgressIPPods followed by oc.WatchEgressIPNamespaces.
2. Sync EgressIPs before registering the factory.egressIPPod event handler.
3. Remove the EgressIP sync for factory.EgressIPNamespaceType, which is no longer needed.

Signed-off-by: Periyasamy Palanisamy <[email protected]>
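The sketch referenced above, using a hypothetical interface to show only the ordering (the real methods are on the ovnkube controller):

```go
package controller

import "fmt"

// egressIPWatcher abstracts the two registrations mentioned in the fix.
type egressIPWatcher interface {
	// WatchEgressIPPods syncs EgressIPs and then registers the
	// factory.egressIPPod handler.
	WatchEgressIPPods() error
	// WatchEgressIPNamespaces registers the namespace handler.
	WatchEgressIPNamespaces() error
}

// startEgressIPWatches starts the pod watch first so that a pod delete for an
// EgressIP-served pod can never arrive before its handler is registered, and
// only then starts the namespace watch.
func startEgressIPWatches(oc egressIPWatcher) error {
	if err := oc.WatchEgressIPPods(); err != nil {
		return fmt.Errorf("failed to watch EgressIP pods: %w", err)
	}
	if err := oc.WatchEgressIPNamespaces(); err != nil {
		return fmt.Errorf("failed to watch EgressIP namespaces: %w", err)
	}
	return nil
}
```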
When the EIP controller cleans up a stale EIP assignment for a pod, it also removes the pod object from the podAssignment cache. This is incorrect, as it prevents the EIP controller from processing the subsequent pod delete event.

Scenario:
1. pod-1 is served by eip-1, both hosted on node1.
2. node1's ovnkube-controller restarts.
3. The pod add event is received by the EIP controller; no changes are made.
4. eip-1 moves from node1 to node0.
5. The EIP controller receives the eip-1 add event.
6. eip-1 cleans up pod-1's stale assignment (SNAT and LRP) for node1, but removes the pod object from the podAssignment cache when no other assignments are found.
7. The EIP controller programs the LRP entry with node0's transit IP as the next hop, but the pod assignment cache is not updated with a new podAssignmentState.
8. The pod delete event is received by the EIP controller but ignored, since the pod object is missing from the assignment cache.

This commit fixes the issue by adding the podAssignmentState back into the podAssignment cache at step 7 (a simplified sketch follows).

Signed-off-by: Periyasamy Palanisamy <[email protected]>
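The simplified sketch mentioned above; the types are stand-ins for the EIP controller's internal cache, shown only to highlight the re-add at step 7:

```go
package controller

// podAssignmentState is a simplified stand-in for the EIP controller's
// per-pod assignment bookkeeping.
type podAssignmentState struct {
	egressIPName string
	// SNAT/LRP details elided.
}

type eipController struct {
	// podAssignment is keyed by pod namespace/name.
	podAssignment map[string]*podAssignmentState
}

// reassignPod cleans up the stale SNAT/LRP for the old node and reprograms the
// LRP via the new node's transit IP (both elided here), then puts the pod's
// assignment state back into the cache so the later pod delete event is still
// processed instead of being ignored.
func (c *eipController) reassignPod(podKey, egressIPName string) {
	c.podAssignment[podKey] = &podAssignmentState{egressIPName: egressIPName}
}
```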
A new target, lint-run-natively, pulls the golangci-lint binary matching the version in lint.sh and runs the lint check locally. This is useful when a system does not have a container runtime available. Signed-off-by: Jamo Luhrsen <[email protected]>
status manager: remove managedFields for deleted zone upon zone deletion
/ok-to-test
@openshift-pr-manager[bot]: This pull request explicitly references no jira issue.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
@openshift-pr-manager[bot]: trigger 5 job(s) of type blocking for the ci release of OCP 4.21
See details on https://pr-payload-tests.ci.openshift.org/runs/ci/bad95d30-bb3a-11f0-9a6b-19305e30371a-0
trigger 13 job(s) of type blocking for the nightly release of OCP 4.21
See details on https://pr-payload-tests.ci.openshift.org/runs/ci/bad95d30-bb3a-11f0-9a6b-19305e30371a-1
Skipping CI for Draft Pull Request.
@openshift-pr-manager[bot]: This pull request references Jira Issue OCPBUGS-50709, which is invalid. The bug has been updated to refer to the pull request using the external bug tracker.
This pull request references Jira Issue OCPBUGS-62262, which is valid. 3 validation(s) were run on this bug. Requesting review from QA contact. The bug has been updated to refer to the pull request using the external bug tracker.
/jira refresh
@pperiyasamy: This pull request references Jira Issue OCPBUGS-50709, which is valid. 3 validation(s) were run on this bug. Requesting review from QA contact.
This pull request references Jira Issue OCPBUGS-62262, which is valid. 3 validation(s) were run on this bug. Requesting review from QA contact.
/payload-job periodic-ci-openshift-release-master-ci-4.21-e2e-aws-upgrade-ovn-single-node
@jcaamano: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command
See details on https://pr-payload-tests.ci.openshift.org/runs/ci/10931f60-bbd9-11f0-803e-f249359ba1ef-0
/payload-job periodic-ci-openshift-release-master-ci-4.21-e2e-aws-upgrade-ovn-single-node
@jluhrsen: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command
See details on https://pr-payload-tests.ci.openshift.org/runs/ci/bf8d15a0-bc08-11f0-9bbe-2aa9b11023f7-0
/verified by ci
@jluhrsen: This PR has been marked as verified by ci.
/payload-job periodic-ci-openshift-release-master-ci-4.21-e2e-aws-upgrade-ovn-single-node
@pperiyasamy: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command
See details on https://pr-payload-tests.ci.openshift.org/runs/ci/1924d300-be20-11f0-8f49-6a3e403ff6d5-0
/test 4.21-upgrade-from-stable-4.20-e2e-aws-ovn-upgrade-ipsec
@openshift-pr-manager[bot]: The following test failed.
Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED.
This pull-request has been approved by: jcaamano, openshift-pr-manager[bot]
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
@openshift-pr-manager[bot]: Jira Issue Verification Checks: Jira Issue OCPBUGS-50709 has been moved to the MODIFIED state and will move to the VERIFIED state when the change is available in an accepted nightly payload. 🕓
Jira Issue Verification Checks: Jira Issue OCPBUGS-62262 has been moved to the MODIFIED state and will move to the VERIFIED state when the change is available in an accepted nightly payload. 🕓
Automated merge of upstream/master → master.