OSDOCS-14810:Removed 'at least' wording for control plane and infra nodes in ROSA Classic cluster failure doc. #95079

Merged: 1 commit, merged on Jun 30, 2025
4 changes: 2 additions & 2 deletions modules/rosa-policy-failure-points.adoc
@@ -40,9 +40,9 @@ When accounting for possible node failures, it is also important to understand h
 ifndef::openshift-rosa-hcp[]
 [id="rosa-policy-container-cluster-failure_{context}"]
 == Cluster failure
-Single-AZ ROSA clusters have at least three control plane and two infrastructure nodes in the same availability zone (AZ) in the private subnet.
+Single-AZ ROSA clusters have three control plane nodes and two infrastructure nodes in the same availability zone (AZ) in the private subnet.
 
-Multi-AZ ROSA clusters have at least three control plane nodes and three infrastructure nodes that are preconfigured for high availability, either in a single zone or across multiple zones, depending on the type of cluster you have selected. Control plane and infrastructure nodes have the same resiliency as worker nodes, with the added benefit of being managed completely by Red{nbsp}Hat.
+Multi-AZ ROSA clusters have three control plane nodes and three infrastructure nodes that are preconfigured for high availability, one in each AZ. Control plane and infrastructure nodes have the same resiliency as worker nodes, with the added benefit of being managed completely by Red{nbsp}Hat.
 
 In the event of a complete control plane outage, the OpenShift APIs will not function, and existing worker node pods are unaffected. However, if there is also a pod or node outage at the same time, the control planes must recover before new pods or nodes can be added or scheduled.
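The fixed count of three control plane nodes in the changed text reflects etcd's majority-quorum rule: a cluster of n voting members needs floor(n/2) + 1 members alive to serve writes, so it tolerates floor((n-1)/2) failures. A minimal arithmetic sketch (function names are illustrative, not part of ROSA or OpenShift):

```python
def quorum(members: int) -> int:
    # Majority of voting members needed for etcd to keep accepting writes.
    return members // 2 + 1

def tolerated_failures(members: int) -> int:
    # How many members can fail while a majority remains.
    return members - quorum(members)

# Three control plane nodes tolerate one failure; five would tolerate two,
# which is why odd member counts are the usual choice.
print(tolerated_failures(3))  # → 1
print(tolerated_failures(5))  # → 2
```

This is also why a complete control plane outage (all three nodes) stops the OpenShift APIs, while already-scheduled worker pods keep running: quorum is lost, so no new state can be written until the control plane recovers.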