module-upgrade and intro complete
acowles-redhat committed Nov 27, 2024
1 parent 55cd832 commit f2b884c
Showing 26 changed files with 128 additions and 25 deletions.
12 changes: 6 additions & 6 deletions content/modules/ROOT/nav.adoc
@@ -6,14 +6,14 @@
** xref:module-deploy.adoc#deploy-prereqs[Prerequisites for Deployment]
** xref:module-deploy.adoc#deploy-cluster[Deploy Cluster]
** xref:module-deploy.adoc#explore-cluster[Explore NodePools]
* xref:module-config.adoc[Configure Hosted Cluster]
** xref:module-config.adoc#local-auth[Configure Authentication]
** xref:module-config.adoc#test-auth[Test Authentication]
* xref:module-configure.adoc[Configure Hosted Cluster]
** xref:module-configure.adoc#local-auth[Configure Authentication]
** xref:module-configure.adoc#test-auth[Test Authentication]
* xref:module-scale.adoc[AutoScale the Cluster]
** xref:module-scale.adoc#deploy-app[Deploy an Application]
** xref:module-scale.adoc#explore-autoscale[Explore AutoScaling]
** xref:module-scale.adoc#clean-up[Delete Application and Scale Down]
* xref:module-update.adoc[Update the Hosted Cluster]
** xref:module-update.adoc#review-update[Review the Update Process]
** xref:module-update.adoc#apply-update[Apply the Cluster Update]
* xref:module-upgrade.adoc[Upgrade the Hosted Cluster]
** xref:module-upgrade.adoc#review-upgrade[Review the Upgrade Process]
** xref:module-upgrade.adoc#apply-upgrade[Apply the Cluster Upgrade]
* xref:module-summary.adoc[Lab Summary]
33 changes: 28 additions & 5 deletions content/modules/ROOT/pages/index.adoc
@@ -1,21 +1,44 @@
= Introducing OpenShift on OpenShift
= Introducing OpenShift on OpenShift with Hosted Control Planes

This lab seeks to familiarize its audience with the benefits of Red Hat OpenShift on OpenShift as a virtual hosted cluster using Hosted Control Planes (HCP). Hosted Control Planes (formerly the HyperShift project) are a relatively new addition to self-managed OpenShift, made generally available with the 4.14 release, and allow a central management cluster to host additional managed sub-clusters in bare-metal or virtualized form. This greatly increases the efficiency of an OpenShift deployment, saving on physical resource costs, cluster management overhead, and total time to deploy. In addition, this lab explores the advantages of using a centralized management solution for the hosted clusters, provided by Red Hat Advanced Cluster Management for Kubernetes (RHACM).


[[value-prop]]
== Value Prop

*Distributed Clusters for Disaster Recovery:* A single hosted control plane instance deployed to a management or hub cluster could have a number of node pools assigned to the cluster from remote managed locations in disparate data centers, helping to protect against the loss of a cluster in the event of an unplanned disaster or data center outage. Should the worker nodes in one data center become unavailable, the application pods that are non-responsive could be relaunched on additional worker nodes in the cluster that reside in a different location.

*Ease of Cluster Administration:* Service providers, or groups acting in a similar capacity, can provision entirely new cluster environments rapidly and without additional overhead. New clusters can be made available to end users in a matter of minutes and taken down once they are no longer needed. Developers may even be able to request clusters on demand to test a specific application in an isolated environment. The physical separation of control planes and worker nodes also allows cluster administrators to update control planes individually, even without updating a cluster's worker nodes in tandem.

*Reduction of Physical Resources:* A traditional OpenShift deployment requires a minimum of 3 control plane hosts and 2 data plane hosts in order to be deployed as a highly available and fault-tolerant platform. In an environment where a number of separate clusters are deployed, it is more cost-effective to deploy only worker nodes for each new cluster instance and to host each control plane centrally in a data center rather than locally.

*Secure Multi-Tenancy:* In OpenShift, workloads and their responsibilities are often segregated through projects, and access to these projects is governed by user accounts and permissions. While this virtual separation of applications exists, the applications still share the same level of access to all physical system resources as any other application or project. Using hosted control planes to dedicate entire managed OpenShift clusters, either virtual or physical, to projects or end users separates workloads completely and allows for a truer form of multi-tenancy than is traditionally available in OpenShift.

*Secure Network Segregation:* Because the infrastructure is naturally decoupled, hosted control planes allow the control planes themselves to be part of the network domain of the management cluster, while the worker nodes reside on a different physical network and belong to that network's domain. This arrangement ensures that management traffic is separated from data plane traffic, and that deployed applications are physically separated from, and have no access to, the control plane.


[[arc-con]]
== Architectural Concepts

A hosted control planes cluster can be deployed to either bare metal nodes or virtual nodes provisioned with OpenShift Virtualization. In this lab we will be focused on deployments of virtual OpenShift clusters provided by OpenShift Virtualization.

A hosted control planes cluster saves on physical resources by reducing the number of nodes that need to be deployed to support the cluster. Instead of dedicated control plane nodes for each cluster deployment, only worker nodes are provisioned, as the cluster support services are run in pods in a dedicated namespace for the cluster.
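
For illustration, these control plane components are visible on the hosting cluster as ordinary pods. The following is a minimal sketch, assuming the hosted cluster is named `my-hosted-cluster` and was created in the default `clusters` namespace, so its control plane runs in `clusters-my-hosted-cluster`:

[source,bash]
----
# List the hosted control plane pods running on the hosting cluster
# (namespace name is an assumption: <hostedcluster-namespace>-<hostedcluster-name>)
oc get pods -n clusters-my-hosted-cluster
----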

The following graphic compares a Hosted Control Planes cluster to a traditional OpenShift cluster:

image::intro/hosted_control_planes[link=self, window=blank, width=100%]

As stated previously, with virtual clusters provided by OpenShift Virtualization, administration teams can use a single centralized cluster with physical nodes to deploy a large number of individual clusters for multi-tenant workloads.
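
As a rough illustration of that topology, the hosting cluster exposes each virtual cluster as a `HostedCluster` resource, and the worker nodes of a cluster backed by OpenShift Virtualization appear as `VirtualMachine` resources. A sketch, using the example names assumed above:

[source,bash]
----
# List every hosted cluster known to the hosting cluster
oc get hostedclusters -A

# List the OpenShift Virtualization VMs backing one hosted cluster's workers
# (namespace is an assumption: clusters-my-hosted-cluster)
oc get vm -n clusters-my-hosted-cluster
----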

This is an example architecture showing a single hosting cluster, and multiple virtual clusters:

image::intro/hcp_v[link=self, window=blank, width=100%]

Such fleet management is greatly eased by the deployment of https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.12/html/about/index[Red Hat Advanced Cluster Management for Kubernetes (RHACM)^] as a part of the solution.
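
If you also want to see the fleet from the command line, RHACM represents each cluster it manages as a cluster-scoped `ManagedCluster` resource on the hub. A minimal sketch:

[source,bash]
----
# On the hub cluster, list all clusters managed by RHACM,
# including the hub itself (typically shown as local-cluster) and any hosted clusters
oc get managedclusters
----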

The following graphic shows how a managed cluster depends on the hub cluster for a number of advanced features:

image::intro/acm_overview[link=self, window=blank, width=100%]


[[lab-info]]
Expand Down
14 changes: 0 additions & 14 deletions content/modules/ROOT/pages/module-update.adoc

This file was deleted.

94 changes: 94 additions & 0 deletions content/modules/ROOT/pages/module-upgrade.adoc
@@ -0,0 +1,94 @@
= Upgrade the Hosted Cluster

Another reason to use Red Hat Advanced Cluster Management for Kubernetes (RHACM) to deploy and manage hosted clusters is fleet management. The upgrade process for a hosted control planes cluster is a little different from that of standalone clusters. In this module we will explore the upgrade process and then apply an upgrade to our cluster.

IMPORTANT: This lab was created in the late fall, prior to the release of OpenShift 4.18. While the images in the lab guide may not exactly match the lab environment in front of you, the lab tasks remain the same and function identically.

[[review-upgrade]]
== Review the Upgrade Process

. Let's start by logging in to our hosted cluster. Locate the *Console URL* in the *Cluster details* section of *All Clusters* and click the link.
+
image::upgrade/cluster_details.png[link=self, window=blank, width=100%]

. Log in with the *myuser* administrative account using the password *R3dH4t1!*.
+
image::upgrade/hosted_cluster_login.png[link=self, window=blank, width=100%]

. You will be presented with the Administrator Overview, but there is something different from a standard cluster. In the *Details* panel, take note of the *Update Channel*.
+
image::upgrade/admin_overview.png[link=self, window=blank, width=100%]

. Let's see if we can configure an update channel to provide updates to our cluster. In the left-side menu, click *Administration* and select *Cluster Settings* from the drop-down.
+
image::upgrade/left_menu_cluster_settings.png[link=self, window=blank, width=100%]

. On the *Cluster Settings* page, the *Update status* confirms that no channel is configured and that we are not able to set one, because the control plane is hosted. (A command-line view of the same information is sketched at the end of this section.)
+
image::upgrade/update_channel.png[link=self, window=blank, width=100%]

. Close the tab for the hosted cluster, and return to the hosting cluster and the *Cluster details* panel. You will see that there are several ways to initiate the cluster upgrade.

. The first is from the *Cluster details* panel, using the *Actions* drop-down menu available there.
+
image::upgrade/cluster_details_upgrade.png[link=self, window=blank, width=100%]

. If we scroll up the page, we will see another place where we can kick off the upgrade process.
+
image::upgrade/control_plane_status_upgrade.png[link=self, window=blank, width=100%]

. And if we navigate to the very top of the clusters view, we find two more ways to upgrade our cluster specifically: through the *Distribution version* column, and by clicking the three-dot menu.
+
image::upgrade/cluster_list_upgrade.png[link=self, window=blank, width=100%]

. Something else you may notice on this screen is the ability to upgrade the entire fleet, provided by RHACM. By selecting the check box next to each cluster you want to upgrade, you can select an upgrade channel for each and schedule them all to upgrade simultaneously or at specific intervals.
+
image::upgrade/multi_cluster_upgrade.png[link=self, window=blank, width=100%]
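
For reference, the update information reviewed above can also be inspected from the command line. This is a sketch, assuming you run the first command while logged in to the hosted cluster and the second against the hosting cluster, using the example names from this lab:

[source,bash]
----
# From the hosted cluster: current version, channel, and update status
oc get clusterversion

# From the hosting cluster: the release image the hosted control plane is running
# (cluster name and namespace are assumptions from this lab environment)
oc get hostedcluster my-hosted-cluster -n clusters \
  -o jsonpath='{.spec.release.image}{"\n"}'
----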


[[apply-upgrade]]
== Apply the Cluster Upgrade

. Now that we have explored how to begin a cluster upgrade from our hosting cluster environment, let's kick off the upgrade process. (A command-line equivalent is sketched at the end of this section.)

. Starting from the *Cluster list*, let's click the *Upgrade available* link for our hosted cluster.
+
image::upgrade/upgrade_available.png[link=self, window=blank, width=100%]

. A new window appears with a drop-down menu allowing you to select from a number of acceptable release versions, ranging from the latest z-stream release of your current version to the latest available version of OpenShift. Select the latest version, in our case 4.17.6, and click the blue *Upgrade* button.
+
image::upgrade/upgrade_version.png[link=self, window=blank, width=100%]
+
NOTE: Notice that it is quite possible to select an upgrade version for your hosted cluster that is newer than the version of your hosting cluster. This option gives you maximum flexibility for your deployments.

. Under *Distribution version* we now see a spinner and a message that the cluster is currently upgrading. If we want additional details about the process, we can click on *my-hosted-cluster*.
+
image::upgrade/cluster_upgrading.png[link=self, window=blank, width=100%]

. On the *Control plane status* panel we see the same spinner and upgrading message, as well as live updates as each control plane component is upgraded.
+
image::upgrade/control_plane_status_upgrading.png[link=self, window=blank, width=100%]

. The upgrade process can take several minutes, but you will find that it is often much quicker than upgrading a full OpenShift cluster.

. You can also see that the upgrade follows a strict procedure, cycling through control plane components one at a time to ensure cluster availability.
+
image::upgrade/kube_api_degraded.png[link=self, window=blank, width=100%]

. Along the way we will receive live updates as the upgrade process progresses.
+
image::upgrade/cluster_version_progressing_1.png[link=self, window=blank, width=100%]
+
image::upgrade/cluster_version_progressing_2.png[link=self, window=blank, width=100%]

. When the upgrade is complete we will see the *Control plane status* update to show the current version.
+
image::upgrade/control_plane_upgrade_complete.png[link=self, window=blank, width=100%]

. We can also log in to our hosted cluster and see that the Administrator overview console shows the upgraded version.
+
image::upgrade/admin_overview_upgrade_complete.png[link=self, window=blank, width=100%]
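
For completeness, the console action we just performed corresponds roughly to updating the release image on the hosted cluster's resources. The following is a hedged sketch rather than the exact mechanism the console uses; the cluster name, namespace, NodePool name, and release image tag are assumptions based on this lab environment:

[source,bash]
----
# Point the hosted control plane at the new OpenShift release image
oc patch hostedcluster my-hosted-cluster -n clusters --type merge \
  -p '{"spec":{"release":{"image":"quay.io/openshift-release-dev/ocp-release:4.17.6-x86_64"}}}'

# NodePools (worker nodes) are upgraded separately by updating their release image
# (the NodePool name here is hypothetical; list them with: oc get nodepools -n clusters)
oc patch nodepool my-hosted-cluster -n clusters --type merge \
  -p '{"spec":{"release":{"image":"quay.io/openshift-release-dev/ocp-release:4.17.6-x86_64"}}}'

# Watch the upgrade progress
oc get hostedcluster my-hosted-cluster -n clusters -w
----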

== Summary

In this module we explored how upgrading an OpenShift on OpenShift cluster with Hosted Control Planes differs from upgrading a standalone deployment. After reviewing the various upgrade options, we kicked off an upgrade to the latest available version of OpenShift.
