diff --git a/asciidoc/day2/downstream-cluster-helm.adoc b/asciidoc/day2/downstream-cluster-helm.adoc
index b6f079e9..0d02a08b 100644
--- a/asciidoc/day2/downstream-cluster-helm.adoc
+++ b/asciidoc/day2/downstream-cluster-helm.adoc
@@ -16,7 +16,11 @@ endif::[]
 ====
 The below sections focus on using `Fleet` functionalities to achieve a Helm chart update.

-Users adopting a third-party GitOps workflow, should take the configurations for their desired helm chart from its `fleet.yaml` located at `fleets/day2/chart-templates/`. *Make sure you are retrieving the chart data from a valid "Day 2" Edge link:https://github.com/suse-edge/fleet-examples/releases[release].*
+For use-cases where third-party GitOps tool usage is desired, see:
+
+* For `EIB deployed Helm chart upgrades` - <>.
+
+* For `non-EIB deployed Helm chart upgrades` - retrieve the chart version supported by the desired Edge release from the <> page and populate the chart version and URL in your third-party GitOps tool.
 ====

 === Components

@@ -31,13 +35,13 @@ Depending on what your environment supports, you can take one of the following o

 . Host your chart's Fleet resources on a local Git server that is accessible by your `management cluster`.

-. Use Fleet's CLI to link:https://fleet.rancher.io/bundle-add#convert-a-helm-chart-into-a-bundle[convert a Helm chart into a Bundle] that you can directly use and will not need to host somewhere. Fleet's CLI can be retrieved from their link:https://github.com/rancher/fleet/releases[release] page, for Mac users there is a link:https://formulae.brew.sh/formula/fleet-cli[fleet-cli] Homebrew Formulae.
+. Use Fleet's CLI to link:https://fleet.rancher.io/bundle-add#convert-a-helm-chart-into-a-bundle[convert a Helm chart into a Bundle] that you can use directly, without needing to host it anywhere. Fleet's CLI can be retrieved from the link:https://github.com/rancher/fleet/releases[release] page; for Mac users, there is a link:https://formulae.brew.sh/formula/fleet-cli[fleet-cli] Homebrew formula.

 ==== Find the required assets for your Edge release version

 . Go to the Day 2 link:https://github.com/suse-edge/fleet-examples/releases[release] page and find the Edge 3.X.Y release that you want to upgrade your chart to and click *Assets*.

-. From the release's *Assets* section, download the following files, which are required for an air-gapped upgrade of a SUSE supported helm chart:
+. From the *"Assets"* section, download the following files:
 +
 [cols="1,1"]
 |======
@@ -45,25 +49,25 @@ Depending on what your environment supports, you can take one of the following o
 |*Name*
 |*Description*

 |_edge-save-images.sh_
-|This script pulls the images in the `edge-release-images.txt` file and saves them to a '.tar.gz' archive that can then be used in your air-gapped environment.
+|Pulls the images specified in the `edge-release-images.txt` file and packages them into a '.tar.gz' archive.

 |_edge-save-oci-artefacts.sh_
-|This script pulls the SUSE OCI chart artefacts in the `edge-release-helm-oci-artefacts.txt` file and creates a '.tar.gz' archive of a directory containing all other chart OCI archives.
+|Pulls the OCI chart images related to the specific Edge release and packages them into a '.tar.gz' archive.

 |_edge-load-images.sh_
-|This script loads the images in the '.tar.gz' archive generated by `edge-save-images.sh`, retags them and pushes them to your private registry.
+|Loads images from a '.tar.gz' archive, retags them and pushes them to a private registry.
 |_edge-load-oci-artefacts.sh_
-|This script takes a directory containing '.tgz' SUSE OCI charts and loads all OCI charts to your private registry. The directory is retrieved from the '.tar.gz' archive that the `edge-save-oci-artefacts.sh` script has generated.
+|Takes a directory containing Edge OCI '.tgz' chart packages and loads them to a private registry.

 |_edge-release-helm-oci-artefacts.txt_
-|This file contains a list of OCI artefacts for the SUSE Edge release Helm charts.
+|Contains a list of OCI chart images related to a specific Edge release.

 |_edge-release-images.txt_
-|This file contains a list of images needed by the Edge release Helm charts.
+|Contains a list of images related to a specific Edge release.
 |======

-==== Create the SUSE Edge release images archive
+==== Create the Edge release images archive

 _On a machine with internet access:_

@@ -74,23 +78,28 @@ _On a machine with internet access:_
 chmod +x edge-save-images.sh
 ----

-. Use `edge-save-images.sh` script to create a _Docker_ importable '.tar.gz' archive:
+. Generate the image archive:
 +
 [,bash]
 ----
 ./edge-save-images.sh --source-registry registry.suse.com
 ----

-. This will create a ready to load `edge-images.tar.gz` (unless you have specified the `-i|--images` option) archive with the needed images.
+. This will create a ready-to-load archive named `edge-images.tar.gz`.
++
+[NOTE]
+====
+If the `-i|--images` option is specified, the name of the archive may differ.
+====

-. Copy this archive to your *air-gapped* machine
+. Copy this archive to your *air-gapped* machine:
 +
 [,bash]
 ----
 scp edge-images.tar.gz @:/path
 ----

-==== Create a SUSE Edge Helm chart OCI images archive
+==== Create the Edge OCI chart images archive

 _On a machine with internet access:_

@@ -101,23 +110,28 @@ _On a machine with internet access:_
 chmod +x edge-save-oci-artefacts.sh
 ----

-. Use `edge-save-oci-artefacts.sh` script to create a '.tar.gz' archive of all SUSE Edge Helm chart OCI images:
+. Generate the OCI chart image archive:
 +
 [,bash]
 ----
 ./edge-save-oci-artefacts.sh --source-registry registry.suse.com
 ----

-. This will create a `oci-artefacts.tar.gz` archive containing all SUSE Edge Helm chart OCI images
+. This will create an archive named `oci-artefacts.tar.gz`.
++
+[NOTE]
+====
+If the `-a|--archive` option is specified, the name of the archive may differ.
+====

-. Copy this archive to your *air-gapped* machine
+. Copy this archive to your *air-gapped* machine:
 +
 [,bash]
 ----
 scp oci-artefacts.tar.gz @:/path
 ----

-==== Load SUSE Edge release images to your air-gapped machine
+==== Load Edge release images to your air-gapped machine

 _On your air-gapped machine:_

@@ -135,14 +149,19 @@ podman login
 chmod +x edge-load-images.sh
 ----

-. Use `edge-load-images.sh` to load the images from the *copied* `edge-images.tar.gz` archive, retag them and push them to your private registry:
+. Execute the script, passing the previously *copied* `edge-images.tar.gz` archive:
 +
 [,bash]
 ----
 ./edge-load-images.sh --source-registry registry.suse.com --registry --images edge-images.tar.gz
 ----
++
+[NOTE]
+====
+This will load all images from the `edge-images.tar.gz` archive, retag them and push them to the registry specified with the `--registry` option.
+====

-==== Load SUSE Edge Helm chart OCI images to your air-gapped machine
+==== Load the Edge OCI chart images to your air-gapped machine

 _On your air-gapped machine:_

@@ -169,7 +188,7 @@ tar -xvf oci-artefacts.tar.gz

 . This will produce a directory with the naming template `edge-release-oci-tgz-`

-. Pass this directory to the `edge-load-oci-artefacts.sh` script to load the SUSE Edge helm chart OCI images to your private registry:
+. Pass this directory to the `edge-load-oci-artefacts.sh` script to load the Edge OCI chart images to your private registry:
 +
 [NOTE]
 ====
@@ -189,11 +208,6 @@ For K3s, see link:https://docs.k3s.io/installation/registry-mirror[Embedded Regi

 === Upgrade procedure

-[NOTE]
-====
-The below upgrade procedure utilises Rancher's <> funtionality. Users using a third-party GitOps workflow should retrieve the chart versions supported by each Edge release from the <> and populate these versions to their third-party GitOps workflow.
-====
-
 This section focuses on the following Helm upgrade procedure use-cases:

 . <>

@@ -212,6 +226,15 @@ Manually deployed Helm charts cannot be reliably upgraded. We suggest to redeplo

 For users that want to manage their Helm chart lifecycle through Fleet.

+This section covers how to:
+
+. <>.
+
+. <>.
+
+. <>.
+
+[#day2-helm-upgrade-new-cluster-prepare-fleet]
 ===== Prepare your Fleet resources

 . Acquire the Chart's Fleet resources from the Edge link:https://github.com/suse-edge/fleet-examples/releases[release] tag that you wish to use

 .. *If you intend to use a GitOps workflow*, copy the chart Fleet directory to the Git repository from where you will do GitOps.

-.. *Optionally*, if the Helm chart requires configurations to its *values*, edit the `.helm.values` configuration inside the `fleet.yaml` file of the copied directory
+.. *Optionally*, if the Helm chart requires custom configuration of its *values*, edit the `.helm.values` configuration inside the `fleet.yaml` file of the copied directory.

-.. *Optionally*, there may be use-cases where you need to add additional resources to your chart's fleet so that it can better fit your environment. For information on how to enhance your Fleet directory, see link:https://fleet.rancher.io/gitrepo-content[Git Repository Contents]
+.. *Optionally*, in some use-cases you may need to add additional resources to your chart's fleet so that it better fits your environment. For information on how to enhance your Fleet directory, see link:https://fleet.rancher.io/gitrepo-content[Git Repository Contents].

 An *example* for the `longhorn` helm chart would look like:

-* User Git repository strucutre:
+* User Git repository structure:
 +
 [,bash]
 ----
@@ -286,18 +309,20 @@ diff:
 These are just example values that are used to illustrate custom configurations over the `longhorn` chart. They should *NOT* be treated as deployment guidelines for the `longhorn` chart.
 ====

+[#day2-helm-upgrade-new-cluster-deploy-fleet]
 ===== Deploy your Fleet

-If the environment supports working with a GitOps workflow, you can deploy your Chart Fleet by either using a link:https://fleet.rancher.io/ref-gitrepo[GitRepo] or link:https://fleet.rancher.io/bundle-add[Bundle].
+If the environment supports working with a GitOps workflow, you can deploy your Chart Fleet using either a <> or a <>.

 [NOTE]
 ====
 While deploying your Fleet, if you get a `Modified` message, make sure to add a corresponding `comparePatches` entry to the Fleet's `diff` section. For more information, see link:https://fleet.rancher.io/bundle-diffs[Generating Diffs to Ignore Modified GitRepos].
 ====

+[#day2-helm-upgrade-new-cluster-deploy-fleet-gitrepo]
 ====== GitRepo

-Fleet's GitRepo resource holds information on how to access your chart's Fleet resources and to which clusters it needs to apply those resources.
+Fleet's link:https://fleet.rancher.io/ref-gitrepo[GitRepo] resource holds information on how to access your chart's Fleet resources and to which clusters it needs to apply those resources.

 The `GitRepo` resource can be deployed through the link:https://ranchermanager.docs.rancher.com/v2.8/integrations-in-rancher/fleet/overview#accessing-fleet-in-the-rancher-ui[Rancher UI], or manually, by link:https://fleet.rancher.io/tut-deployment[deploying] the resource to the `management cluster`.

@@ -326,9 +351,10 @@ spec:
   - clusterSelector: {}
 ----

+[#day2-helm-upgrade-new-cluster-deploy-fleet-bundle]
 ====== Bundle

-`Bundle` resources hold the raw Kubernetes resources that need to be deployed by Fleet. Normally it is encouraged to use the `GitRepo` approach, but for use-cases where the environment is air-gapped and cannot support a local Git server, `Bundles` can help you in propagating your Helm chart Fleet to your target clusters.
+link:https://fleet.rancher.io/bundle-add[Bundle] resources hold the raw Kubernetes resources that need to be deployed by Fleet. Normally it is encouraged to use the `GitRepo` approach, but for use-cases where the environment is air-gapped and cannot support a local Git server, `Bundles` can help you propagate your Helm chart Fleet to your target clusters.

 The `Bundle` can be deployed either through the Rancher UI (`Continuous Delivery -> Advanced -> Bundles -> Create from YAML`) or by manually deploying the `Bundle` resource in the correct Fleet namespace. For information about Fleet namespaces, see the upstream link:https://fleet.rancher.io/namespaces#gitrepos-bundles-clusters-clustergroups[documentation].

@@ -392,6 +418,7 @@ kubectl apply -f longhorn-bundle.yaml

 Following these steps will ensure that `Longhorn` is deployed on all of the specified target clusters.

+[#day2-helm-upgrade-new-cluster-manage-chart]
 ===== Managing the deployed Helm chart

 Once deployed with Fleet, for Helm chart upgrades, see <>.
@@ -399,18 +426,18 @@ Once deployed with Fleet, for Helm chart upgrades, see <>.

+. Determine the version to which you need to upgrade your chart so that it is compatible with the desired Edge release. Helm chart versions per Edge release can be viewed in the <>.

-. In your Fleet monitored Git repository, edit the Helm chart's `fleet.yaml` file with the correct chart *version* and *repository* from the <>.
+. In your Fleet-monitored Git repository, edit the Helm chart's `fleet.yaml` file with the correct chart *version* and *repository* from the <>.

-. After commiting and pushing the changes to your repository, this will trigger an upgrade of the desired Helm chart
+. Committing and pushing the changes to your repository will trigger an upgrade of the desired Helm chart.

 [#day2-helm-upgrade-eib-chart]
 ==== I would like to upgrade an EIB deployed Helm chart

-EIB deploys Helm charts by creating a `HelmChart` resource and utilising the `helm-controller` introduced by the link:https://docs.rke2.io/helm[RKE2]/link:https://docs.k3s.io/helm[K3s] Helm integration feature.
+EIB deploys Helm charts by creating a `HelmChart` resource and utilizing the `helm-controller` introduced by the link:https://docs.rke2.io/helm[RKE2]/link:https://docs.k3s.io/helm[K3s] Helm integration feature.
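+
+A quick way to see what such an upgrade will patch is to list the `HelmChart` resources on a downstream cluster. The commands below are an illustrative sketch only, assuming the default `kube-system` namespace used by the RKE2/K3s Helm integration and a hypothetical chart named `longhorn`:
+
+[,bash]
+----
+# List the HelmChart resources managed by the helm-controller
+kubectl get helmcharts.helm.cattle.io -A
+
+# Inspect the chart version that a specific HelmChart currently ships
+kubectl get helmchart longhorn -n kube-system -o jsonpath='{.spec.version}'
+----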
-To ensure that an EIB deployed Helm chart is successfully upgraded, users would need to do an upgrade over the `HelmChart` resources created for the Helm chart by EIB.
+To ensure that an EIB deployed Helm chart is successfully upgraded, users need to upgrade the `HelmChart` resources that EIB created for the chart.

 Below you can find information on:

@@ -442,7 +469,7 @@ image::day2_helm_chart_upgrade_diagram.png[]

 . <> detects the deployed resource, parses its data and deploys its resources to the specified target clusters. The most notable resources that are deployed are:

-.. `eib-charts-upgrader` - a Job that deployes the `Chart Upgrade Pod`. The `eib-charts-upgrader-script` as well as all `helm chart upgrade data` secrets are mounted inside of the `Chart Upgrade Pod`.
+.. `eib-charts-upgrader` - a Job that deploys the `Chart Upgrade Pod`. The `eib-charts-upgrader-script` as well as all `helm chart upgrade data` secrets are mounted inside of the `Chart Upgrade Pod`.

 .. `eib-charts-upgrader-script` - a Secret shipping the script that will be used by the `Chart Upgrade Pod` to patch an existing `HelmChart` resource.

@@ -461,7 +488,7 @@ image::day2_helm_chart_upgrade_diagram.png[]
 [#day2-helm-upgrade-eib-chart-upgrade-steps]
 ===== Upgrade Steps

-. Clone the link:https://github.com/suse-edge/fleet-examples[suse-edge/fleet-examples] repository from the Edge link:https://github.com/suse-edge/fleet-examples/releases[relase tag] that you wish to use.
+. Clone the link:https://github.com/suse-edge/fleet-examples[suse-edge/fleet-examples] repository from the Edge link:https://github.com/suse-edge/fleet-examples/releases[release tag] that you wish to use.

 . Create a directory in which you will store the pulled Helm chart archive(s).
 +
 [,bash]
 ----
@@ -481,7 +508,7 @@ helm pull [chart URL | repo/chartname]
 # helm pull [chart URL | repo/chartname] --version 0.0.0
 ----

-. From the desired link:https://github.com/suse-edge/fleet-examples/releases[relase tag] download the `generate-chart-upgrade-data.sh` script
+. From the desired link:https://github.com/suse-edge/fleet-examples/releases[release tag] download the `generate-chart-upgrade-data.sh` script.

 . Execute the `generate-chart-upgrade-data.sh` script:
 +
 [,bash]
 ----
@@ -511,7 +538,7 @@ The script will go through the following logic:

 ... The `base64` encoded Helm chart archive that will be used to replace the `HelmChart's` currently running configuration.

-.. Each `Kubernetes Secret YAML` resource will be transferted to the `base/secrets` directory inside of the path to the `eib-charts-upgrader` Fleet that was given under `--fleet-path`.
+.. Each `Kubernetes Secret YAML` resource will be transferred to the `base/secrets` directory inside of the path to the `eib-charts-upgrader` Fleet that was given under `--fleet-path`.

 .. Furthermore the `generate-chart-upgrade-data.sh` script ensures that the secrets that it moved will be picked up and used in the Helm chart upgrade logic. It does that by:

@@ -567,11 +594,11 @@ For information on how to map target clusters, see the upstream link:https://fle
 fleet apply --compress --targets-file=targets.yaml -n fleet-default -o - eib-charts-upgrade > bundle.yaml
 ----
 +
-This will create a Bundle (`bundle.yaml`) that will hold all the templated resoruce from the `eib-charts-upgrader` Fleet.
+This will create a Bundle (`bundle.yaml`) that will hold all the templated resources from the `eib-charts-upgrader` Fleet.
 +
 For more information regarding the `fleet apply` command, see link:https://fleet.rancher.io/cli/fleet-cli/fleet_apply[fleet apply].
 +
-For more information regaring converting Fleets to Bundles, see link:https://fleet.rancher.io/bundle-add#convert-a-helm-chart-into-a-bundle[Convert a Helm Chart into a Bundle].
+For more information regarding converting Fleets to Bundles, see link:https://fleet.rancher.io/bundle-add#convert-a-helm-chart-into-a-bundle[Convert a Helm Chart into a Bundle].

 .. Deploy the `Bundle`. This can be done in one of two ways:

@@ -583,6 +610,13 @@ Executing these steps will result in a successfully deployed `GitRepo/Bundle` re

 For information on how to track the upgrade process, you can refer to the <> section of this documentation.

+[IMPORTANT]
+====
+Once the chart upgrade has been successfully verified, remove the `Bundle/GitRepo` resource.
+
+This removes the upgrade resources that are no longer necessary from your downstream cluster, preventing version clashes during future upgrades.
+====
+
 [#day2-helm-upgrade-eib-chart-example]
 ===== Example

@@ -642,11 +676,11 @@ image::day2_helm_chart_upgrade_example_1.png[]

 Follow the <>:

-. Clone the `suse-edge/fleet-example` repository from the `release-3.1.0` tag.
+. Clone the `suse-edge/fleet-example` repository from the `release-3.1.1` tag.
 +
 [,bash]
 ----
-git clone -b release-3.1.0 https://github.com/suse-edge/fleet-examples.git
+git clone -b release-3.1.1 https://github.com/suse-edge/fleet-examples.git
 ----

 . Create a directory where the `Longhorn` upgrade archive will be stored.

@@ -670,7 +704,7 @@ helm pull rancher-charts/longhorn-crd --version 104.2.0+up1.7.1
 helm pull rancher-charts/longhorn --version 104.2.0+up1.7.1
 ----

-. Outside of the `archives` directory, download the `generate-chart-upgrade-data.sh` script from the `release-3.1.0` release tag.
+. Outside of the `archives` directory, download the `generate-chart-upgrade-data.sh` script from the `release-3.1.1` release tag.

 . Directory setup should look similar to:
 +
diff --git a/asciidoc/day2/downstream-cluster-k8s.adoc b/asciidoc/day2/downstream-cluster-k8s.adoc
index 5cde0b3b..c323fbb7 100644
--- a/asciidoc/day2/downstream-cluster-k8s.adoc
+++ b/asciidoc/day2/downstream-cluster-k8s.adoc
@@ -68,7 +68,7 @@ An example of defining custom tolerations for the RKE2 *control-plane* SUC Plan,
 apiVersion: upgrade.cattle.io/v1
 kind: Plan
 metadata:
-  name: rke2-plan-control-plane
+  name: rke2-upgrade-control-plane
 spec:
 ...
 tolerations:
@@ -99,6 +99,24 @@ spec:
 This section assumes you will be deploying *SUC Plans* using <>. If you intend to deploy the *SUC Plan* using a different approach, refer to <>.
 ====

+[IMPORTANT]
+====
+For environments previously upgraded using this procedure, users should ensure that *one* of the following steps is completed:
+
+* `Remove any previously deployed SUC Plans related to older Edge release versions from the downstream cluster` - can be done by removing the desired _downstream_ cluster from the existing `GitRepo/Bundle` target configuration, or removing the `GitRepo/Bundle` resource altogether.
+
+* `Reuse the existing GitRepo/Bundle resource` - can be done by pointing the resource's revision to a new tag that holds the correct fleets for the desired `suse-edge/fleet-examples` link:https://github.com/suse-edge/fleet-examples/releases[release].
+
+This is done to avoid clashes between `SUC Plans` of different Edge release versions.
+
+If users attempt to upgrade while there are existing `SUC Plans` on the _downstream_ cluster, they will see the following fleet error:
+
+[,bash]
+----
+Not installed: Unable to continue with install: Plan in namespace exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error..
+----
+====
+
 The `Kubernetes version upgrade procedure` revolves around deploying *SUC Plans* to downstream clusters. These plans hold information that instructs the *SUC* on which nodes to create Pods which run the `rke2/k3s-upgrade` images. For information regarding the structure of a *SUC Plan*, refer to the https://github.com/rancher/system-upgrade-controller?tab=readme-ov-file#example-plans[upstream] documentation.

 `Kubernetes upgrade` Plans are shipped in the following ways:

@@ -114,7 +132,7 @@ For a full overview of what happens during the _update procedure_, refer to the
 [#k8s-version-upgrade-overview]
 ==== Overview

-This section aims to describe the full workflow that the *_Kubernetes version upgrade process_* goes throught from start to finish.
+This section aims to describe the full workflow that the *_Kubernetes version upgrade process_* goes through from start to finish.

 .Kubernetes version upgrade workflow
 image::day2_k8s_version_upgrade_diagram.png[]

@@ -127,17 +145,17 @@ Kubernetes version upgrade steps:

 .. For *GitRepo/Bundle* configuration options, refer to <> or <>.

-. The user deploys the configured *GitRepo/Bundle* resource to the `fleet-default` namespace in his `management cluster`. This is done either *manually* or thorugh the *Rancher UI* if such is available.
+. The user deploys the configured *GitRepo/Bundle* resource to the `fleet-default` namespace in their `management cluster`. This is done either *manually* or through the *Rancher UI*, if available.

 . <> constantly monitors the `fleet-default` namespace and immediately detects the newly deployed *GitRepo/Bundle* resource. For more information regarding what namespaces does Fleet monitor, refer to Fleet's https://fleet.rancher.io/namespaces[Namespaces] documentation.

 . If the user has deployed a *GitRepo* resource, `Fleet` will reconcile the *GitRepo* and based on its *paths* and *fleet.yaml* configurations it will deploy a *Bundle* resource in the `fleet-default` namespace. For more information, refer to Fleet's https://fleet.rancher.io/gitrepo-content[GitRepo Contents] documentation.

-. `Fleet` then proceeds to deploy the `Kubernetes resources` from this *Bundle* to all the targeted `downstream clusters`. In the context of the `Kubernetes version upgrade`, Fleet deploys the following resources from the *Bundle* (depending on the Kubernetes distrubution):
+. `Fleet` then proceeds to deploy the `Kubernetes resources` from this *Bundle* to all the targeted `downstream clusters`. In the context of the `Kubernetes version upgrade`, Fleet deploys the following resources from the *Bundle* (depending on the Kubernetes distribution):

-.. `rke2-plan-agent`/`k3s-plan-agent` - instructs *SUC* on how to do a Kubernetes upgrade on cluster *_agent_* nodes. Will *not* be interpreted if the cluster consists only from _control-plane_ nodes.
+.. `rke2-upgrade-worker`/`k3s-upgrade-worker` - instructs *SUC* on how to do a Kubernetes upgrade on cluster *_worker_* nodes. Will *not* be interpreted if the cluster consists only of _control-plane_ nodes.

-.. `rke2-plan-control-plane`/`k3s-plan-control-plane` - instructs *SUC* on how to do a Kubernetes upgrade on cluster *_control-plane_* nodes.
+.. `rke2-upgrade-control-plane`/`k3s-upgrade-control-plane` - instructs *SUC* on how to do a Kubernetes upgrade on cluster *_control-plane_* nodes.
 +
 [NOTE]
 ====
@@ -154,7 +172,7 @@ The above *SUC Plans* will be deployed in the `cattle-system` namespace of each

 .. Kill the `rke2/k3s` process that is running on the node OS - this instructs the *supervisor* to automatically restart the `rke2/k3s` process using the new version.

-.. https://kubernetes.io/docs/reference/kubectl/generated/kubectl_uncordon/[Uncordon] cluster node - after the successful Kubernetes distribution upgrade, the node is again marked as `scheduable`.
+.. https://kubernetes.io/docs/reference/kubectl/generated/kubectl_uncordon/[Uncordon] cluster node - after the successful Kubernetes distribution upgrade, the node is again marked as `schedulable`.
 +
 [NOTE]
 ====
@@ -166,6 +184,8 @@ With the above steps executed, the Kubernetes version of each cluster node shoul
 [#k8s-upgrade-suc-plan-deployment]
 === Kubernetes version upgrade - SUC Plan deployment

+This section describes how to orchestrate the deployment of *SUC Plans* related to Kubernetes upgrades using Fleet's *GitRepo* and *Bundle* resources.
+
 [#k8s-upgrade-suc-plan-deployment-git-repo]
 ==== SUC Plan deployment - GitRepo resource

@@ -180,50 +200,45 @@ Once deployed, to monitor the Kubernetes upgrade process of the nodes of your ta
 [#k8s-upgrade-suc-plan-deployment-git-repo-rancher]
 ===== GitRepo creation - Rancher UI

-. In the upper left corner, *☰ -> Continuous Delivery*
+To create a `GitRepo` resource through the Rancher UI, follow the official link:https://ranchermanager.docs.rancher.com/integrations-in-rancher/fleet/overview#accessing-fleet-in-the-rancher-ui[documentation].

-. Go to *Git Repos -> Add Repository*
+The Edge team maintains ready-to-use fleets for both the link:https://github.com/suse-edge/fleet-examples/tree/release-3.1.1/fleets/day2/system-upgrade-controller-plans/rke2-upgrade[rke2] and link:https://github.com/suse-edge/fleet-examples/tree/release-3.1.1/fleets/day2/system-upgrade-controller-plans/k3s-upgrade[k3s] Kubernetes distributions that users can add as a `path` in their GitRepo resource.

-If you use the `suse-edge/fleet-examples` repository:
-
-. *Repository URL* - `https://github.com/suse-edge/fleet-examples.git`
-
-. *Watch -> Revision* - choose a link:https://github.com/suse-edge/fleet-examples/releases[release] tag for the `suse-edge/fleet-examples` repository that you wish to use
-
-. Under *Paths* add the path to the Kubernetes distribution upgrade Fleets as seen in the release tag:
+[IMPORTANT]
+====
+Always use these fleets from a valid Edge link:https://github.com/suse-edge/fleet-examples/releases[release] tag.
+====

-.. For RKE2 - `fleets/day2/system-upgrade-controller-plans/rke2-upgrade`
+For use-cases where no custom tolerations need to be added to the `SUC plans` that these fleets ship, users can directly reference the fleets from the `suse-edge/fleet-examples` repository.

-.. For K3s - `fleets/day2/system-upgrade-controller-plans/k3s-upgrade`
+In cases where custom tolerations are needed, users should reference the fleets from a separate repository, allowing them to add the tolerations to the SUC plans as required.

-. Select *Next* to move to the *target* configuration section. *Only select clusters for which you wish to upgrade the desired Kubernetes distribution*
+Configuration examples for a `GitRepo` resource using the fleets from the `suse-edge/fleet-examples` repository:

-. *Create*
+* link:https://github.com/suse-edge/fleet-examples/blob/release-3.1.1/gitrepos/day2/rke2-upgrade-gitrepo.yaml[RKE2]

-Alternatively, if you decide to use your own repository to host these files, you would need to provide your repo data above.
+* link:https://github.com/suse-edge/fleet-examples/blob/release-3.1.1/gitrepos/day2/k3s-upgrade-gitrepo.yaml[K3s]

 [#k8s-upgrade-suc-plan-deployment-git-repo-manual]
 ===== GitRepo creation - manual

-. Choose the desired Edge link:https://github.com/suse-edge/fleet-examples/releases[release] tag that you wish to apply the Kubernetes *SUC upgrade Plans* from (referenced below as `$\{REVISION\}`).
-
 . Pull the *GitRepo* resource:

 ** For *RKE2* clusters:
 +
 [,bash]
 ----
-curl -o rke2-upgrade-gitrepo.yaml https://raw.githubusercontent.com/suse-edge/fleet-examples/${REVISION}/gitrepos/day2/rke2-upgrade-gitrepo.yaml
+curl -o rke2-upgrade-gitrepo.yaml https://raw.githubusercontent.com/suse-edge/fleet-examples/refs/tags/release-3.1.1/gitrepos/day2/rke2-upgrade-gitrepo.yaml
 ----

 ** For *K3s* clusters:
 +
 [,bash]
 ----
-curl -o k3s-upgrade-gitrepo.yaml https://raw.githubusercontent.com/suse-edge/fleet-examples/${REVISION}/gitrepos/day2/k3s-upgrade-gitrepo.yaml
+curl -o k3s-upgrade-gitrepo.yaml https://raw.githubusercontent.com/suse-edge/fleet-examples/refs/tags/release-3.1.1/gitrepos/day2/k3s-upgrade-gitrepo.yaml
 ----

-. Edit the *GitRepo* configuration, under `spec.targets` specify your desired target list. By default the `GitRepo` resources from the `suse-edge/fleet-examples` are *NOT* mapped to any down stream clusters.
+. Edit the *GitRepo* configuration and, under `spec.targets`, specify your desired target list. By default, the `GitRepo` resources from `suse-edge/fleet-examples` are *NOT* mapped to any downstream clusters.

 ** To match all clusters change the default `GitRepo` *target* to:
 +
@@ -260,8 +275,8 @@ kubectl get gitrepo k3s-upgrade -n fleet-default

 # Example output
 NAME          REPO                                              COMMIT         BUNDLEDEPLOYMENTS-READY   STATUS
-k3s-upgrade   https://github.com/suse-edge/fleet-examples.git   release-3.0.1  0/0
-rke2-upgrade  https://github.com/suse-edge/fleet-examples.git   release-3.0.1  0/0
+k3s-upgrade   https://github.com/suse-edge/fleet-examples.git   release-3.1.1  0/0
+rke2-upgrade  https://github.com/suse-edge/fleet-examples.git   release-3.1.1  0/0
 ----

 [#k8s-upgrade-suc-plan-deployment-bundle]
 ==== SUC Plan deployment - Bundle resource

@@ -278,6 +293,15 @@ Once deployed, to monitor the Kubernetes upgrade process of the nodes of your ta
 [#k8s-upgrade-suc-plan-deployment-bundle-rancher]
 ===== Bundle creation - Rancher UI

+The Edge team maintains ready-to-use bundles for both the link:https://github.com/suse-edge/fleet-examples/blob/release-3.1.1/bundles/day2/system-upgrade-controller-plans/rke2-upgrade/plan-bundle.yaml[rke2] and link:https://github.com/suse-edge/fleet-examples/blob/release-3.1.1/bundles/day2/system-upgrade-controller-plans/k3s-upgrade/plan-bundle.yaml[k3s] Kubernetes distributions that can be used in the steps below.
+
+[IMPORTANT]
+====
+Always use these bundles from a valid Edge link:https://github.com/suse-edge/fleet-examples/releases[release] tag.
+====
+
+To create a bundle through Rancher's UI:
+
 . In the upper left corner, click *☰ -> Continuous Delivery*

 . Go to *Advanced* > *Bundles*

@@ -285,14 +309,15 @@ Once deployed, to monitor the Kubernetes upgrade process of the nodes of your ta
 . Select *Create from YAML*

 . From here you can create the Bundle in one of the following ways:
++
+[NOTE]
+====
+There might be use-cases where you would need to include custom tolerations in the `SUC plans` that the bundle ships. Make sure to include those tolerations in the bundle that will be generated by the steps below.
+====

-.. By manually copying the *Bundle* content to the *Create from YAML* page. Content can be retrieved:
-
-... For RKE2 - https://raw.githubusercontent.com/suse-edge/fleet-examples/$\{REVISION\}/bundles/day2/system-upgrade-controller-plans/rke2-upgrade/plan-bundle.yaml
-
-... For K3s - https://raw.githubusercontent.com/suse-edge/fleet-examples/$\{REVISION\}/bundles/day2/system-upgrade-controller-plans/k3s-upgrade/plan-bundle.yaml
+.. By manually copying the bundle content for link:https://raw.githubusercontent.com/suse-edge/fleet-examples/refs/tags/release-3.1.1/bundles/day2/system-upgrade-controller-plans/rke2-upgrade/plan-bundle.yaml[RKE2] or link:https://raw.githubusercontent.com/suse-edge/fleet-examples/refs/tags/release-3.1.1/bundles/day2/system-upgrade-controller-plans/k3s-upgrade/plan-bundle.yaml[K3s] from `suse-edge/fleet-examples` to the *Create from YAML* page.

-.. By cloning the link:https://github.com/suse-edge/fleet-examples.git[suse-edge/fleet-examples] repository to the desired link:https://github.com/suse-edge/fleet-examples/releases[release] tag and selecting the *Read from File* option in the *Create from YAML* page. From there, navigate to the bundle that you need (`/bundles/day2/system-upgrade-controller-plans/rke2-upgrade/plan-bundle.yaml` for RKE2 and `/bundles/day2/system-upgrade-controller-plans/k3s-upgrade/plan-bundle.yaml` for K3s). This will auto-populate the *Create from YAML* page with the Bundle content
+.. By cloning the link:https://github.com/suse-edge/fleet-examples.git[suse-edge/fleet-examples] repository from the desired link:https://github.com/suse-edge/fleet-examples/releases[release] tag and selecting the *Read from File* option in the *Create from YAML* page. From there, navigate to the bundle that you need (`bundles/day2/system-upgrade-controller-plans/rke2-upgrade/plan-bundle.yaml` for RKE2 and `bundles/day2/system-upgrade-controller-plans/k3s-upgrade/plan-bundle.yaml` for K3s). This will auto-populate the *Create from YAML* page with the bundle content.

 . Change the *target* clusters for the `Bundle`:

@@ -312,25 +337,23 @@ spec:
 [#k8s-upgrade-suc-plan-deployment-bundle-manual]
 ===== Bundle creation - manual

-. Choose the desired Edge link:https://github.com/suse-edge/fleet-examples/releases[release] tag that you wish to apply the Kubernetes *SUC upgrade Plans* from (referenced below as `$\{REVISION\}`).
-
 . Pull the *Bundle* resources:

 ** For *RKE2* clusters:
 +
 [,bash]
 ----
-curl -o rke2-plan-bundle.yaml https://raw.githubusercontent.com/suse-edge/fleet-examples/${REVISION}/bundles/day2/system-upgrade-controller-plans/rke2-upgrade/plan-bundle.yaml
+curl -o rke2-plan-bundle.yaml https://raw.githubusercontent.com/suse-edge/fleet-examples/refs/tags/release-3.1.1/bundles/day2/system-upgrade-controller-plans/rke2-upgrade/plan-bundle.yaml
 ----

 ** For *K3s* clusters:
 +
 [,bash]
 ----
-curl -o k3s-plan-bundle.yaml https://raw.githubusercontent.com/suse-edge/fleet-examples/${REVISION}/bundles/day2/system-upgrade-controller-plans/k3s-upgrade/plan-bundle.yaml
+curl -o k3s-plan-bundle.yaml https://raw.githubusercontent.com/suse-edge/fleet-examples/refs/tags/release-3.1.1/bundles/day2/system-upgrade-controller-plans/k3s-upgrade/plan-bundle.yaml
 ----

-. Edit the `Bundle` *target* configurations, under `spec.targets` provide your desired target list. By default the `Bundle` resources from the `suse-edge/fleet-examples` are *NOT* mapped to any down stream clusters.
+. Edit the `Bundle` *target* configurations and, under `spec.targets`, provide your desired target list. By default, the `Bundle` resources from `suse-edge/fleet-examples` are *NOT* mapped to any downstream clusters.

 ** To match all clusters change the default `Bundle` *target* to:
 +
@@ -376,7 +399,7 @@ rke2-upgrade  0/0

 There might be use-cases where users would like to incorporate the Kubernetes upgrade resources to their own third-party GitOps workflow (e.g. `Flux`).

-To get the upgrade resources that you need, first determine the he Edge link:https://github.com/suse-edge/fleet-examples/releases[release] tag of the link:https://github.com/suse-edge/fleet-examples.git[suse-edge/fleet-examples] repository that you would like to use.
+To get the upgrade resources that you need, first determine the Edge link:https://github.com/suse-edge/fleet-examples/releases[release] tag of the link:https://github.com/suse-edge/fleet-examples.git[suse-edge/fleet-examples] repository that you would like to use.

 After that, the resources can be found at:

@@ -384,13 +407,13 @@ After that, the resources can be found at:

 ** For `control-plane` nodes - `fleets/day2/system-upgrade-controller-plans/rke2-upgrade/plan-control-plane.yaml`

-** For `agent` nodes - `fleets/day2/system-upgrade-controller-plans/rke2-upgrade/plan-agent.yaml`
+** For `worker` nodes - `fleets/day2/system-upgrade-controller-plans/rke2-upgrade/plan-worker.yaml`

 * For a K3s cluster upgrade:

 ** For `control-plane` nodes - `fleets/day2/system-upgrade-controller-plans/k3s-upgrade/plan-control-plane.yaml`

-** For `agent` nodes - `fleets/day2/system-upgrade-controller-plans/k3s-upgrade/plan-agent.yaml`
+** For `worker` nodes - `fleets/day2/system-upgrade-controller-plans/k3s-upgrade/plan-worker.yaml`

 [IMPORTANT]
 ====
diff --git a/asciidoc/day2/downstream-cluster-os.adoc b/asciidoc/day2/downstream-cluster-os.adoc
index d96b6c2b..214e9ac7 100644
--- a/asciidoc/day2/downstream-cluster-os.adoc
+++ b/asciidoc/day2/downstream-cluster-os.adoc
@@ -26,9 +26,9 @@ A different link:https://www.freedesktop.org/software/systemd/man/latest/systemd

 ** First a link:https://en.opensuse.org/SDB:Zypper_usage#Updating_packages[normal package upgrade]. Done in order to ensure that all packages are with the latest version before the migration. Mitigating any failures related to old package version.

-** After that it proceeds with the OS migration process by utilising the `zypper migration` command.
+** After that it proceeds with the OS migration process by utilizing the `zypper migration` command.

-Shipped through a *SUC plan*, which should be located on each *downstream cluster* that is in need of a OS upgrade.
+Shipped through a *SUC plan*, which should be located on each *downstream cluster* that is in need of an OS upgrade.

 === Requirements

@@ -59,7 +59,7 @@ An example of defining custom tolerations for the *control-plane* SUC Plan, woul
 apiVersion: upgrade.cattle.io/v1
 kind: Plan
 metadata:
-  name: cp-os-upgrade-edge-3XX
+  name: os-upgrade-control-plane
 spec:
 ...
 tolerations:
@@ -85,7 +85,7 @@ spec:

 _Air-gapped:_

-. *Mirror SUSE RPM repositories* - OS RPM repositories should be locally mirrored so that `os-pkg-update.service/os-migration.service` can have access to them. This can be achieved using link:https://github.com/SUSE/rmt[RMT].
+. *Mirror SUSE RPM repositories* - OS RPM repositories should be locally mirrored so that `os-pkg-update.service/os-migration.service` can have access to them. This can be achieved by using either link:https://documentation.suse.com/sles/15-SP6/html/SLES-all/book-rmt.html[RMT] or link:https://documentation.suse.com/suma/5.0/en/suse-manager/index.html[SUMA].

 === Update procedure

 [NOTE]
 ====
 This section assumes you will be deploying the `OS upgrade` *SUC Plan* using <>. If you intend to deploy the *SUC Plan* using a different approach, refer to <>.
 ====

-The `OS upgrade procedure` revolves around deploying *SUC Plans* to downstream clusters. These plans then hold information about how and on which nodes to deploy the `os-pkg-update.service/os-migration.service`. For information regarding the structure of a *SUC Plan*, refer to the https://github.com/rancher/system-upgrade-controller?tab=readme-ov-file#example-plans[upstream] documentation.
+[IMPORTANT]
+====
+For environments previously upgraded using this procedure, users should ensure that *one* of the following steps is completed:
+
+* `Remove any previously deployed SUC Plans related to older Edge release versions from the downstream cluster` - can be done by removing the desired _downstream_ cluster from the existing `GitRepo/Bundle` target configuration, or removing the `GitRepo/Bundle` resource altogether.
+
+* `Reuse the existing GitRepo/Bundle resource` - can be done by pointing the resource's revision to a new tag that holds the correct fleets for the desired `suse-edge/fleet-examples` link:https://github.com/suse-edge/fleet-examples/releases[release].
+
+This is done to avoid clashes between `SUC Plans` of different Edge release versions.
+
+If users attempt to upgrade while there are existing `SUC Plans` on the _downstream_ cluster, they will see the following fleet error:
+
+[,bash]
+----
+Not installed: Unable to continue with install: Plan in namespace exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error..
+----
+====
+
+The `OS upgrade procedure` revolves around deploying *SUC Plans* to downstream clusters. These plans hold information about how and on which nodes to deploy the `os-pkg-update.service/os-migration.service`. For information regarding the structure of a *SUC Plan*, refer to the https://github.com/rancher/system-upgrade-controller?tab=readme-ov-file#example-plans[upstream] documentation.

 `OS upgrade` SUC Plans are shipped in the following ways:

@@ -109,7 +127,7 @@ For a full overview of what happens during the _upgrade procedure_, refer to the
 [#os-update-overview]
 ==== Overview

-This section aims to describe the full workflow that the *_OS upgrade process_* goes throught from start to finish.
+This section aims to describe the full workflow that the *_OS upgrade process_* goes through from start to finish.

 .OS upgrade workflow
 image::day2_os_pkg_update_diagram.png[]

@@ -130,9 +148,9 @@ OS upgrade steps:

 . `Fleet` then proceeds to deploy the `Kubernetes resources` from this *Bundle* to all the targeted `downstream clusters`. In the context of `OS upgrades`, Fleet deploys the following resources from the *Bundle*:

-.. *Agent SUC Plan* - instructs *SUC* on how to do an OS upgrade on cluster *_agent_* nodes. It is *not* interpreted if the cluster consists only from _control-plane_ nodes. It executes after all control-plane *SUC* plans have completed successfully.
+.. *Worker SUC Plan* - instructs *SUC* on how to do an OS upgrade on cluster *_worker_* nodes. It is *not* interpreted if the cluster consists only of _control-plane_ nodes. It executes after all control-plane *SUC* plans have completed successfully.

-.. *Control-plane SUC Plan* - instructs *SUC* on how to do an OS upgrade on cluster *_control-plane_* nodes.
+.. *Control Plane SUC Plan* - instructs *SUC* on how to do an OS upgrade on cluster *_control-plane_* nodes.

 .. *Script Secret* - referenced in each *SUC Plan*; ships an `upgrade.sh` script responsible for creating the `os-pkg-update.service/os-migration.service` which will do the actual OS upgrade.

@@ -155,15 +173,15 @@ The above resources will be deployed in the `cattle-system` namespace of each do

 ... For `os-pkg-update.service`:

-.... Update all package version on the node OS, by running `transactional-update cleanup up`
+.... Update all package versions on the node OS by running `transactional-update cleanup up`

 .... After a successful `transactional-update`, schedule a system *reboot* so that the package version updates can take effect

 ... For `os-migration.service`:

-.... Update all package version on the node OS, by running `transactional-update cleanup up`. This is done to ensure that no old package versions causes an OS migration error.
+.... Update all package versions on the node OS by running `transactional-update cleanup up`. This is done to ensure that no old package versions cause an OS migration error.

-.... Proceed to migrate the OS to the desired values. Migration is done by utilising the `zypper migration` command.
+.... Proceed to migrate the OS to the desired values. Migration is done by utilizing the `zypper migration` command.

 .... Schedule a system *reboot* so that the migration can take effect

@@ -192,37 +210,32 @@ Once deployed, to monitor the OS upgrade process of the nodes of your targeted c
 [#os-upgrade-suc-plan-deployment-git-repo-rancher]
 ===== GitRepo creation - Rancher UI

-. In the upper left corner, *☰ -> Continuous Delivery*
+To create a `GitRepo` resource through the Rancher UI, follow the official link:https://ranchermanager.docs.rancher.com/integrations-in-rancher/fleet/overview#accessing-fleet-in-the-rancher-ui[documentation].

-. Go to *Git Repos -> Add Repository*
+The Edge team maintains a ready-to-use link:https://github.com/suse-edge/fleet-examples/tree/release-3.1.1/fleets/day2/system-upgrade-controller-plans/os-upgrade[fleet] that users can add as a `path` in their GitRepo resource.

-If you use the `suse-edge/fleet-examples` repository:
-
-. *Repository URL* - `https://github.com/suse-edge/fleet-examples.git`
-
-. *Watch -> Revision* - choose a link:https://github.com/suse-edge/fleet-examples/releases[release] tag for the `suse-edge/fleet-examples` repository that you wish to use
-
-. Under *Paths* add the path to the OS upgrade Fleets that you wish to use - `fleets/day2/system-upgrade-controller-plans/os-upgrade`
+[IMPORTANT]
+====
+Always use this fleet from a valid Edge link:https://github.com/suse-edge/fleet-examples/releases[release] tag.
+====

-. Select *Next* to move to the *target* configuration section. *Only select clusters whose node's packages you wish to upgrade*
+For use-cases where no custom tolerations need to be added to the `SUC plans` that the fleet ships, users can directly reference the `os-upgrade` fleet from the `suse-edge/fleet-examples` repository.

-. *Create*
+In cases where custom tolerations are needed, users should reference the `os-upgrade` fleet from a separate repository, allowing them to add the tolerations to the SUC plans as required.

-Alternatively, if you decide to use your own repository to host these files, you would need to provide your repo data above.
+An example of how a `GitRepo` can be configured to use the fleet from the `suse-edge/fleet-examples` repository can be viewed link:https://github.com/suse-edge/fleet-examples/blob/release-3.1.1/gitrepos/day2/os-upgrade-gitrepo.yaml[here].

 [#os-upgrade-suc-plan-deployment-git-repo-manual]
 ===== GitRepo creation - manual

-. Choose the desired Edge link:https://github.com/suse-edge/fleet-examples/releases[release] tag that you wish to apply the OS *SUC update Plans* from (referenced below as `$\{REVISION\}`).
-
 . Pull the *GitRepo* resource:
 +
 [,bash]
 ----
-curl -o os-update-gitrepo.yaml https://raw.githubusercontent.com/suse-edge/fleet-examples/${REVISION}/gitrepos/day2/os-update-gitrepo.yaml
+curl -o os-upgrade-gitrepo.yaml https://raw.githubusercontent.com/suse-edge/fleet-examples/refs/tags/release-3.1.1/gitrepos/day2/os-upgrade-gitrepo.yaml
 ----

-. Edit the *GitRepo* configuration, under `spec.targets` specify your desired target list. By default the `GitRepo` resources from the `suse-edge/fleet-examples` are *NOT* mapped to any down stream clusters.
+. Edit the *GitRepo* configuration and, under `spec.targets`, specify your desired target list. By default, the `GitRepo` resources from `suse-edge/fleet-examples` are *NOT* mapped to any downstream clusters.

 ** To match all clusters change the default `GitRepo` *target* to:
 +
@@ -240,7 +253,7 @@ spec:
 +
 [,bash]
 ----
-kubectl apply -f os-update-gitrepo.yaml
+kubectl apply -f os-upgrade-gitrepo.yaml
 ----

 . View the created *GitRepo* resource under the `fleet-default` namespace:
 +
@@ -251,7 +264,7 @@ kubectl get gitrepo os-upgrade -n fleet-default

 # Example output
 NAME        REPO                                              COMMIT         BUNDLEDEPLOYMENTS-READY   STATUS
-os-upgrade  https://github.com/suse-edge/fleet-examples.git   release-3.1.0  0/0
+os-upgrade  https://github.com/suse-edge/fleet-examples.git   release-3.1.1  0/0
 ----

 [#os-upgrade-suc-plan-deployment-bundle]
 ==== SUC Plan deployment - Bundle resource

@@ -268,6 +281,15 @@ Once deployed, to monitor the OS upgrade process of the nodes of your targeted c
 [#os-upgrade-suc-plan-deployment-bundle-rancher]
 ===== Bundle creation - Rancher UI

+The Edge team maintains a ready-to-use link:https://github.com/suse-edge/fleet-examples/blob/release-3.1.1/bundles/day2/system-upgrade-controller-plans/os-upgrade/os-upgrade-bundle.yaml[bundle] that can be used in the steps below.
+
+[IMPORTANT]
+====
+Always use this bundle from a valid Edge link:https://github.com/suse-edge/fleet-examples/releases[release] tag.
+====
+
+To create a bundle through Rancher's UI:
+
 . In the upper left corner, click *☰ -> Continuous Delivery*

 . Go to *Advanced* > *Bundles*

@@ -275,10 +297,15 @@ Once deployed, to monitor the OS upgrade process of the nodes of your targeted c
 . Select *Create from YAML*

 . From here you can create the Bundle in one of the following ways:
++
+[NOTE]
+====
+There might be use-cases where you would need to include custom tolerations in the `SUC plans` that the bundle ships. Make sure to include those tolerations in the bundle that will be generated by the steps below.
+====

-.. By manually copying the *Bundle* content to the *Create from YAML* page. Content can be retrieved from https://raw.githubusercontent.com/suse-edge/fleet-examples/$\{REVISION\}/bundles/day2/system-upgrade-controller-plans/os-upgrade/os-upgrade-bundle.yaml, where `$\{REVISION\}` is the Edge link:https://github.com/suse-edge/fleet-examples/releases[release] that you are using
+.. By manually copying the link:https://raw.githubusercontent.com/suse-edge/fleet-examples/refs/tags/release-3.1.1/bundles/day2/system-upgrade-controller-plans/os-upgrade/os-upgrade-bundle.yaml[bundle content] from `suse-edge/fleet-examples` to the *Create from YAML* page.

-.. By cloning the link:https://github.com/suse-edge/fleet-examples.git[suse-edge/fleet-examples] repository to the desired link:https://github.com/suse-edge/fleet-examples/releases[release] tag and selecting the *Read from File* option in the *Create from YAML* page. From there, navigate to `bundles/day2/system-upgrade-controller-plans/os-upgrade` directory and select `os-upgrade-bundle.yaml`. This will auto-populate the *Create from YAML* page with the Bundle content.
+.. By cloning the link:https://github.com/suse-edge/fleet-examples.git[suse-edge/fleet-examples] repository from the desired link:https://github.com/suse-edge/fleet-examples/releases[release] tag and selecting the *Read from File* option in the *Create from YAML* page. From there, navigate to the bundle location (`bundles/day2/system-upgrade-controller-plans/os-upgrade`) and select the bundle file. This will auto-populate the *Create from YAML* page with the bundle content.

 . Change the *target* clusters for the `Bundle`:

@@ -298,16 +325,14 @@ spec:
 [#os-upgrade-suc-plan-deployment-bundle-manual]
 ===== Bundle creation - manual

-. Choose the desired Edge link:https://github.com/suse-edge/fleet-examples/releases[release] tag that you wish to apply the OS upgrade *SUC Plans* from (referenced below as `$\{REVISION\}`).
-
 . Pull the *Bundle* resource:
 +
 [,bash]
 ----
-curl -o os-upgrade-bundle.yaml https://raw.githubusercontent.com/suse-edge/fleet-examples/${REVISION}/bundles/day2/system-upgrade-controller-plans/os-upgrade/os-upgrade-bundle.yaml
+curl -o os-upgrade-bundle.yaml https://raw.githubusercontent.com/suse-edge/fleet-examples/refs/tags/release-3.1.1/bundles/day2/system-upgrade-controller-plans/os-upgrade/os-upgrade-bundle.yaml
 ----

-. Edit the `Bundle` *target* configurations, under `spec.targets` provide your desired target list. By default the `Bundle` resources from the `suse-edge/fleet-examples` are *NOT* mapped to any down stream clusters.
+. Edit the `Bundle` *target* configurations and, under `spec.targets`, provide your desired target list. By default, the `Bundle` resources from `suse-edge/fleet-examples` are *NOT* mapped to any downstream clusters.

 ** To match all clusters change the default `Bundle` *target* to:
 +
@@ -343,11 +368,13 @@ To get the OS upgrade resources that you need, first determine the Edge link:htt

 After that, resources can be found at `fleets/day2/system-upgrade-controller-plans/os-upgrade`, where:

-* `plan-control-plane.yaml` - `system-upgrade-controller` Plan resource for *control-plane* nodes
+* `plan-control-plane.yaml` - `system-upgrade-controller` Plan resource for *control-plane* nodes.
+
+* `plan-worker.yaml` - `system-upgrade-controller` Plan resource for *worker* nodes.

-* `plan-agent.yaml` - `system-upgrade-controller` Plan resource for *agent* nodes
+* `secret.yaml` - secret that ships the `upgrade.sh` script.

-* `secret.yaml` - secret that ships a script that creates the `os-pkg-update.service/os-migration.service` link:https://www.freedesktop.org/software/systemd/man/latest/systemd.service.html[systemd.service]
+* `config-map.yaml` - ConfigMap that provides upgrade configurations that are consumed by the `upgrade.sh` script.

 [IMPORTANT]
 ====
diff --git a/asciidoc/day2/downstream-clusters-introduction.adoc b/asciidoc/day2/downstream-clusters-introduction.adoc
index 7179ca2e..2df199b8 100644
--- a/asciidoc/day2/downstream-clusters-introduction.adoc
+++ b/asciidoc/day2/downstream-clusters-introduction.adoc
@@ -23,13 +23,13 @@ This section is meant to be a *starting point* for the `Day 2` operations docume
 [#day2-downstream-components]
 === Components

-Below you can find a description of the default components that should be setup on either your `management cluster` or your `downstream clusters` so that you can successfully perform `Day 2` operations.
+Below you can find a description of the default components that should be set up on either your `management cluster` or your `downstream clusters` so that you can successfully perform `Day 2` operations.

 ==== Rancher

 [NOTE]
 ====
-For use-cases where you want to utilise <> without Rancher, you can skip the Rancher component all together.
+For use-cases where you want to utilize <> without Rancher, you can skip the Rancher component altogether.
 ====

 Responsible for the management of your `downstream clusters`. Should be deployed on your `management cluster`.

@@ -56,7 +56,9 @@ For use-cases, where a third party GitOps tool usage is desired, see:

 . For `Kubernetes distribution upgrades` - <>

-. For `Helm chart upgrades` - retrieve the chart version supported by the desired Edge release from the <> page and populate the chart version and URL in your third party GitOps tool
+. For `EIB deployed Helm chart upgrades` - <>
+
+. For `non-EIB deployed Helm chart upgrades` - retrieve the chart version supported by the desired Edge release from the <> page and populate the chart version and URL in your third-party GitOps tool
 ====

 ==== System Upgrade Controller (SUC)

@@ -83,7 +85,7 @@ Below you can find more information regarding what these resources do and for wh

 A `GitRepo` is a <> resource that represents a Git repository from which `Fleet` can create `Bundles`. Each `Bundle` is created based on configuration paths defined inside of the `GitRepo` resource. For more information, see the https://fleet.rancher.io/gitrepo-add[GitRepo] documentation.

-In terms of `Day 2` operations `GitRepo` resources are normally used to deploy `SUC` or `SUC Plans` on *non air-gapped* environments that utilise a _Fleet GitOps_ approach.
+In terms of `Day 2` operations, `GitRepo` resources are normally used to deploy `SUC` or `SUC Plans` on *non air-gapped* environments that utilize a _Fleet GitOps_ approach.

 Alternatively, `GitRepo` resources can also be used to deploy `SUC` or `SUC Plans` on *air-gapped* environments, *if you mirror your repository setup through a local git server*.

 ==== Bundle

 `Bundles` hold *raw* Kubernetes resources that will be deployed on the targeted cluster. Usually they are created from a `GitRepo` resource, but there are use-cases where they can be deployed manually. For more information refer to the https://fleet.rancher.io/bundle-add[Bundle] documentation.
-In terms of `Day 2` operations `Bundle` resources are normally used to deploy `SUC` or `SUC Plans` on *air-gapped* environments that do not use some form of _local GitOps_ procedure (e.g. a *local git server*).
+In terms of `Day 2` operations, `Bundle` resources are normally used to deploy `SUC` or `SUC Plans` on *air-gapped* environments that do not use some form of _local GitOps_ procedure (e.g. a *local git server*).

 Alternatively, if your use-case does not allow for a _GitOps_ workflow (e.g. using a Git repository), *Bundle* resources could also be used to deploy `SUC` or `SUC Plans` on *non air-gapped* environments.
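+
+As an illustration only, a minimal air-gapped `Bundle` flow could look like the sketch below - pull a bundle on a machine with internet access, transfer it, and apply it on the `management cluster`. The `os-upgrade` bundle and the `release-3.1.1` tag are example values, and `<user>@<airgapped-host>` is a placeholder:
+
+[,bash]
+----
+# On a machine with internet access - pull the desired Bundle resource
+curl -o os-upgrade-bundle.yaml https://raw.githubusercontent.com/suse-edge/fleet-examples/refs/tags/release-3.1.1/bundles/day2/system-upgrade-controller-plans/os-upgrade/os-upgrade-bundle.yaml
+
+# Transfer it to the air-gapped environment
+scp os-upgrade-bundle.yaml <user>@<airgapped-host>:/path
+
+# On the management cluster - after editing spec.targets, deploy the Bundle
+kubectl apply -f os-upgrade-bundle.yaml
+----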