From e1a5547268a536991ead0a88d90e047180344825 Mon Sep 17 00:00:00 2001
From: Ivo Petrov
Date: Thu, 24 Oct 2024 17:41:57 +0300
Subject: [PATCH 1/8] Minor ATIP doc fix (#479)

(cherry picked from commit 7600d1f58e0a4e2d22a67e54012715b0e69c8644)
---
 asciidoc/product/atip-automated-provision.adoc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/asciidoc/product/atip-automated-provision.adoc b/asciidoc/product/atip-automated-provision.adoc
index b8030af2..69a6ce32 100644
--- a/asciidoc/product/atip-automated-provision.adoc
+++ b/asciidoc/product/atip-automated-provision.adoc
@@ -811,7 +811,7 @@ The `Metal3DataTemplate` object specifies the `metaData` for the downstream clus
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: Metal3DataTemplate
metadata:
-  name: multinode-node-cluster-controlplane-template
+  name: single-node-cluster-controlplane-template
  namespace: default
spec:
  clusterName: single-node-cluster

From 3643b3f139457f46f34b158120ab0598af3a976f Mon Sep 17 00:00:00 2001
From: Ivo Petrov
Date: Wed, 13 Nov 2024 09:33:14 +0200
Subject: [PATCH 2/8] "Day 2" related documentation changes and improvements
 for 3.1.1 (#483)

* Documentation improvements
* Naming standartization changes
* Add important notes
* Fix spelling mistakes
* Bump helm day2 doc with correct release tag

(cherry picked from commit bcafd4b32d044b4b17f2e65870d638c14e27b194)
---
 asciidoc/day2/downstream-cluster-helm.adoc | 126 +++++++++++-------
 asciidoc/day2/downstream-cluster-k8s.adoc  | 107 +++++++++------
 asciidoc/day2/downstream-cluster-os.adoc   | 103 ++++++++------
 .../downstream-clusters-introduction.adoc  |  12 +-
 4 files changed, 217 insertions(+), 131 deletions(-)

diff --git a/asciidoc/day2/downstream-cluster-helm.adoc b/asciidoc/day2/downstream-cluster-helm.adoc
index 28a41658..d9fad260 100644
--- a/asciidoc/day2/downstream-cluster-helm.adoc
+++ b/asciidoc/day2/downstream-cluster-helm.adoc
@@ -16,7 +16,11 @@ endif::[]
====
The below sections focus on using `Fleet` functionalities to achieve a Helm chart update.

-Users adopting a third-party GitOps workflow, should take the configurations for their desired helm chart from its `fleet.yaml` located at `fleets/day2/chart-templates/`. *Make sure you are retrieving the chart data from a valid "Day 2" Edge link:https://github.com/suse-edge/fleet-examples/releases[release].*
+For use-cases where a third-party GitOps tool is desired, see:
+
+* For `EIB deployed Helm chart upgrades` - <>.
+
+* For `non-EIB deployed Helm chart upgrades` - retrieve the chart version supported by the desired Edge release from the <> page and populate the chart version and URL in your third-party GitOps tool.
====

=== Components

@@ -31,13 +35,13 @@ Depending on what your environment supports, you can take one of the following o

. Host your chart's Fleet resources on a local Git server that is accessible by your `management cluster`.

-. Use Fleet's CLI to link:https://fleet.rancher.io/bundle-add#convert-a-helm-chart-into-a-bundle[convert a Helm chart into a Bundle] that you can directly use and will not need to host somewhere. Fleet's CLI can be retrieved from their link:https://github.com/rancher/fleet/releases[release] page, for Mac users there is a link:https://formulae.brew.sh/formula/fleet-cli[fleet-cli] Homebrew Formulae.
+. Use Fleet's CLI to link:https://fleet.rancher.io/bundle-add#convert-a-helm-chart-into-a-bundle[convert a Helm chart into a Bundle] that you can directly use and will not need to be hosted somewhere.
Fleet's CLI can be retrieved from their link:https://github.com/rancher/fleet/releases[release] page, for Mac users there is a link:https://formulae.brew.sh/formula/fleet-cli[fleet-cli] Homebrew Formulae. ==== Find the required assets for your Edge release version . Go to the Day 2 link:https://github.com/suse-edge/fleet-examples/releases[release] page and find the Edge 3.X.Y release that you want to upgrade your chart to and click *Assets*. -. From the release's *Assets* section, download the following files, which are required for an air-gapped upgrade of a SUSE supported helm chart: +. From the *"Assets"* section, download the following files: + [cols="1,1"] |====== @@ -45,25 +49,25 @@ Depending on what your environment supports, you can take one of the following o |*Description* |_edge-save-images.sh_ -|This script pulls the images in the `edge-release-images.txt` file and saves them to a '.tar.gz' archive that can then be used in your air-gapped environment. +|Pulls the images specified in the `edge-release-images.txt` file and packages them inside of a '.tar.gz' archive. |_edge-save-oci-artefacts.sh_ -|This script pulls the SUSE OCI chart artefacts in the `edge-release-helm-oci-artefacts.txt` file and creates a '.tar.gz' archive of a directory containing all other chart OCI archives. +|Pulls the OCI chart images related to the specific Edge release and packages them inside of a '.tar.gz' archive. |_edge-load-images.sh_ -|This script loads the images in the '.tar.gz' archive generated by `edge-save-images.sh`, retags them and pushes them to your private registry. +|Loads images from a '.tar.gz' archive, retags and pushes them to a private registry. |_edge-load-oci-artefacts.sh_ -|This script takes a directory containing '.tgz' SUSE OCI charts and loads all OCI charts to your private registry. The directory is retrieved from the '.tar.gz' archive that the `edge-save-oci-artefacts.sh` script has generated. +|Takes a directory containing Edge OCI '.tgz' chart packages and loads them to a private registry. |_edge-release-helm-oci-artefacts.txt_ -|This file contains a list of OCI artefacts for the SUSE Edge release Helm charts. +|Contains a list of OCI chart images related to a specific Edge release. |_edge-release-images.txt_ -|This file contains a list of images needed by the Edge release Helm charts. +|Contains a list of images related to a specific Edge release. |====== -==== Create the SUSE Edge release images archive +==== Create the Edge release images archive _On a machine with internet access:_ @@ -74,23 +78,28 @@ _On a machine with internet access:_ chmod +x edge-save-images.sh ---- -. Use `edge-save-images.sh` script to create a _Docker_ importable '.tar.gz' archive: +. Generate the image archive: + [,bash] ---- ./edge-save-images.sh --source-registry registry.suse.com ---- -. This will create a ready to load `edge-images.tar.gz` (unless you have specified the `-i|--images` option) archive with the needed images. +. This will create a ready to load archive named `edge-images.tar.gz`. ++ +[NOTE] +==== +If the `-i|--images` option is specified, the name of the archive may differ. +==== -. Copy this archive to your *air-gapped* machine +. Copy this archive to your *air-gapped* machine: + [,bash] ---- scp edge-images.tar.gz @:/path ---- -==== Create a SUSE Edge Helm chart OCI images archive +==== Create the Edge OCI chart images archive _On a machine with internet access:_ @@ -101,23 +110,28 @@ _On a machine with internet access:_ chmod +x edge-save-oci-artefacts.sh ---- -. 
Use `edge-save-oci-artefacts.sh` script to create a '.tar.gz' archive of all SUSE Edge Helm chart OCI images: +. Generate the OCI chart image archive: + [,bash] ---- ./edge-save-oci-artefacts.sh --source-registry registry.suse.com ---- -. This will create a `oci-artefacts.tar.gz` archive containing all SUSE Edge Helm chart OCI images +. This will create an archive named `oci-artefacts.tar.gz`. ++ +[NOTE] +==== +If the `-a|--archive` option is specified, the name of the archive may differ. +==== -. Copy this archive to your *air-gapped* machine +. Copy this archive to your *air-gapped* machine: + [,bash] ---- scp oci-artefacts.tar.gz @:/path ---- -==== Load SUSE Edge release images to your air-gapped machine +==== Load Edge release images to your air-gapped machine _On your air-gapped machine:_ @@ -135,14 +149,19 @@ podman login chmod +x edge-load-images.sh ---- -. Use `edge-load-images.sh` to load the images from the *copied* `edge-images.tar.gz` archive, retag them and push them to your private registry: +. Execute the script, passing the previously *copied* `edge-images.tar.gz` archive: + [,bash] ---- ./edge-load-images.sh --source-registry registry.suse.com --registry --images edge-images.tar.gz ---- ++ +[NOTE] +==== +This will load all images from the `edge-images.tar.gz`, retag and push them to the registry specified under the `--registry` option. +==== -==== Load SUSE Edge Helm chart OCI images to your air-gapped machine +==== Load the Edge OCI chart images to your air-gapped machine _On your air-gapped machine:_ @@ -169,7 +188,7 @@ tar -xvf oci-artefacts.tar.gz . This will produce a directory with the naming template `edge-release-oci-tgz-` -. Pass this directory to the `edge-load-oci-artefacts.sh` script to load the SUSE Edge helm chart OCI images to your private registry: +. Pass this directory to the `edge-load-oci-artefacts.sh` script to load the Edge OCI chart images to your private registry: + [NOTE] ==== @@ -189,11 +208,6 @@ For K3s, see link:https://docs.k3s.io/installation/registry-mirror[Embedded Regi === Upgrade procedure -[NOTE] -==== -The below upgrade procedure utilises Rancher's <> funtionality. Users using a third-party GitOps workflow should retrieve the chart versions supported by each Edge release from the <> and populate these versions to their third-party GitOps workflow. -==== - This section focuses on the following Helm upgrade procedure use-cases: . <> @@ -212,6 +226,15 @@ Manually deployed Helm charts cannot be reliably upgraded. We suggest to redeplo For users that want to manage their Helm chart lifecycle through Fleet. +This section covers how to: + +. <>. + +. <>. + +. <>. + +[#day2-helm-upgrade-new-cluster-prepare-fleet] ===== Prepare your Fleet resources . Acquire the Chart's Fleet resources from the Edge link:https://github.com/suse-edge/fleet-examples/releases[release] tag that you wish to use @@ -220,13 +243,13 @@ For users that want to manage their Helm chart lifecycle through Fleet. .. *If you intend to use a GitOps workflow*, copy the chart Fleet directory to the Git repository from where you will do GitOps. -.. *Optionally*, if the Helm chart requires configurations to its *values*, edit the `.helm.values` configuration inside the `fleet.yaml` file of the copied directory +.. *Optionally*, if the Helm chart requires configurations to its *values*, edit the `.helm.values` configuration inside the `fleet.yaml` file of the copied directory. -.. 
*Optionally*, there may be use-cases where you need to add additional resources to your chart's fleet so that it can better fit your environment. For information on how to enhance your Fleet directory, see link:https://fleet.rancher.io/gitrepo-content[Git Repository Contents] +.. *Optionally*, there may be use-cases where you need to add additional resources to your chart's fleet so that it can better fit your environment. For information on how to enhance your Fleet directory, see link:https://fleet.rancher.io/gitrepo-content[Git Repository Contents]. An *example* for the `longhorn` helm chart would look like: -* User Git repository strucutre: +* User Git repository structure: + [,bash] ---- @@ -286,18 +309,20 @@ diff: These are just example values that are used to illustrate custom configurations over the `longhorn` chart. They should *NOT* be treated as deployment guidelines for the `longhorn` chart. ==== +[#day2-helm-upgrade-new-cluster-deploy-fleet] ===== Deploy your Fleet -If the environment supports working with a GitOps workflow, you can deploy your Chart Fleet by either using a link:https://fleet.rancher.io/ref-gitrepo[GitRepo] or link:https://fleet.rancher.io/bundle-add[Bundle]. +If the environment supports working with a GitOps workflow, you can deploy your Chart Fleet by either using a <> or <>. [NOTE] ==== While deploying your Fleet, if you get a `Modified` message, make sure to add a corresponding `comparePatches` entry to the Fleet's `diff` section. For more information, see link:https://fleet.rancher.io/bundle-diffs[Generating Diffs to Ignore Modified GitRepos]. ==== +[#day2-helm-upgrade-new-cluster-deploy-fleet-gitrepo] ====== GitRepo -Fleet's GitRepo resource holds information on how to access your chart's Fleet resources and to which clusters it needs to apply those resources. +Fleet's link:https://fleet.rancher.io/ref-gitrepo[GitRepo] resource holds information on how to access your chart's Fleet resources and to which clusters it needs to apply those resources. The `GitRepo` resource can be deployed through the link:https://ranchermanager.docs.rancher.com/v2.8/integrations-in-rancher/fleet/overview#accessing-fleet-in-the-rancher-ui[Rancher UI], or manually, by link:https://fleet.rancher.io/tut-deployment[deploying] the resource to the `management cluster`. @@ -326,9 +351,10 @@ spec: - clusterSelector: {} ---- +[#day2-helm-upgrade-new-cluster-deploy-fleet-bundle] ====== Bundle -`Bundle` resources hold the raw Kubernetes resources that need to be deployed by Fleet. Normally it is encouraged to use the `GitRepo` approach, but for use-cases where the environment is air-gapped and cannot support a local Git server, `Bundles` can help you in propagating your Helm chart Fleet to your target clusters. +link:https://fleet.rancher.io/bundle-add[Bundle] resources hold the raw Kubernetes resources that need to be deployed by Fleet. Normally it is encouraged to use the `GitRepo` approach, but for use-cases where the environment is air-gapped and cannot support a local Git server, `Bundles` can help you in propagating your Helm chart Fleet to your target clusters. The `Bundle` can be deployed either through the Rancher UI (`Continuous Delivery -> Advanced -> Bundles -> Create from YAML`) or by manually deploying the `Bundle` resource in the correct Fleet namespace. For information about Fleet namespaces, see the upstream link:https://fleet.rancher.io/namespaces#gitrepos-bundles-clusters-clustergroups[documentation]. 
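For the manual path, a minimal sketch, assuming the `Bundle` manifest has been saved locally as `longhorn-bundle.yaml` (as in the example below) and that the target clusters are registered in the default `fleet-default` namespace:

[,bash]
----
# Deploy the Bundle to the Fleet namespace that holds the downstream cluster registrations
kubectl apply -f longhorn-bundle.yaml

# Confirm that Fleet has picked the Bundle up and is rolling it out
kubectl get bundles -n fleet-default
----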
@@ -392,6 +418,7 @@ kubectl apply -f longhorn-bundle.yaml Following these steps will ensure that `Longhorn` is deployed on all of the specified target clusters. +[#day2-helm-upgrade-new-cluster-manage-chart] ===== Managing the deployed Helm chart Once deployed with Fleet, for Helm chart upgrades, see <>. @@ -399,18 +426,18 @@ Once deployed with Fleet, for Helm chart upgrades, see <>. +. Determine the version to which you need to upgrade your chart so that it is compatible with the desired Edge release. Helm chart version per Edge release can be viewed from the <>. -. In your Fleet monitored Git repository, edit the Helm chart's `fleet.yaml` file with the correct chart *version* and *repository* from the <>. +. In your Fleet monitored Git repository, edit the Helm chart's `fleet.yaml` file with the correct chart *version* and *repository* from the <>. -. After commiting and pushing the changes to your repository, this will trigger an upgrade of the desired Helm chart +. After committing and pushing the changes to your repository, this will trigger an upgrade of the desired Helm chart [#day2-helm-upgrade-eib-chart] ==== I would like to upgrade an EIB deployed Helm chart -EIB deploys Helm charts by creating a `HelmChart` resource and utilising the `helm-controller` introduced by the link:https://docs.rke2.io/helm[RKE2]/link:https://docs.k3s.io/helm[K3s] Helm integration feature. +EIB deploys Helm charts by creating a `HelmChart` resource and utilizing the `helm-controller` introduced by the link:https://docs.rke2.io/helm[RKE2]/link:https://docs.k3s.io/helm[K3s] Helm integration feature. -To ensure that an EIB deployed Helm chart is successfully upgraded, users would need to do an upgrade over the `HelmChart` resources created for the Helm chart by EIB. +To ensure that an EIB deployed Helm chart is successfully upgraded, users need to do an upgrade over the `HelmChart` resources created for the Helm chart by EIB. Below you can find information on: @@ -442,7 +469,7 @@ image::day2_helm_chart_upgrade_diagram.png[] . <> detects the deployed resource, parses its data and deploys its resources to the specified target clusters. The most notable resources that are deployed are: -.. `eib-charts-upgrader` - a Job that deployes the `Chart Upgrade Pod`. The `eib-charts-upgrader-script` as well as all `helm chart upgrade data` secrets are mounted inside of the `Chart Upgrade Pod`. +.. `eib-charts-upgrader` - a Job that deploys the `Chart Upgrade Pod`. The `eib-charts-upgrader-script` as well as all `helm chart upgrade data` secrets are mounted inside of the `Chart Upgrade Pod`. .. `eib-charts-upgrader-script` - a Secret shipping the script that will be used by the `Chart Upgrade Pod` to patch an existing `HelmChart` resource. @@ -461,7 +488,7 @@ image::day2_helm_chart_upgrade_diagram.png[] [#day2-helm-upgrade-eib-chart-upgrade-steps] ===== Upgrade Steps -. Clone the link:https://github.com/suse-edge/fleet-examples[suse-edge/fleet-examples] repository from the Edge link:https://github.com/suse-edge/fleet-examples/releases[relase tag] that you wish to use. +. Clone the link:https://github.com/suse-edge/fleet-examples[suse-edge/fleet-examples] repository from the Edge link:https://github.com/suse-edge/fleet-examples/releases[release tag] that you wish to use. . Create a directory in which you will store the pulled Helm chart archive(s). + @@ -481,7 +508,7 @@ helm pull [chart URL | repo/chartname] # helm pull [chart URL | repo/chartname] --version 0.0.0 ---- -. 
From the desired link:https://github.com/suse-edge/fleet-examples/releases[relase tag] download the `generate-chart-upgrade-data.sh` script +. From the desired link:https://github.com/suse-edge/fleet-examples/releases[release tag] download the `generate-chart-upgrade-data.sh` script. . Execute the `generate-chart-upgrade-data.sh` script: + @@ -511,7 +538,7 @@ The script will go through the following logic: ... The `base64` encoded Helm chart archive that will be used to replace the `HelmChart's` currently running configuration. -.. Each `Kubernetes Secret YAML` resource will be transferted to the `base/secrets` directory inside of the path to the `eib-charts-upgrader` Fleet that was given under `--fleet-path`. +.. Each `Kubernetes Secret YAML` resource will be transferred to the `base/secrets` directory inside of the path to the `eib-charts-upgrader` Fleet that was given under `--fleet-path`. .. Furthermore the `generate-chart-upgrade-data.sh` script ensures that the secrets that it moved will be picked up and used in the Helm chart upgrade logic. It does that by: @@ -567,11 +594,11 @@ For information on how to map target clusters, see the upstream link:https://fle fleet apply --compress --targets-file=targets.yaml -n fleet-default -o - eib-charts-upgrade > bundle.yaml ---- + -This will create a Bundle (`bundle.yaml`) that will hold all the templated resoruce from the `eib-charts-upgrader` Fleet. +This will create a Bundle (`bundle.yaml`) that will hold all the templated resource from the `eib-charts-upgrader` Fleet. + For more information regarding the `fleet apply` command, see link:https://fleet.rancher.io/cli/fleet-cli/fleet_apply[fleet apply]. + -For more information regaring converting Fleets to Bundles, see link:https://fleet.rancher.io/bundle-add#convert-a-helm-chart-into-a-bundle[Convert a Helm Chart into a Bundle]. +For more information regarding converting Fleets to Bundles, see link:https://fleet.rancher.io/bundle-add#convert-a-helm-chart-into-a-bundle[Convert a Helm Chart into a Bundle]. .. Deploy the `Bundle`. This can be done in one of two ways: @@ -583,6 +610,13 @@ Executing these steps will result in a successfully deployed `GitRepo/Bundle` re For information on how to track the upgrade process, you can refer to the <> section of this documentation. +[IMPORTANT] +==== +Once the chart upgrade has been successfully verified, remove the `Bundle/GitRepo` resource. + +This will remove the no longer necessary upgrade resources from your downstream cluster, ensuring that no future version clashes might occur. +==== + [#day2-helm-upgrade-eib-chart-example] ===== Example @@ -627,11 +661,11 @@ image::day2_helm_chart_upgrade_example_1.png[] Follow the <>: -. Clone the `suse-edge/fleet-example` repository from the `release-3.1.0` tag. +. Clone the `suse-edge/fleet-example` repository from the `release-3.1.1` tag. + [,bash] ---- -git clone -b release-3.1.0 https://github.com/suse-edge/fleet-examples.git +git clone -b release-3.1.1 https://github.com/suse-edge/fleet-examples.git ---- . Create a directory where the `Longhorn` upgrade archive will be stored. @@ -655,7 +689,7 @@ helm pull rancher-charts/longhorn-crd --version 104.2.0+up1.7.1 helm pull rancher-charts/longhorn --version 104.2.0+up1.7.1 ---- -. Outside of the `archives` directory, download the `generate-chart-upgrade-data.sh` script from the `release-3.1.0` release tag. +. Outside of the `archives` directory, download the `generate-chart-upgrade-data.sh` script from the `release-3.1.1` release tag. . 
Directory setup should look similar to: + diff --git a/asciidoc/day2/downstream-cluster-k8s.adoc b/asciidoc/day2/downstream-cluster-k8s.adoc index 5cde0b3b..c323fbb7 100644 --- a/asciidoc/day2/downstream-cluster-k8s.adoc +++ b/asciidoc/day2/downstream-cluster-k8s.adoc @@ -68,7 +68,7 @@ An example of defining custom tolerations for the RKE2 *control-plane* SUC Plan, apiVersion: upgrade.cattle.io/v1 kind: Plan metadata: - name: rke2-plan-control-plane + name: rke2-upgrade-control-plane spec: ... tolerations: @@ -99,6 +99,24 @@ spec: This section assumes you will be deploying *SUC Plans* using <>. If you intend to deploy the *SUC Plan* using a different approach, refer to <>. ==== +[IMPORTANT] +==== +For environments previously upgraded using this procedure, users should ensure that *one* of the following steps is completed: + +* `Remove any previously deployed SUC Plans related to older Edge release versions from the downstream cluster` - can be done by removing the desired _downstream_ cluster from the existing `GitRepo/Bundle` target configuration, or removing the `GitRepo/Bundle` resource altogether. + +* `Reuse the existing GitRepo/Bundle resource` - can be done by pointing the resource's revision to a new tag that holds the correct fleets for the desired `suse-edge/fleet-examples` link:https://github.com/suse-edge/fleet-examples/releases[release]. + +This is done in order to avoid clashes between `SUC Plans` for older Edge release versions. + +If users attempt to upgrade, while there are existing `SUC Plans` on the _downstream_ cluster, they will see the following fleet error: + +[,bash] +---- +Not installed: Unable to continue with install: Plan in namespace exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error.. +---- +==== + The `Kubernetes version upgrade procedure` revolves around deploying *SUC Plans* to downstream clusters. These plans hold information that instructs the *SUC* on which nodes to create Pods which run the `rke2/k3s-upgrade` images. For information regarding the structure of a *SUC Plan*, refer to the https://github.com/rancher/system-upgrade-controller?tab=readme-ov-file#example-plans[upstream] documentation. `Kubernetes upgrade` Plans are shipped in the following ways: @@ -114,7 +132,7 @@ For a full overview of what happens during the _update procedure_, refer to the [#k8s-version-upgrade-overview] ==== Overview -This section aims to describe the full workflow that the *_Kubernetes version upgrade process_* goes throught from start to finish. +This section aims to describe the full workflow that the *_Kubernetes version upgrade process_* goes through from start to finish. .Kubernetes version upgrade workflow image::day2_k8s_version_upgrade_diagram.png[] @@ -127,17 +145,17 @@ Kubernetes version upgrade steps: .. For *GitRepo/Bundle* configuration options, refer to <> or <>. -. The user deploys the configured *GitRepo/Bundle* resource to the `fleet-default` namespace in his `management cluster`. This is done either *manually* or thorugh the *Rancher UI* if such is available. +. The user deploys the configured *GitRepo/Bundle* resource to the `fleet-default` namespace in his `management cluster`. This is done either *manually* or through the *Rancher UI* if such is available. . <> constantly monitors the `fleet-default` namespace and immediately detects the newly deployed *GitRepo/Bundle* resource. 
For more information regarding what namespaces does Fleet monitor, refer to Fleet's https://fleet.rancher.io/namespaces[Namespaces] documentation. . If the user has deployed a *GitRepo* resource, `Fleet` will reconcile the *GitRepo* and based on its *paths* and *fleet.yaml* configurations it will deploy a *Bundle* resource in the `fleet-default` namespace. For more information, refer to Fleet's https://fleet.rancher.io/gitrepo-content[GitRepo Contents] documentation. -. `Fleet` then proceeds to deploy the `Kubernetes resources` from this *Bundle* to all the targeted `downstream clusters`. In the context of the `Kubernetes version upgrade`, Fleet deploys the following resources from the *Bundle* (depending on the Kubernetes distrubution): +. `Fleet` then proceeds to deploy the `Kubernetes resources` from this *Bundle* to all the targeted `downstream clusters`. In the context of the `Kubernetes version upgrade`, Fleet deploys the following resources from the *Bundle* (depending on the Kubernetes distribution): -.. `rke2-plan-agent`/`k3s-plan-agent` - instructs *SUC* on how to do a Kubernetes upgrade on cluster *_agent_* nodes. Will *not* be interpreted if the cluster consists only from _control-plane_ nodes. +.. `rke2-upgrade-worker`/`k3s-upgrade-worker` - instructs *SUC* on how to do a Kubernetes upgrade on cluster *_worker_* nodes. Will *not* be interpreted if the cluster consists only from _control-plane_ nodes. -.. `rke2-plan-control-plane`/`k3s-plan-control-plane` - instructs *SUC* on how to do a Kubernetes upgrade on cluster *_control-plane_* nodes. +.. `rke2-upgrade-control-plane`/`k3s-upgrade-control-plane` - instructs *SUC* on how to do a Kubernetes upgrade on cluster *_control-plane_* nodes. + [NOTE] ==== @@ -154,7 +172,7 @@ The above *SUC Plans* will be deployed in the `cattle-system` namespace of each .. Kill the `rke2/k3s` process that is running on the node OS - this instructs the *supervisor* to automatically restart the `rke2/k3s` process using the new version. -.. https://kubernetes.io/docs/reference/kubectl/generated/kubectl_uncordon/[Uncordon] cluster node - after the successful Kubernetes distribution upgrade, the node is again marked as `scheduable`. +.. https://kubernetes.io/docs/reference/kubectl/generated/kubectl_uncordon/[Uncordon] cluster node - after the successful Kubernetes distribution upgrade, the node is again marked as `schedulable`. + [NOTE] ==== @@ -166,6 +184,8 @@ With the above steps executed, the Kubernetes version of each cluster node shoul [#k8s-upgrade-suc-plan-deployment] === Kubernetes version upgrade - SUC Plan deployment +This section describes how to orchestrate the deployment of *SUC Plans* related Kubernetes upgrades using Fleet's *GitRepo* and *Bundle* resources. + [#k8s-upgrade-suc-plan-deployment-git-repo] ==== SUC Plan deployment - GitRepo resource @@ -180,50 +200,45 @@ Once deployed, to monitor the Kubernetes upgrade process of the nodes of your ta [#k8s-upgrade-suc-plan-deployment-git-repo-rancher] ===== GitRepo creation - Rancher UI -. In the upper left corner, *☰ -> Continuous Delivery* +To create a `GitRepo` resource through the Rancher UI, follow their official link:https://ranchermanager.docs.rancher.com/integrations-in-rancher/fleet/overview#accessing-fleet-in-the-rancher-ui[documentation]. -. 
Go to *Git Repos -> Add Repository*
+To create a `GitRepo` resource through the Rancher UI, follow their official link:https://ranchermanager.docs.rancher.com/integrations-in-rancher/fleet/overview#accessing-fleet-in-the-rancher-ui[documentation].

-If you use the `suse-edge/fleet-examples` repository:
-
-. *Repository URL* - `https://github.com/suse-edge/fleet-examples.git`
-
-. *Watch -> Revision* - choose a link:https://github.com/suse-edge/fleet-examples/releases[release] tag for the `suse-edge/fleet-examples` repository that you wish to use
-
-. Under *Paths* add the path to the Kubernetes distribution upgrade Fleets as seen in the release tag:
+[IMPORTANT]
+====
+Always use these fleets from a valid Edge link:https://github.com/suse-edge/fleet-examples/releases[release] tag.
+====

-.. For RKE2 - `fleets/day2/system-upgrade-controller-plans/rke2-upgrade`
+For use-cases where no custom tolerations need to be included in the `SUC plans` that these fleets ship, users can directly reference the fleets from the `suse-edge/fleet-examples` repository.

-.. For K3s - `fleets/day2/system-upgrade-controller-plans/k3s-upgrade`
+In cases where custom tolerations are needed, users should reference the fleets from a separate repository, allowing them to add the tolerations to the SUC plans as required.

-. Select *Next* to move to the *target* configuration section. *Only select clusters for which you wish to upgrade the desired Kubernetes distribution*
+Configuration examples for a `GitRepo` resource using the fleets from the `suse-edge/fleet-examples` repository:

-. *Create*
+* link:https://github.com/suse-edge/fleet-examples/blob/release-3.1.1/gitrepos/day2/rke2-upgrade-gitrepo.yaml[RKE2]

-Alternatively, if you decide to use your own repository to host these files, you would need to provide your repo data above.
+* link:https://github.com/suse-edge/fleet-examples/blob/release-3.1.1/gitrepos/day2/k3s-upgrade-gitrepo.yaml[K3s]

[#k8s-upgrade-suc-plan-deployment-git-repo-manual]
===== GitRepo creation - manual

-. Choose the desired Edge link:https://github.com/suse-edge/fleet-examples/releases[release] tag that you wish to apply the Kubernetes *SUC upgrade Plans* from (referenced below as `$\{REVISION\}`).
-
. Pull the *GitRepo* resource:

** For *RKE2* clusters:
+
[,bash]
----
-curl -o rke2-upgrade-gitrepo.yaml https://raw.githubusercontent.com/suse-edge/fleet-examples/${REVISION}/gitrepos/day2/rke2-upgrade-gitrepo.yaml
+curl -o rke2-upgrade-gitrepo.yaml https://raw.githubusercontent.com/suse-edge/fleet-examples/refs/tags/release-3.1.1/gitrepos/day2/rke2-upgrade-gitrepo.yaml
----

** For *K3s* clusters:
+
[,bash]
----
-curl -o k3s-upgrade-gitrepo.yaml https://raw.githubusercontent.com/suse-edge/fleet-examples/${REVISION}/gitrepos/day2/k3s-upgrade-gitrepo.yaml
+curl -o k3s-upgrade-gitrepo.yaml https://raw.githubusercontent.com/suse-edge/fleet-examples/refs/tags/release-3.1.1/gitrepos/day2/k3s-upgrade-gitrepo.yaml
----

. Edit the *GitRepo* configuration and, under `spec.targets`, specify your desired target list. By default, the `GitRepo` resources from `suse-edge/fleet-examples` are *NOT* mapped to any downstream clusters. 
** To match all clusters change the default `GitRepo` *target* to: + @@ -260,8 +275,8 @@ kubectl get gitrepo k3s-upgrade -n fleet-default # Example output NAME REPO COMMIT BUNDLEDEPLOYMENTS-READY STATUS -k3s-upgrade https://github.com/suse-edge/fleet-examples.git release-3.0.1 0/0 -rke2-upgrade https://github.com/suse-edge/fleet-examples.git release-3.0.1 0/0 +k3s-upgrade https://github.com/suse-edge/fleet-examples.git release-3.1.1 0/0 +rke2-upgrade https://github.com/suse-edge/fleet-examples.git release-3.1.1 0/0 ---- [#k8s-upgrade-suc-plan-deployment-bundle] @@ -278,6 +293,15 @@ Once deployed, to monitor the Kubernetes upgrade process of the nodes of your ta [#k8s-upgrade-suc-plan-deployment-bundle-rancher] ===== Bundle creation - Rancher UI +The Edge team maintains ready to use bundles for both link:https://github.com/suse-edge/fleet-examples/blob/release-3.1.1/bundles/day2/system-upgrade-controller-plans/rke2-upgrade/plan-bundle.yaml[rke2] and link:https://github.com/suse-edge/fleet-examples/blob/release-3.1.1/bundles/day2/system-upgrade-controller-plans/k3s-upgrade/plan-bundle.yaml[k3s] Kubernetes distributions that can be used in the below steps. + +[IMPORTANT] +==== +Always use this bundle from a valid Edge link:https://github.com/suse-edge/fleet-examples/releases[release] tag. +==== + +To create a bundle through Rancher's UI: + . In the upper left corner, click *☰ -> Continuous Delivery* . Go to *Advanced* > *Bundles* @@ -285,14 +309,15 @@ Once deployed, to monitor the Kubernetes upgrade process of the nodes of your ta . Select *Create from YAML* . From here you can create the Bundle in one of the following ways: ++ +[NOTE] +==== +There might be use-cases where you would need to include custom tolerations to the `SUC plans` that the bundle ships. Make sure to include those tolerations in the bundle that will be generated by the below steps. +==== -.. By manually copying the *Bundle* content to the *Create from YAML* page. Content can be retrieved: - -... For RKE2 - https://raw.githubusercontent.com/suse-edge/fleet-examples/$\{REVISION\}/bundles/day2/system-upgrade-controller-plans/rke2-upgrade/plan-bundle.yaml - -... For K3s - https://raw.githubusercontent.com/suse-edge/fleet-examples/$\{REVISION\}/bundles/day2/system-upgrade-controller-plans/k3s-upgrade/plan-bundle.yaml +.. By manually copying the bundle content for link:https://raw.githubusercontent.com/suse-edge/fleet-examples/refs/tags/release-3.1.1/bundles/day2/system-upgrade-controller-plans/rke2-upgrade/plan-bundle.yaml[RKE2] or link:https://raw.githubusercontent.com/suse-edge/fleet-examples/refs/tags/release-3.1.1/bundles/day2/system-upgrade-controller-plans/k3s-upgrade/plan-bundle.yaml[K3s] from `suse-edge/fleet-examples` to the *Create from YAML* page. -.. By cloning the link:https://github.com/suse-edge/fleet-examples.git[suse-edge/fleet-examples] repository to the desired link:https://github.com/suse-edge/fleet-examples/releases[release] tag and selecting the *Read from File* option in the *Create from YAML* page. From there, navigate to the bundle that you need (`/bundles/day2/system-upgrade-controller-plans/rke2-upgrade/plan-bundle.yaml` for RKE2 and `/bundles/day2/system-upgrade-controller-plans/k3s-upgrade/plan-bundle.yaml` for K3s). This will auto-populate the *Create from YAML* page with the Bundle content +.. 
By cloning the link:https://github.com/suse-edge/fleet-examples.git[suse-edge/fleet-examples] repository from the desired link:https://github.com/suse-edge/fleet-examples/releases[release] tag and selecting the *Read from File* option in the *Create from YAML* page. From there, navigate to the bundle that you need (`bundles/day2/system-upgrade-controller-plans/rke2-upgrade/plan-bundle.yaml` for RKE2 and `bundles/day2/system-upgrade-controller-plans/k3s-upgrade/plan-bundle.yaml` for K3s). This will auto-populate the *Create from YAML* page with the bundle content. . Change the *target* clusters for the `Bundle`: @@ -312,25 +337,23 @@ spec: [#k8s-upgrade-suc-plan-deployment-bundle-manual] ===== Bundle creation - manual -. Choose the desired Edge link:https://github.com/suse-edge/fleet-examples/releases[release] tag that you wish to apply the Kubernetes *SUC upgrade Plans* from (referenced below as `$\{REVISION\}`). - . Pull the *Bundle* resources: ** For *RKE2* clusters: + [,bash] ---- -curl -o rke2-plan-bundle.yaml https://raw.githubusercontent.com/suse-edge/fleet-examples/${REVISION}/bundles/day2/system-upgrade-controller-plans/rke2-upgrade/plan-bundle.yaml +curl -o rke2-plan-bundle.yaml https://raw.githubusercontent.com/suse-edge/fleet-examples/refs/tags/release-3.1.1/bundles/day2/system-upgrade-controller-plans/rke2-upgrade/plan-bundle.yaml ---- ** For *K3s* clusters: + [,bash] ---- -curl -o k3s-plan-bundle.yaml https://raw.githubusercontent.com/suse-edge/fleet-examples/${REVISION}/bundles/day2/system-upgrade-controller-plans/k3s-upgrade/plan-bundle.yaml +curl -o k3s-plan-bundle.yaml https://raw.githubusercontent.com/suse-edge/fleet-examples/refs/tags/release-3.1.1/bundles/day2/system-upgrade-controller-plans/k3s-upgrade/plan-bundle.yaml ---- -. Edit the `Bundle` *target* configurations, under `spec.targets` provide your desired target list. By default the `Bundle` resources from the `suse-edge/fleet-examples` are *NOT* mapped to any down stream clusters. +. Edit the `Bundle` *target* configurations, under `spec.targets` provide your desired target list. By default the `Bundle` resources from the `suse-edge/fleet-examples` are *NOT* mapped to any downstream clusters. ** To match all clusters change the default `Bundle` *target* to: + @@ -376,7 +399,7 @@ rke2-upgrade 0/0 There might be use-cases where users would like to incorporate the Kubernetes upgrade resources to their own third-party GitOps workflow (e.g. `Flux`). -To get the upgrade resources that you need, first determine the he Edge link:https://github.com/suse-edge/fleet-examples/releases[release] tag of the link:https://github.com/suse-edge/fleet-examples.git[suse-edge/fleet-examples] repository that you would like to use. +To get the upgrade resources that you need, first determine the Edge link:https://github.com/suse-edge/fleet-examples/releases[release] tag of the link:https://github.com/suse-edge/fleet-examples.git[suse-edge/fleet-examples] repository that you would like to use. 
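A minimal sketch of pinning such a tag before wiring the files into a third-party GitOps tool (the `release-3.1.1` tag is an illustrative choice):

[,bash]
----
# Check out the chosen release tag so that the SUC Plans stay version-locked
git clone --depth 1 --branch release-3.1.1 https://github.com/suse-edge/fleet-examples.git

# The plan files listed below can then be synced from this checkout
ls fleet-examples/fleets/day2/system-upgrade-controller-plans/rke2-upgrade/
----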
After that, the resources can be found at: @@ -384,13 +407,13 @@ After that, the resources can be found at: ** For `control-plane` nodes - `fleets/day2/system-upgrade-controller-plans/rke2-upgrade/plan-control-plane.yaml` -** For `agent` nodes - `fleets/day2/system-upgrade-controller-plans/rke2-upgrade/plan-agent.yaml` +** For `worker` nodes - `fleets/day2/system-upgrade-controller-plans/rke2-upgrade/plan-worker.yaml` * For a K3s cluster upgrade: ** For `control-plane` nodes - `fleets/day2/system-upgrade-controller-plans/k3s-upgrade/plan-control-plane.yaml` -** For `agent` nodes - `fleets/day2/system-upgrade-controller-plans/k3s-upgrade/plan-agent.yaml` +** For `worker` nodes - `fleets/day2/system-upgrade-controller-plans/k3s-upgrade/plan-worker.yaml` [IMPORTANT] ==== diff --git a/asciidoc/day2/downstream-cluster-os.adoc b/asciidoc/day2/downstream-cluster-os.adoc index d96b6c2b..214e9ac7 100644 --- a/asciidoc/day2/downstream-cluster-os.adoc +++ b/asciidoc/day2/downstream-cluster-os.adoc @@ -26,9 +26,9 @@ A different link:https://www.freedesktop.org/software/systemd/man/latest/systemd ** First a link:https://en.opensuse.org/SDB:Zypper_usage#Updating_packages[normal package upgrade]. Done in order to ensure that all packages are with the latest version before the migration. Mitigating any failures related to old package version. -** After that it proceeds with the OS migration process by utilising the `zypper migration` command. +** After that it proceeds with the OS migration process by utilizing the `zypper migration` command. -Shipped through a *SUC plan*, which should be located on each *downstream cluster* that is in need of a OS upgrade. +Shipped through a *SUC plan*, which should be located on each *downstream cluster* that is in need of an OS upgrade. === Requirements @@ -59,7 +59,7 @@ An example of defining custom tolerations for the *control-plane* SUC Plan, woul apiVersion: upgrade.cattle.io/v1 kind: Plan metadata: - name: cp-os-upgrade-edge-3XX + name: os-upgrade-control-plane spec: ... tolerations: @@ -85,7 +85,7 @@ spec: _Air-gapped:_ -. *Mirror SUSE RPM repositories* - OS RPM repositories should be locally mirrored so that `os-pkg-update.service/os-migration.service` can have access to them. This can be achieved using link:https://github.com/SUSE/rmt[RMT]. +. *Mirror SUSE RPM repositories* - OS RPM repositories should be locally mirrored so that `os-pkg-update.service/os-migration.service` can have access to them. This can be achieved by using either link:https://documentation.suse.com/sles/15-SP6/html/SLES-all/book-rmt.html[RMT] or link:https://documentation.suse.com/suma/5.0/en/suse-manager/index.html[SUMA]. === Update procedure @@ -94,7 +94,25 @@ _Air-gapped:_ This section assumes you will be deploying the `OS upgrade` *SUC Plan* using <>. If you intend to deploy the *SUC Plan* using a different approach, refer to <>. ==== -The `OS upgrade procedure` revolves around deploying *SUC Plans* to downstream clusters. These plans then hold information about how and on which nodes to deploy the `os-pkg-update.service/os-migration.service`. For information regarding the structure of a *SUC Plan*, refer to the https://github.com/rancher/system-upgrade-controller?tab=readme-ov-file#example-plans[upstream] documentation. 
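Before deploying new plans, it is worth checking a downstream cluster for SUC Plans left over from a previous upgrade; the note below explains why these must be cleared first. A minimal sketch, assuming `kubectl` access to both clusters (the `os-upgrade` GitRepo name is illustrative):

[,bash]
----
# On the downstream cluster: list any SUC Plans shipped by an earlier Edge release
kubectl get plans -n cattle-system

# On the management cluster: remove the stale resource that shipped them
kubectl delete gitrepo os-upgrade -n fleet-default
----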
+[IMPORTANT] +==== +For environments previously upgraded using this procedure, users should ensure that *one* of the following steps is completed: + +* `Remove any previously deployed SUC Plans related to older Edge release versions from the downstream cluster` - can be done by removing the desired _downstream_ cluster from the existing `GitRepo/Bundle` target configuration, or removing the `GitRepo/Bundle` resource altogether. + +* `Reuse the existing GitRepo/Bundle resource` - can be done by pointing the resource's revision to a new tag that holds the correct fleets for the desired `suse-edge/fleet-examples` link:https://github.com/suse-edge/fleet-examples/releases[release]. + +This is done in order to avoid clashes between `SUC Plans` for older Edge release versions. + +If users attempt to upgrade, while there are existing `SUC Plans` on the _downstream_ cluster, they will see the following fleet error: + +[,bash] +---- +Not installed: Unable to continue with install: Plan in namespace exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error.. +---- +==== + +The `OS upgrade procedure` revolves around deploying *SUC Plans* to downstream clusters. These plans hold information about how and on which nodes to deploy the `os-pkg-update.service/os-migration.service`. For information regarding the structure of a *SUC Plan*, refer to the https://github.com/rancher/system-upgrade-controller?tab=readme-ov-file#example-plans[upstream] documentation. `OS upgrade` SUC Plans are shipped in the following ways: @@ -109,7 +127,7 @@ For a full overview of what happens during the _upgrade procedure_, refer to the [#os-update-overview] ==== Overview -This section aims to describe the full workflow that the *_OS upgrade process_* goes throught from start to finish. +This section aims to describe the full workflow that the *_OS upgrade process_* goes through from start to finish. .OS upgrade workflow image::day2_os_pkg_update_diagram.png[] @@ -130,9 +148,9 @@ OS upgrade steps: . `Fleet` then proceeds to deploy the `Kubernetes resources` from this *Bundle* to all the targeted `downstream clusters`. In the context of `OS upgrades`, Fleet deploys the following resources from the *Bundle*: -.. *Agent SUC Plan* - instructs *SUC* on how to do an OS upgrade on cluster *_agent_* nodes. It is *not* interpreted if the cluster consists only from _control-plane_ nodes. It executes after all control-plane *SUC* plans have completed successfully. +.. *Worker SUC Plan* - instructs *SUC* on how to do an OS upgrade on cluster *_worker_* nodes. It is *not* interpreted if the cluster consists only from _control-plane_ nodes. It executes after all control-plane *SUC* plans have completed successfully. -.. *Control-plane SUC Plan* - instructs *SUC* on how to do an OS upgrade on cluster *_control-plane_* nodes. +.. *Control Plane SUC Plan* - instructs *SUC* on how to do an OS upgrade on cluster *_control-plane_* nodes. .. *Script Secret* - referenced in each *SUC Plan*; ships an `upgrade.sh` script responsible for creating the `os-pkg-update.service/os-migration.service` which will do the actual OS upgrade. @@ -155,15 +173,15 @@ The above resources will be deployed in the `cattle-system` namespace of each do ... For `os-pkg-update.service`: -.... Update all package version on the node OS, by running `transactional-update cleanup up` +.... Update all package versions on the node OS, by running `transactional-update cleanup up` .... 
After a successful `transactional-update`, schedule a system *reboot* so that the package version updates can take effect ... For `os-migration.service`: -.... Update all package version on the node OS, by running `transactional-update cleanup up`. This is done to ensure that no old package versions causes an OS migration error. +.... Update all package versions on the node OS, by running `transactional-update cleanup up`. This is done to ensure that no old package versions cause an OS migration error. -.... Proceed to migrate the OS to the desired values. Migration is done by utilising the `zypper migration` command. +.... Proceed to migrate the OS to the desired values. Migration is done by utilizing the `zypper migration` command. .... Schedule a system *reboot* so that the migration can take effect @@ -192,37 +210,32 @@ Once deployed, to monitor the OS upgrade process of the nodes of your targeted c [#os-upgrade-suc-plan-deployment-git-repo-rancher] ===== GitRepo creation - Rancher UI -. In the upper left corner, *☰ -> Continuous Delivery* +To create a `GitRepo` resource through the Rancher UI, follow their official link:https://ranchermanager.docs.rancher.com/integrations-in-rancher/fleet/overview#accessing-fleet-in-the-rancher-ui[documentation]. -. Go to *Git Repos -> Add Repository* +The Edge team maintains a ready to use link:https://github.com/suse-edge/fleet-examples/tree/release-3.1.1/fleets/day2/system-upgrade-controller-plans/os-upgrade[fleet] that users can add as a `path` for their GitRepo resource. -If you use the `suse-edge/fleet-examples` repository: - -. *Repository URL* - `https://github.com/suse-edge/fleet-examples.git` - -. *Watch -> Revision* - choose a link:https://github.com/suse-edge/fleet-examples/releases[release] tag for the `suse-edge/fleet-examples` repository that you wish to use - -. Under *Paths* add the path to the OS upgrade Fleets that you wish to use - `fleets/day2/system-upgrade-controller-plans/os-upgrade` +[IMPORTANT] +==== +Always use this fleet from a valid Edge link:https://github.com/suse-edge/fleet-examples/releases[release] tag. +==== -. Select *Next* to move to the *target* configuration section. *Only select clusters whose node's packages you wish to upgrade* +For use-cases where no custom tolerations need to be included to the `SUC plans` that the fleet ships, users can directly refer the `os-upgrade` fleet from the `suse-edge/fleet-examples` repository. -. *Create* +In cases where custom tolerations are needed, users should refer the `os-upgrade` fleet from a separate repository, allowing them to add the tolerations to the SUC plans as required. -Alternatively, if you decide to use your own repository to host these files, you would need to provide your repo data above. +An example of how a `GitRepo` can be configured to use the fleet from the `suse-edge/fleet-examples` repository, can be viewed link:https://github.com/suse-edge/fleet-examples/blob/release-3.1.1/gitrepos/day2/os-upgrade-gitrepo.yaml[here]. [#os-upgrade-suc-plan-deployment-git-repo-manual] ===== GitRepo creation - manual -. Choose the desired Edge link:https://github.com/suse-edge/fleet-examples/releases[release] tag that you wish to apply the OS *SUC update Plans* from (referenced below as `$\{REVISION\}`). - . 
Pull the *GitRepo* resource: + [,bash] ---- -curl -o os-update-gitrepo.yaml https://raw.githubusercontent.com/suse-edge/fleet-examples/${REVISION}/gitrepos/day2/os-update-gitrepo.yaml +curl -o os-upgrade-gitrepo.yaml https://raw.githubusercontent.com/suse-edge/fleet-examples/refs/tags/release-3.1.1/gitrepos/day2/os-upgrade-gitrepo.yaml ---- -. Edit the *GitRepo* configuration, under `spec.targets` specify your desired target list. By default the `GitRepo` resources from the `suse-edge/fleet-examples` are *NOT* mapped to any down stream clusters. +. Edit the *GitRepo* configuration, under `spec.targets` specify your desired target list. By default the `GitRepo` resources from the `suse-edge/fleet-examples` are *NOT* mapped to any downstream clusters. ** To match all clusters change the default `GitRepo` *target* to: + @@ -240,7 +253,7 @@ spec: + [,bash] ---- -kubectl apply -f os-update-gitrepo.yaml +kubectl apply -f os-upgrade-gitrepo.yaml ---- . View the created *GitRepo* resource under the `fleet-default` namespace: @@ -251,7 +264,7 @@ kubectl get gitrepo os-upgrade -n fleet-default # Example output NAME REPO COMMIT BUNDLEDEPLOYMENTS-READY STATUS -os-upgrade https://github.com/suse-edge/fleet-examples.git release-3.1.0 0/0 +os-upgrade https://github.com/suse-edge/fleet-examples.git release-3.1.1 0/0 ---- [#os-upgrade-suc-plan-deployment-bundle] @@ -268,6 +281,15 @@ Once deployed, to monitor the OS upgrade process of the nodes of your targeted c [#os-upgrade-suc-plan-deployment-bundle-rancher] ===== Bundle creation - Rancher UI +The Edge team maintains a ready to use link:https://github.com/suse-edge/fleet-examples/blob/release-3.1.1/bundles/day2/system-upgrade-controller-plans/os-upgrade/os-upgrade-bundle.yaml[bundle] that can be used in the below steps. + +[IMPORTANT] +==== +Always use this bundle from a valid Edge link:https://github.com/suse-edge/fleet-examples/releases[release] tag. +==== + +To create a bundle through Rancher's UI: + . In the upper left corner, click *☰ -> Continuous Delivery* . Go to *Advanced* > *Bundles* @@ -275,10 +297,15 @@ Once deployed, to monitor the OS upgrade process of the nodes of your targeted c . Select *Create from YAML* . From here you can create the Bundle in one of the following ways: ++ +[NOTE] +==== +There might be use-cases where you would need to include custom tolerations to the `SUC plans` that the bundle ships. Make sure to include those tolerations in the bundle that will be generated by the below steps. +==== -.. By manually copying the *Bundle* content to the *Create from YAML* page. Content can be retrieved from https://raw.githubusercontent.com/suse-edge/fleet-examples/$\{REVISION\}/bundles/day2/system-upgrade-controller-plans/os-upgrade/os-upgrade-bundle.yaml, where `$\{REVISION\}` is the Edge link:https://github.com/suse-edge/fleet-examples/releases[release] that you are using +.. By manually copying the link:https://raw.githubusercontent.com/suse-edge/fleet-examples/refs/tags/release-3.1.1/bundles/day2/system-upgrade-controller-plans/os-upgrade/os-upgrade-bundle.yaml[bundle content] from `suse-edge/fleet-examples` to the *Create from YAML* page. -.. By cloning the link:https://github.com/suse-edge/fleet-examples.git[suse-edge/fleet-examples] repository to the desired link:https://github.com/suse-edge/fleet-examples/releases[release] tag and selecting the *Read from File* option in the *Create from YAML* page. From there, navigate to `bundles/day2/system-upgrade-controller-plans/os-upgrade` directory and select `os-upgrade-bundle.yaml`. 
This will auto-populate the *Create from YAML* page with the Bundle content. +.. By cloning the link:https://github.com/suse-edge/fleet-examples.git[suse-edge/fleet-examples] repository from the desired link:https://github.com/suse-edge/fleet-examples/releases[release] tag and selecting the *Read from File* option in the *Create from YAML* page. From there, navigate to the bundle location (`bundles/day2/system-upgrade-controller-plans/os-upgrade`) and select the bundle file. This will auto-populate the *Create from YAML* page with the bundle content. . Change the *target* clusters for the `Bundle`: @@ -298,16 +325,14 @@ spec: [#os-upgrade-suc-plan-deployment-bundle-manual] ===== Bundle creation - manual -. Choose the desired Edge link:https://github.com/suse-edge/fleet-examples/releases[release] tag that you wish to apply the OS upgrade *SUC Plans* from (referenced below as `$\{REVISION\}`). - . Pull the *Bundle* resource: + [,bash] ---- -curl -o os-upgrade-bundle.yaml https://raw.githubusercontent.com/suse-edge/fleet-examples/${REVISION}/bundles/day2/system-upgrade-controller-plans/os-upgrade/os-upgrade-bundle.yaml +curl -o os-upgrade-bundle.yaml https://raw.githubusercontent.com/suse-edge/fleet-examples/refs/tags/release-3.1.1/bundles/day2/system-upgrade-controller-plans/os-upgrade/os-upgrade-bundle.yaml ---- -. Edit the `Bundle` *target* configurations, under `spec.targets` provide your desired target list. By default the `Bundle` resources from the `suse-edge/fleet-examples` are *NOT* mapped to any down stream clusters. +. Edit the `Bundle` *target* configurations, under `spec.targets` provide your desired target list. By default the `Bundle` resources from the `suse-edge/fleet-examples` are *NOT* mapped to any downstream clusters. ** To match all clusters change the default `Bundle` *target* to: + @@ -343,11 +368,13 @@ To get the OS upgrade resources that you need, first determine the Edge link:htt After that, resources can be found at `fleets/day2/system-upgrade-controller-plans/os-upgrade`, where: -* `plan-control-plane.yaml` - `system-upgrade-controller` Plan resource for *control-plane* nodes +* `plan-control-plane.yaml` - `system-upgrade-controller` Plan resource for *control-plane* nodes. + +* `plan-worker.yaml` - `system-upgrade-controller` Plan resource for *worker* nodes. -* `plan-agent.yaml` - `system-upgrade-controller` Plan resource for *agent* nodes +* `secret.yaml` - secret that ships the `upgrade.sh` script. -* `secret.yaml` - secret that ships a script that creates the `os-pkg-update.service/os-migration.service` link:https://www.freedesktop.org/software/systemd/man/latest/systemd.service.html[systemd.service] +* `config-map.yaml` - ConfigMap that provides upgrade configurations that are consumed by the `upgrade.sh` script. [IMPORTANT] ==== diff --git a/asciidoc/day2/downstream-clusters-introduction.adoc b/asciidoc/day2/downstream-clusters-introduction.adoc index 7179ca2e..2df199b8 100644 --- a/asciidoc/day2/downstream-clusters-introduction.adoc +++ b/asciidoc/day2/downstream-clusters-introduction.adoc @@ -23,13 +23,13 @@ This section is meant to be a *starting point* for the `Day 2` operations docume [#day2-downstream-components] === Components -Below you can find a description of the default components that should be setup on either your `management cluster` or your `downstream clusters` so that you can successfully perform `Day 2` operations. 
+Below you can find a description of the default components that should be set up on either your `management cluster` or your `downstream clusters` so that you can successfully perform `Day 2` operations. ==== Rancher [NOTE] ==== -For use-cases where you want to utilise <> without Rancher, you can skip the Rancher component all together. +For use-cases where you want to utilize <> without Rancher, you can skip the Rancher component altogether. ==== Responsible for the management of your `downstream clusters`. Should be deployed on your `management cluster`. @@ -56,7 +56,9 @@ For use-cases, where a third party GitOps tool usage is desired, see: . For `Kubernetes distribution upgrades` - <> -. For `Helm chart upgrades` - retrieve the chart version supported by the desired Edge release from the <> page and populate the chart version and URL in your third party GitOps tool +. For `EIB deployed Helm chart upgrades` - <> + +. For `non-EIB deployed Helm chart upgrades` - retrieve the chart version supported by the desired Edge release from the <> page and populate the chart version and URL in your third party GitOps tool ==== ==== System Upgrade Controller (SUC) @@ -83,7 +85,7 @@ Below you can find more information regarding what these resources do and for wh A `GitRepo` is a <> resource that represents a Git repository from which `Fleet` can create `Bundles`. Each `Bundle` is created based on configuration paths defined inside of the `GitRepo` resource. For more information, see the https://fleet.rancher.io/gitrepo-add[GitRepo] documentation. -In terms of `Day 2` operations `GitRepo` resources are normally used to deploy `SUC` or `SUC Plans` on *non air-gapped* environments that utilise a _Fleet GitOps_ approach. +In terms of `Day 2` operations, `GitRepo` resources are normally used to deploy `SUC` or `SUC Plans` on *non air-gapped* environments that utilize a _Fleet GitOps_ approach. Alternatively, `GitRepo` resources can also be used to deploy `SUC` or `SUC Plans` on *air-gapped* environments, *if you mirror your repository setup through a local git server*. @@ -91,7 +93,7 @@ Alternatively, `GitRepo` resources can also be used to deploy `SUC` or `SUC Plan `Bundles` hold *raw* Kubernetes resources that will be deployed on the targeted cluster. Usually they are created from a `GitRepo` resource, but there are use-cases where they can be deployed manually. For more information refer to the https://fleet.rancher.io/bundle-add[Bundle] documentation. -In terms of `Day 2` operations `Bundle` resources are normally used to deploy `SUC` or `SUC Plans` on *air-gapped* environments that do not use some form of _local GitOps_ procedure (e.g. a *local git server*). +In terms of `Day 2` operations, `Bundle` resources are normally used to deploy `SUC` or `SUC Plans` on *air-gapped* environments that do not use some form of _local GitOps_ procedure (e.g. a *local git server*). Alternatively, if your use-case does not allow for a _GitOps_ workflow (e.g. using a Git repository), *Bundle* resources could also be used to deploy `SUC` or `SUC Plans` on *non air-gapped* environments. 
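For the Bundle path described above, a minimal sketch of converting a Fleet directory into a deployable Bundle, mirroring the `fleet apply` invocation used in the Helm upgrade section (the `os-upgrade` name and `targets.yaml` file are illustrative):

[,bash]
----
# Run from within the Fleet's directory: convert it into a Bundle
fleet apply --compress --targets-file=targets.yaml -n fleet-default -o - os-upgrade > os-upgrade-bundle.yaml

# Deploy the generated Bundle on the management cluster
kubectl apply -f os-upgrade-bundle.yaml
----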
From 8492354d445ec1ed86d7eddc85031b78163f6b11 Mon Sep 17 00:00:00 2001
From: Steven Hardy 
Date: Tue, 12 Nov 2024 15:40:09 +0000
Subject: [PATCH 3/8] Prepare releasenotes for 3.1.1 release

(cherry picked from commit 83f6daa4a6d76a676fcbb2b68901f98188e091eb)
---
 asciidoc/edge-book/releasenotes.adoc | 113 ++++++++++++++++++++++++++-
 1 file changed, 111 insertions(+), 2 deletions(-)

diff --git a/asciidoc/edge-book/releasenotes.adoc b/asciidoc/edge-book/releasenotes.adoc
index cbfd151b..bb987f35 100644
--- a/asciidoc/edge-book/releasenotes.adoc
+++ b/asciidoc/edge-book/releasenotes.adoc
@@ -28,6 +28,115 @@ However, repeated entries are provided as a courtesy only. Therefore, if you are
 
 NOTE: SUSE Edge z-stream releases are tightly integrated and thoroughly tested as a versioned stack. Upgrade of any individual component to a different version than those listed above is likely to result in system downtime. While it's possible to run Edge clusters in untested configurations, it is not recommended, and it may take longer to provide resolution through the support channels.
 
+= Release 3.1.1
+
+Availability Date: 15th November 2024
+
+Summary: SUSE Edge 3.1.1 is the first z-stream release in the SUSE Edge 3.1 release stream.
+
+== New Features
+
+* The NeuVector version is updated to `5.4.0`, which provides several new features: https://open-docs.neuvector.com/releasenotes/5x#release-notes-for-5x[Release Notes]
+
+== Bug & Security Fixes
+
+* The Rancher version is updated to `2.9.3`: https://github.com/rancher/rancher/releases/tag/v2.9.3[Release Notes]
+* The RKE2 version is updated to `1.30.5`: https://docs.rke2.io/release-notes/v1.30.X#release-v1305rke2r1[Release Notes]
+* The K3s version is updated to `1.30.5`: https://docs.k3s.io/release-notes/v1.30.X#release-v1305k3s1[Release Notes]
+* The Metal^3^ chart fixes an issue with the handling of the `predictableNicNames` parameter: https://github.com/suse-edge/charts/pull/160[SUSE Edge issue #160]
+* The Metal^3^ chart resolves security issues identified in https://www.cve.org/CVERecord?id=CVE-2024-43803[CVE-2024-43803]: https://github.com/suse-edge/charts/pull/162[SUSE Edge issue #162]
+* The Metal^3^ chart resolves security issues identified in https://www.cve.org/CVERecord?id=CVE-2024-44082[CVE-2024-44082]: https://github.com/suse-edge/charts/pull/160[SUSE Edge issue #160]
+* The RKE2 CAPI provider is updated to resolve an issue where etcd becomes unavailable on update: https://github.com/rancher/cluster-api-provider-rke2/issues/449[RKE2 provider issue #449]
+
+== Components Versions
+
+The following table describes the individual components that make up the 3.1.1 release, including the version, the Helm chart version (if applicable), and from where the released artifact can be pulled in the binary format. Please follow the associated documentation for usage and deployment examples. Note that items in bold are highlighted changes from the previous z-stream release.
+
+|======
+| Name | Version | Helm Chart Version | Artifact Location (URL/Image)
+| SLE Micro | 6.0 (latest) | N/A | https://www.suse.com/download/sle-micro/[SLE Micro Download Page] +
+SL-Micro.x86_64-6.0-Base-SelfInstall-GM2.install.iso (sha256 bc7c3210c8a9b688d2713ad87f17e2c90cb99fd6dee1db528a5ff7f239cbcf79) +
+SL-Micro.x86_64-6.0-Base-RT-SelfInstall-GM2.install.iso (sha256 8242895e21745aec15ef526a95272887fa95dd832782b2cea4a95f41493f6648) +
+SL-Micro.x86_64-6.0-Base-GM2.raw.xz (sha256 7ae13d080e66c8b35624b6566b5eaff0875c8c141d0def9fbaee5876781ed81b) +
+SL-Micro.x86_64-6.0-Base-RT-GM2.raw.xz (sha256 9a19078c062ab52c62c0254e11f5a5a9fac938fd094abff5aa5eac2ec00b2d4e) +
+| SUSE Manager | 5.0.0 | N/A | https://www.suse.com/download/suse-manager/[SUSE Manager Download Page]
+s| K3s s| 1.30.5 | N/A | https://github.com/k3s-io/k3s/releases/tag/v1.30.5%2Bk3s1[Upstream K3s Release]
+s| RKE2 s| 1.30.5 | N/A | https://github.com/rancher/rke2/releases/tag/v1.30.5%2Brke2r1[Upstream RKE2 Release]
+s| Rancher Prime s| 2.9.3 s| 2.9.3 | https://github.com/rancher/rancher/releases/download/v2.9.3/rancher-images.txt[Rancher 2.9.3 Images] +
+https://charts.rancher.com/server-charts/prime[Rancher Prime Helm Repo]
+| Longhorn | 1.7.1 | 104.2.0+up1.7.1 | https://raw.githubusercontent.com/longhorn/longhorn/v1.7.1/deploy/longhorn-images.txt[Longhorn 1.7.1 Images] +
+https://charts.longhorn.io[Longhorn Helm Repo]
+| NM Configurator | 0.3.1 | N/A | https://github.com/suse-edge/nm-configurator/releases/tag/v0.3.1[NMConfigurator Upstream Release]
+s| NeuVector s| 5.4.0 s| 104.0.2+up2.8.0 | *registry.suse.com/rancher/mirrored-neuvector-controller:5.4.0* +
+*registry.suse.com/rancher/mirrored-neuvector-enforcer:5.4.0* +
+*registry.suse.com/rancher/mirrored-neuvector-manager:5.4.0* +
+*registry.suse.com/rancher/mirrored-neuvector-prometheus-exporter:5.4.0* +
+*registry.suse.com/rancher/mirrored-neuvector-compliance-config:1.0.0* +
+*registry.suse.com/rancher/mirrored-neuvector-registry-adapter:0.1.2* +
+registry.suse.com/rancher/mirrored-neuvector-scanner:latest +
+registry.suse.com/rancher/mirrored-neuvector-updater:latest
+s| Rancher Turtles (CAPI) | 0.11.0 s| 0.3.3 | *registry.suse.com/edge/3.1/rancher-turtles-chart:0.3.3* +
+registry.rancher.com/rancher/rancher/turtles:v0.11.0 +
+registry.suse.com/edge/3.1/cluster-api-operator:0.12.0 +
+registry.suse.com/edge/3.1/cluster-api-controller:1.7.5 +
+registry.suse.com/edge/3.1/cluster-api-provider-metal3:1.7.1 +
+*registry.suse.com/edge/3.1/cluster-api-provider-rke2-bootstrap:0.7.1* +
+*registry.suse.com/edge/3.1/cluster-api-provider-rke2-controlplane:0.7.1*
+s| Metal^3^ | 0.8.3 s| 0.8.3 | *registry.suse.com/edge/3.1/metal3-chart:0.8.3* +
+*registry.suse.com/edge/3.1/baremetal-operator:0.6.2* +
+registry.suse.com/edge/3.1/ip-address-manager:1.7.1 +
+*registry.suse.com/edge/3.1/ironic:24.1.3.0* +
+*registry.suse.com/edge/3.1/ironic-ipa-downloader:2.0.1* +
+registry.suse.com/edge/3.1/kube-rbac-proxy:v0.18.0 +
+registry.suse.com/edge/mariadb:10.6.15.1
+| MetalLB | 0.14.9 | 0.14.9 | registry.suse.com/edge/3.1/metallb-chart:0.14.9 +
+registry.suse.com/edge/3.1/metallb-controller:v0.14.9 +
+registry.suse.com/edge/3.1/metallb-speaker:v0.14.9 +
+registry.suse.com/edge/3.1/frr:8.4 +
+registry.suse.com/edge/3.1/frr-k8s:v0.0.14
+| Elemental | 1.6.4 | 104.2.0+up1.6.4 | registry.suse.com/rancher/elemental-operator-chart:1.6.4 +
+registry.suse.com/rancher/elemental-operator-crds-chart:1.6.4 +
+registry.suse.com/rancher/elemental-operator:1.6.4
+| Elemental Dashboard Extension | 2.0.0 | 2.0.0 | link:https://github.com/rancher/ui-plugin-charts/tree/2.1.0/charts/elemental/2.0.0[Elemental Extension chart]
+| Edge Image Builder | 1.1.0 | N/A | registry.suse.com/edge/3.1/edge-image-builder:1.1.0
+| KubeVirt | 1.3.1 | 0.4.0 | registry.suse.com/edge/3.1/kubevirt-chart:0.4.0 +
+registry.suse.com/suse/sles/15.6/virt-operator:1.3.1 +
+registry.suse.com/suse/sles/15.6/virt-api:1.3.1 +
+registry.suse.com/suse/sles/15.6/virt-controller:1.3.1 +
+registry.suse.com/suse/sles/15.6/virt-exportproxy:1.3.1 +
+registry.suse.com/suse/sles/15.6/virt-exportserver:1.3.1 +
+registry.suse.com/suse/sles/15.6/virt-handler:1.3.1 +
+registry.suse.com/suse/sles/15.6/virt-launcher:1.3.1
+| KubeVirt Dashboard Extension | 1.1.0 | 1.1.0 | registry.suse.com/edge/3.1/kubevirt-dashboard-extension-chart:1.1.0
+| Containerized Data Importer | 1.60.1 | 0.4.0 | registry.suse.com/edge/3.1/cdi-chart:0.4.0 +
+registry.suse.com/suse/sles/15.6/cdi-operator:1.60.1 +
+registry.suse.com/suse/sles/15.6/cdi-controller:1.60.1 +
+registry.suse.com/suse/sles/15.6/cdi-importer:1.60.1 +
+registry.suse.com/suse/sles/15.6/cdi-cloner:1.60.1 +
+registry.suse.com/suse/sles/15.6/cdi-apiserver:1.60.1 +
+registry.suse.com/suse/sles/15.6/cdi-uploadserver:1.60.1 +
+registry.suse.com/suse/sles/15.6/cdi-uploadproxy:1.60.1
+| Endpoint Copier Operator | 0.2.0 | 0.2.1 | registry.suse.com/edge/3.1/endpoint-copier-operator:v0.2.1 +
+registry.suse.com/edge/3.1/endpoint-copier-operator-chart:0.2.1
+| Akri (Tech Preview) | 0.12.20 | 0.12.20 | registry.suse.com/edge/3.1/akri-chart:0.12.20 +
+registry.suse.com/edge/3.1/akri-dashboard-extension-chart:1.1.0 +
+registry.suse.com/edge/3.1/akri-agent:v0.12.20 +
+registry.suse.com/edge/3.1/akri-controller:v0.12.20 +
+registry.suse.com/edge/3.1/akri-debug-echo-discovery-handler:v0.12.20 +
+registry.suse.com/edge/3.1/akri-onvif-discovery-handler:v0.12.20 +
+registry.suse.com/edge/3.1/akri-opcua-discovery-handler:v0.12.20 +
+registry.suse.com/edge/3.1/akri-udev-discovery-handler:v0.12.20 +
+registry.suse.com/edge/3.1/akri-webhook-configuration:v0.12.20
+| SR-IOV Network Operator | 1.3.0 | 1.3.0 | registry.suse.com/edge/3.1/sriov-network-operator-chart:1.3.0 +
+registry.suse.com/edge/3.1/sriov-crd-chart:1.3.0
+| System Upgrade Controller | 0.13.4 | 104.0.0+up0.7.0 | link:https://charts.rancher.io[System Upgrade Controller chart] +
+registry.suse.com/rancher/system-upgrade-controller:v0.13.4
+| Upgrade Controller | 0.1.0 | 0.1.0 | registry.suse.com/edge/3.1/upgrade-controller-chart:0.1.0 +
+registry.suse.com/edge/3.1/upgrade-controller:0.1.0 +
+registry.suse.com/edge/3.1/kubectl:1.30.3 +
+*registry.suse.com/edge/3.1/release-manifest:3.1.1*
+|======
+
 = Release 3.1.0
 
 Availability Date: 11th October 2024
 
@@ -74,7 +183,7 @@ Summary: SUSE Edge 3.1.0 is the first release in the SUSE Edge 3.1 release strea
 
 == Components Versions
 
-The following table describes the individual components that make up the 3.1 release, including the version, the Helm chart version (if applicable), and from where the released artifact can be pulled in the binary format. Please follow the associated documentation for usage and deployment examples. Note that items in bold are highlighted changes from the previous z-stream release.
+The following table describes the individual components that make up the 3.1 release, including the version, the Helm chart version (if applicable), and from where the released artifact can be pulled in the binary format. Please follow the associated documentation for usage and deployment examples.
|====== | Name | Version | Helm Chart Version | Artifact Location (URL/Image) @@ -105,7 +214,7 @@ registry.suse.com/edge/3.1/cluster-api-controller:1.7.5 + registry.suse.com/edge/3.1/cluster-api-provider-metal3:1.7.1 + registry.suse.com/edge/3.1/cluster-api-provider-rke2-bootstrap:0.7.0 + registry.suse.com/edge/3.1/cluster-api-provider-rke2-controlplane:0.7.0 -| Metal^3^ | 1.16.0 | 0.8.1 | registry.suse.com/edge/3.1/metal3-chart:0.8.1 + +| Metal^3^ | 0.8.1 | 0.8.1 | registry.suse.com/edge/3.1/metal3-chart:0.8.1 + registry.suse.com/edge/3.1/baremetal-operator:0.6.1 + registry.suse.com/edge/3.1/ip-address-manager:1.7.1 + registry.suse.com/edge/3.1/ironic:24.1.2.0 + From ef2e6cf67ba56db8af7f19da9809d068c33316b9 Mon Sep 17 00:00:00 2001 From: Steven Hardy Date: Tue, 12 Nov 2024 15:42:50 +0000 Subject: [PATCH 4/8] Update RKE2/K3s version to 1.30.5 (cherry picked from commit 1e5cd91ccceb95dfd5729eafd7508030ec3f3217) --- asciidoc/components/longhorn.adoc | 2 +- asciidoc/components/virtualization.adoc | 6 ++--- asciidoc/edge-book/version-matrix.adoc | 4 +-- .../guides/air-gapped-eib-deployments.adoc | 26 +++++++++---------- asciidoc/integrations/nvidia-slemicro.adoc | 4 +-- asciidoc/misc/rke2-selinux.adoc | 2 +- .../product/atip-automated-provision.adoc | 6 ++--- asciidoc/product/atip-management-cluster.adoc | 14 +++++----- asciidoc/quickstart/eib.adoc | 4 +-- asciidoc/quickstart/elemental.adoc | 4 +-- asciidoc/quickstart/metal3.adoc | 6 ++--- 11 files changed, 39 insertions(+), 39 deletions(-) diff --git a/asciidoc/components/longhorn.adoc b/asciidoc/components/longhorn.adoc index f2441def..1e1dde7a 100644 --- a/asciidoc/components/longhorn.adoc +++ b/asciidoc/components/longhorn.adoc @@ -264,7 +264,7 @@ image: arch: x86_64 outputImageName: eib-image.iso kubernetes: - version: v1.30.3+rke2r1 + version: v1.30.5+rke2r1 helm: charts: - name: longhorn diff --git a/asciidoc/components/virtualization.adoc b/asciidoc/components/virtualization.adoc index 66e793d1..e0afbc3e 100644 --- a/asciidoc/components/virtualization.adoc +++ b/asciidoc/components/virtualization.adoc @@ -66,9 +66,9 @@ This should show something similar to the following: [,shell] ---- NAME STATUS ROLES AGE VERSION -node1.edge.rdo.wales Ready control-plane,etcd,master 4h20m v1.30.3+rke2r1 -node2.edge.rdo.wales Ready control-plane,etcd,master 4h15m v1.30.3+rke2r1 -node3.edge.rdo.wales Ready control-plane,etcd,master 4h15m v1.30.3+rke2r1 +node1.edge.rdo.wales Ready control-plane,etcd,master 4h20m v1.30.5+rke2r1 +node2.edge.rdo.wales Ready control-plane,etcd,master 4h15m v1.30.5+rke2r1 +node3.edge.rdo.wales Ready control-plane,etcd,master 4h15m v1.30.5+rke2r1 ---- Now you can proceed to install the *KubeVirt* and *Containerized Data Importer (CDI)* Helm charts: diff --git a/asciidoc/edge-book/version-matrix.adoc b/asciidoc/edge-book/version-matrix.adoc index 7340fc86..33949a20 100644 --- a/asciidoc/edge-book/version-matrix.adoc +++ b/asciidoc/edge-book/version-matrix.adoc @@ -19,8 +19,8 @@ endif::[] | SLE Micro | 6 | N/A | Rancher Prime | 2.9.1 | 2.9.1 | Fleet | 0.10.1 | 104.0.1+up0.10.1 -| K3s | 1.30.3 | N/A -| RKE2 | 1.30.3 | N/A +| K3s | 1.30.5 | N/A +| RKE2 | 1.30.5 | N/A | Metal^3^ | 1.16.0 | 0.8.1 | MetalLB | 0.14.9 | 0.14.9 | Elemental | 1.6.4 | 104.2.0+up1.6.4 diff --git a/asciidoc/guides/air-gapped-eib-deployments.adoc b/asciidoc/guides/air-gapped-eib-deployments.adoc index ef1c54e7..494fa7f7 100644 --- a/asciidoc/guides/air-gapped-eib-deployments.adoc +++ b/asciidoc/guides/air-gapped-eib-deployments.adoc @@ -174,7 +174,7 @@ 
operatingSystem: - username: root encryptedPassword: $6$jHugJNNd3HElGsUZ$eodjVe4te5ps44SVcWshdfWizrP.xAyd71CVEXazBJ/.v799/WRCBXxfYmunlBO2yp1hm/zb4r8EmnrrNCF.P/ kubernetes: - version: v1.30.3+rke2r1 + version: v1.30.5+rke2r1 embeddedArtifactRegistry: images: - ... @@ -184,8 +184,8 @@ The `image` section is required, and it specifies the input image, its architect The `operatingSystem` section is optional, and contains configuration to enable login on the provisioned systems with the `root/eib` username/password. -The `kubernetes` section is optional, and it defines the Kubernetes type and version. We are going to use Kubernetes 1.30.3 and RKE2 by default. -Use `kubernetes.version: v1.30.3+k3s1` if K3s is desired instead. Unless explicitly configured via the `kubernetes.nodes` field, all clusters we bootstrap in this guide will be single-node ones. +The `kubernetes` section is optional, and it defines the Kubernetes type and version. We are going to use Kubernetes 1.30.5 and RKE2 by default. +Use `kubernetes.version: v1.30.5+k3s1` if K3s is desired instead. Unless explicitly configured via the `kubernetes.nodes` field, all clusters we bootstrap in this guide will be single-node ones. The `embeddedArtifactRegistry` section will include all images which are only referenced and pulled at runtime for the specific component. @@ -215,7 +215,7 @@ operatingSystem: - username: root encryptedPassword: $6$jHugJNNd3HElGsUZ$eodjVe4te5ps44SVcWshdfWizrP.xAyd71CVEXazBJ/.v799/WRCBXxfYmunlBO2yp1hm/zb4r8EmnrrNCF.P/ kubernetes: - version: v1.30.3+rke2r1 + version: v1.30.5+rke2r1 network: apiVIP: 192.168.100.151 manifests: @@ -258,12 +258,12 @@ embeddedArtifactRegistry: - name: registry.rancher.com/rancher/hardened-etcd:v3.5.13-k3s1-build20240531 - name: registry.rancher.com/rancher/hardened-flannel:v0.25.4-build20240610 - name: registry.rancher.com/rancher/hardened-k8s-metrics-server:v0.7.1-build20240401 - - name: registry.rancher.com/rancher/hardened-kubernetes:v1.30.3-rke2r1-build20240717 + - name: registry.rancher.com/rancher/hardened-kubernetes:v1.30.5-rke2r1-build20240717 - name: registry.rancher.com/rancher/hardened-multus-cni:v4.0.2-build20240612 - name: registry.rancher.com/rancher/hardened-node-feature-discovery:v0.15.4-build20240513 - name: registry.rancher.com/rancher/hardened-whereabouts:v0.7.0-build20240429 - name: registry.rancher.com/rancher/helm-project-operator:v0.2.1 - - name: registry.rancher.com/rancher/k3s-upgrade:v1.30.3-k3s1 + - name: registry.rancher.com/rancher/k3s-upgrade:v1.30.5-k3s1 - name: registry.rancher.com/rancher/klipper-helm:v0.8.4-build20240523 - name: registry.rancher.com/rancher/klipper-lb:v0.4.7 - name: registry.rancher.com/rancher/kube-api-auth:v0.2.2 @@ -281,12 +281,12 @@ embeddedArtifactRegistry: - name: registry.rancher.com/rancher/rancher:v2.9.1 - name: registry.rancher.com/rancher/rke-tools:v0.1.100 - name: registry.rancher.com/rancher/rke2-cloud-provider:v1.29.3-build20240515 - - name: registry.rancher.com/rancher/rke2-runtime:v1.30.3-rke2r1 - - name: registry.rancher.com/rancher/rke2-upgrade:v1.30.3-rke2r1 + - name: registry.rancher.com/rancher/rke2-runtime:v1.30.5-rke2r1 + - name: registry.rancher.com/rancher/rke2-upgrade:v1.30.5-rke2r1 - name: registry.rancher.com/rancher/security-scan:v0.2.16 - name: registry.rancher.com/rancher/shell:v0.2.1 - - name: registry.rancher.com/rancher/system-agent-installer-k3s:v1.30.3-k3s1 - - name: registry.rancher.com/rancher/system-agent-installer-rke2:v1.30.3-rke2r1 + - name: 
registry.rancher.com/rancher/system-agent-installer-k3s:v1.30.5-k3s1 + - name: registry.rancher.com/rancher/system-agent-installer-rke2:v1.30.5-rke2r1 - name: registry.rancher.com/rancher/system-agent:v0.3.8-suc - name: registry.rancher.com/rancher/system-upgrade-controller:v0.13.4 - name: registry.rancher.com/rancher/ui-plugin-catalog:2.0.1 @@ -415,7 +415,7 @@ operatingSystem: - username: root encryptedPassword: $6$jHugJNNd3HElGsUZ$eodjVe4te5ps44SVcWshdfWizrP.xAyd71CVEXazBJ/.v799/WRCBXxfYmunlBO2yp1hm/zb4r8EmnrrNCF.P/ kubernetes: - version: v1.30.3+rke2r1 + version: v1.30.5+rke2r1 helm: charts: - name: neuvector-crd @@ -547,7 +547,7 @@ operatingSystem: packageList: - open-iscsi kubernetes: - version: v1.30.3+rke2r1 + version: v1.30.5+rke2r1 helm: charts: - name: longhorn @@ -715,7 +715,7 @@ operatingSystem: - username: root encryptedPassword: $6$jHugJNNd3HElGsUZ$eodjVe4te5ps44SVcWshdfWizrP.xAyd71CVEXazBJ/.v799/WRCBXxfYmunlBO2yp1hm/zb4r8EmnrrNCF.P/ kubernetes: - version: v1.30.3+rke2r1 + version: v1.30.5+rke2r1 helm: charts: - name: kubevirt-chart diff --git a/asciidoc/integrations/nvidia-slemicro.adoc b/asciidoc/integrations/nvidia-slemicro.adoc index 57bc724e..c63628a5 100644 --- a/asciidoc/integrations/nvidia-slemicro.adoc +++ b/asciidoc/integrations/nvidia-slemicro.adoc @@ -272,7 +272,7 @@ This should show something similar to the following: [,shell] ---- NAME STATUS ROLES AGE VERSION -node0001 Ready control-plane,etcd,master 13d v1.30.3+rke2r1 +node0001 Ready control-plane,etcd,master 13d v1.30.5+rke2r1 ---- What you should find is that your k3s/rke2 installation has detected the NVIDIA Container Toolkit on the host and auto-configured the NVIDIA runtime integration into `containerd` (the Container Runtime Interface that k3s/rke2 use). Confirm this by checking the containerd `config.toml` file: @@ -467,7 +467,7 @@ operatingSystem: - url: https://nvidia.github.io/libnvidia-container/stable/rpm/x86_64 sccRegistrationCode: kubernetes: - version: v1.30.3+k3s1 + version: v1.30.5+k3s1 helm: charts: - name: nvidia-device-plugin diff --git a/asciidoc/misc/rke2-selinux.adoc b/asciidoc/misc/rke2-selinux.adoc index a5928ba6..978ef0c3 100644 --- a/asciidoc/misc/rke2-selinux.adoc +++ b/asciidoc/misc/rke2-selinux.adoc @@ -128,7 +128,7 @@ Install RKE2 Using Install Script [,bash] ---- curl -sfL https://get.rke2.io | INSTALL_RKE2_EXEC="server" \ - RKE2_SELINUX=true INSTALL_RKE2_VERSION=v1.30.3+rke2r1 sh - + RKE2_SELINUX=true INSTALL_RKE2_VERSION=v1.30.5+rke2r1 sh - # Enable and Start RKE2 systemctl enable --now rke2-server.service diff --git a/asciidoc/product/atip-automated-provision.adoc b/asciidoc/product/atip-automated-provision.adoc index 69a6ce32..21c6025b 100644 --- a/asciidoc/product/atip-automated-provision.adoc +++ b/asciidoc/product/atip-automated-provision.adoc @@ -706,7 +706,7 @@ The `RKE2ControlPlane` object specifies the control-plane configuration to be us Also, it contains the information about the number of replicas to be used (in this case, one) and the `CNI` plug-in to be used (in this case, `Cilium`). The agentConfig block contains the `Ignition` format to be used and the `additionalUserData` to be used to configure the `RKE2` node with information like a systemd named `rke2-preinstall.service` to replace automatically the `BAREMETALHOST_UUID` and `node-name` during the provisioning process using the Ironic information. To enable multus with cilium a file is created in the `rke2` server manifests directory named `rke2-cilium-config.yaml` with the configuration to be used. 
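For illustration, a minimal sketch of such a manifest, assuming the `HelmChartConfig` override mechanism and the Cilium `cni.exclusive` chart value (verify both against your RKE2 release before relying on them):

[,bash]
----
# Sketch only: let multus coexist with the bundled cilium CNI on an RKE2 server
# by overriding the rke2-cilium chart values (assumed value schema).
cat <<'EOF' > /var/lib/rancher/rke2/server/manifests/rke2-cilium-config.yaml
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-cilium
  namespace: kube-system
spec:
  valuesContent: |-
    cni:
      exclusive: false
EOF
----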
-The last block of information contains the Kubernetes version to be used. `$\{RKE2_VERSION\}` is the version of `RKE2` to be used replacing this value (for example, `v1.30.3+rke2r1`).
+The last block of information contains the Kubernetes version to be used. `$\{RKE2_VERSION\}` is the `RKE2` version that replaces this value (for example, `v1.30.5+rke2r1`).
 
 [,yaml]
 ----
@@ -983,7 +983,7 @@ The `RKE2ControlPlane` object specifies the control-plane configuration to be us
 ** The `storage` block which contains the Helm charts to be used to install the `MetalLB` and the `endpoint-copier-operator`.
 ** The `metalLB` custom resource file with the `IPaddressPool` and the `L2Advertisement` to be used (replacing `$\{EDGE_VIP_ADDRESS\}` with the `VIP` address).
 ** The `endpoint-svc.yaml` file to be used to configure the `kubernetes-vip` service to be used by the `MetalLB` to manage the `VIP` address.
-* The last block of information contains the Kubernetes version to be used. The `$\{RKE2_VERSION\}` is the version of `RKE2` to be used replacing this value (for example, `v1.30.3+rke2r1`).
+* The last block of information contains the Kubernetes version to be used. The `$\{RKE2_VERSION\}` placeholder is the `RKE2` version that replaces this value (for example, `v1.30.5+rke2r1`).
 
 [,yaml]
 ----
@@ -1398,7 +1398,7 @@ To make the process clear, the changes required on that block (`RKE2ControlPlane
 ** `performance-settings.service` to enable the CPU performance tuning.
 ** `sriov-custom-auto-vfs.service` to install the `sriov` Helm chart, wait until custom resources are created and run the `/var/sriov-auto-filler.sh` to replace the values in the config map `sriov-custom-auto-config` and create the `sriovnetworknodepolicy` to be used by the workloads.
 
-* The `$\{RKE2_VERSION\}` is the version of `RKE2` to be used replacing this value (for example, `v1.30.3+rke2r1`).
+* The `$\{RKE2_VERSION\}` placeholder is the `RKE2` version that replaces this value (for example, `v1.30.5+rke2r1`).
 
 With all these changes mentioned, the `RKE2ControlPlane` block in the `capi-provisioning-example.yaml` will look like the following:
 
diff --git a/asciidoc/product/atip-management-cluster.adoc b/asciidoc/product/atip-management-cluster.adoc
index d0d9c539..88e29cd8 100644
--- a/asciidoc/product/atip-management-cluster.adoc
+++ b/asciidoc/product/atip-management-cluster.adoc
@@ -366,7 +366,7 @@ kubernetes:
 #     type: server
 ----
 
-where `version` is the version of Kubernetes to be installed. In our case, we are using an RKE2 cluster, so the version must be minor less than 1.29 to be compatible with `Rancher` (for example, `v1.30.3+rke2r1`).
+where `version` is the version of Kubernetes to be installed. In our case, we are using an RKE2 cluster, so the Kubernetes minor version must be one that is supported by `Rancher` (for example, `v1.30.5+rke2r1`).
 
 The `helm` section contains the list of Helm charts to be installed, the repositories to be used, and the version configuration for all of them.
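Once the management cluster is up, a quick way to cross-check the result against this `helm` section (assuming a kubeconfig pointing at that cluster) is:

[,bash]
----
# List every installed Helm release and its chart version, to compare
# against the versions pinned in the EIB definition file.
helm list --all-namespaces
----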
@@ -1100,12 +1100,12 @@ embeddedArtifactRegistry: - name: registry.rancher.com/rancher/hardened-etcd:v3.5.13-k3s1-build20240531 - name: registry.rancher.com/rancher/hardened-flannel:v0.25.4-build20240610 - name: registry.rancher.com/rancher/hardened-k8s-metrics-server:v0.7.1-build20240401 - - name: registry.rancher.com/rancher/hardened-kubernetes:v1.30.3-rke2r1-build20240717 + - name: registry.rancher.com/rancher/hardened-kubernetes:v1.30.5-rke2r1-build20240717 - name: registry.rancher.com/rancher/hardened-multus-cni:v4.0.2-build20240612 - name: registry.rancher.com/rancher/hardened-node-feature-discovery:v0.15.4-build20240513 - name: registry.rancher.com/rancher/hardened-whereabouts:v0.7.0-build20240429 - name: registry.rancher.com/rancher/helm-project-operator:v0.2.1 - - name: registry.rancher.com/rancher/k3s-upgrade:v1.30.3-k3s1 + - name: registry.rancher.com/rancher/k3s-upgrade:v1.30.5-k3s1 - name: registry.rancher.com/rancher/klipper-helm:v0.8.4-build20240523 - name: registry.rancher.com/rancher/klipper-lb:v0.4.7 - name: registry.rancher.com/rancher/kube-api-auth:v0.2.2 @@ -1123,12 +1123,12 @@ embeddedArtifactRegistry: - name: registry.rancher.com/rancher/rancher:v2.9.1 - name: registry.rancher.com/rancher/rke-tools:v0.1.100 - name: registry.rancher.com/rancher/rke2-cloud-provider:v1.29.3-build20240515 - - name: registry.rancher.com/rancher/rke2-runtime:v1.30.3-rke2r1 - - name: registry.rancher.com/rancher/rke2-upgrade:v1.30.3-rke2r1 + - name: registry.rancher.com/rancher/rke2-runtime:v1.30.5-rke2r1 + - name: registry.rancher.com/rancher/rke2-upgrade:v1.30.5-rke2r1 - name: registry.rancher.com/rancher/security-scan:v0.2.16 - name: registry.rancher.com/rancher/shell:v0.2.1 - - name: registry.rancher.com/rancher/system-agent-installer-k3s:v1.30.3-k3s1 - - name: registry.rancher.com/rancher/system-agent-installer-rke2:v1.30.3-rke2r1 + - name: registry.rancher.com/rancher/system-agent-installer-k3s:v1.30.5-k3s1 + - name: registry.rancher.com/rancher/system-agent-installer-rke2:v1.30.5-rke2r1 - name: registry.rancher.com/rancher/system-agent:v0.3.8-suc - name: registry.rancher.com/rancher/system-upgrade-controller:v0.13.4 - name: registry.rancher.com/rancher/ui-plugin-catalog:2.0.1 diff --git a/asciidoc/quickstart/eib.adoc b/asciidoc/quickstart/eib.adoc index 3bd7ba6b..ea1ea72a 100644 --- a/asciidoc/quickstart/eib.adoc +++ b/asciidoc/quickstart/eib.adoc @@ -209,7 +209,7 @@ In this next example, we're going to take our existing image definition and will [,yaml] ---- kubernetes: - version: v1.30.3+rke2r1 + version: v1.30.5+rke2r1 manifests: urls: - https://k8s.io/examples/application/nginx-app.yaml @@ -242,7 +242,7 @@ operatingSystem: additionalRepos: - url: https://nvidia.github.io/libnvidia-container/stable/rpm/x86_64 kubernetes: - version: v1.30.3+rke2r1 + version: v1.30.5+rke2r1 manifests: urls: - https://k8s.io/examples/application/nginx-app.yaml diff --git a/asciidoc/quickstart/elemental.adoc b/asciidoc/quickstart/elemental.adoc index 2a1d370c..aa944ad1 100644 --- a/asciidoc/quickstart/elemental.adoc +++ b/asciidoc/quickstart/elemental.adoc @@ -347,7 +347,7 @@ metadata: name: location-123 namespace: fleet-default spec: - kubernetesVersion: v1.30.3+k3s1 + kubernetesVersion: v1.30.5+k3s1 rkeConfig: machinePools: - name: pool1 @@ -370,7 +370,7 @@ The UI extension allows for a few shortcuts to be taken. Note that managing mult . As before, open the left three-dot menu and select "OS Management." This brings you back to the main screen for managing your Elemental systems. . 
On the left sidebar, click "Inventory of Machines." This opens the inventory of machines that have registered. . To create a cluster from these machines, select the systems you want, click the "Actions" drop-down list, then "Create Elemental Cluster." This opens the Cluster Creation dialog while also creating a MachineSelectorTemplate to use in the background. -. On this screen, configure the cluster you want to be built. For this quick start, K3s v1.30.3+k3s1 is selected and the rest of the options are left as is. +. On this screen, configure the cluster you want to be built. For this quick start, K3s v1.30.5+k3s1 is selected and the rest of the options are left as is. + [TIP] ==== diff --git a/asciidoc/quickstart/metal3.adoc b/asciidoc/quickstart/metal3.adoc index 8390961e..784a72e2 100644 --- a/asciidoc/quickstart/metal3.adoc +++ b/asciidoc/quickstart/metal3.adoc @@ -540,7 +540,7 @@ spec: ExecStartPost=/bin/sh -c "umount /mnt" [Install] WantedBy=multi-user.target - version: v1.30.3+rke2r1 + version: v1.30.5+rke2r1 --- apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: Metal3MachineTemplate @@ -633,7 +633,7 @@ spec: kind: Metal3MachineTemplate name: sample-cluster-workers nodeDrainTimeout: 0s - version: v1.30.3+rke2r1 + version: v1.30.5+rke2r1 --- apiVersion: bootstrap.cluster.x-k8s.io/v1alpha1 kind: RKE2ConfigTemplate @@ -645,7 +645,7 @@ spec: spec: agentConfig: format: ignition - version: v1.30.3+rke2r1 + version: v1.30.5+rke2r1 kubelet: extraArgs: - provider-id=metal3://BAREMETALHOST_UUID From 230d2b816270225d857920f45129174b488e6549 Mon Sep 17 00:00:00 2001 From: Steven Hardy Date: Tue, 12 Nov 2024 16:39:12 +0000 Subject: [PATCH 5/8] Update Rancher version (cherry picked from commit d6bcaf4c667a6aaa7b284e353fd80361bed3e6ee) --- asciidoc/edge-book/version-matrix.adoc | 6 +++--- asciidoc/guides/air-gapped-eib-deployments.adoc | 8 ++++---- asciidoc/product/atip-management-cluster.adoc | 10 +++++----- asciidoc/quickstart/elemental.adoc | 2 +- 4 files changed, 13 insertions(+), 13 deletions(-) diff --git a/asciidoc/edge-book/version-matrix.adoc b/asciidoc/edge-book/version-matrix.adoc index 33949a20..3627ece2 100644 --- a/asciidoc/edge-book/version-matrix.adoc +++ b/asciidoc/edge-book/version-matrix.adoc @@ -16,9 +16,9 @@ endif::[] [options="header"] |====== | Name | Version | Chart Version -| SLE Micro | 6 | N/A -| Rancher Prime | 2.9.1 | 2.9.1 -| Fleet | 0.10.1 | 104.0.1+up0.10.1 +| SLE Micro | 6.0 | N/A +| Rancher Prime | 2.9.3 | 2.9.3 +| Fleet | 0.10.4 | 104.1.0+up0.10.4 | K3s | 1.30.5 | N/A | RKE2 | 1.30.5 | N/A | Metal^3^ | 1.16.0 | 0.8.1 diff --git a/asciidoc/guides/air-gapped-eib-deployments.adoc b/asciidoc/guides/air-gapped-eib-deployments.adoc index 494fa7f7..e004e73f 100644 --- a/asciidoc/guides/air-gapped-eib-deployments.adoc +++ b/asciidoc/guides/air-gapped-eib-deployments.adoc @@ -196,7 +196,7 @@ The `embeddedArtifactRegistry` section will include all images which are only re The <> deployment that will be demonstrated will be highly slimmed down for demonstration purposes. For your actual deployments, additional artifacts may be necessary depending on your configuration. ==== -The https://github.com/rancher/rancher/releases/tag/v2.9.1[Rancher v2.9.1] release assets contain a `rancher-images.txt` file which lists all the images required for an air-gapped installation. +The https://github.com/rancher/rancher/releases/tag/v2.9.3[Rancher v2.9.3] release assets contain a `rancher-images.txt` file which lists all the images required for an air-gapped installation. 
There are over 600 container images in total which means that the resulting CRB image would be roughly 30GB. For our Rancher installation, we will strip down that list to the smallest working configuration. From there, you can add back any images you may need for your deployments. @@ -224,7 +224,7 @@ kubernetes: helm: charts: - name: rancher - version: 2.9.1 + version: 2.9.3 repositoryName: rancher-prime valuesFile: rancher-values.yaml targetNamespace: cattle-system @@ -275,10 +275,10 @@ embeddedArtifactRegistry: - name: registry.rancher.com/rancher/prometheus-federator:v0.3.4 - name: registry.rancher.com/rancher/pushprox-client:v0.1.3-rancher2-client - name: registry.rancher.com/rancher/pushprox-proxy:v0.1.3-rancher2-proxy - - name: registry.rancher.com/rancher/rancher-agent:v2.9.1 + - name: registry.rancher.com/rancher/rancher-agent:v2.9.3 - name: registry.rancher.com/rancher/rancher-csp-adapter:v4.0.0 - name: registry.rancher.com/rancher/rancher-webhook:v0.5.1 - - name: registry.rancher.com/rancher/rancher:v2.9.1 + - name: registry.rancher.com/rancher/rancher:v2.9.3 - name: registry.rancher.com/rancher/rke-tools:v0.1.100 - name: registry.rancher.com/rancher/rke2-cloud-provider:v1.29.3-build20240515 - name: registry.rancher.com/rancher/rke2-runtime:v1.30.5-rke2r1 diff --git a/asciidoc/product/atip-management-cluster.adoc b/asciidoc/product/atip-management-cluster.adoc index 88e29cd8..6a6da040 100644 --- a/asciidoc/product/atip-management-cluster.adoc +++ b/asciidoc/product/atip-management-cluster.adoc @@ -209,7 +209,7 @@ kubernetes: installationNamespace: kube-system valuesFile: neuvector.yaml - name: rancher - version: 2.9.1 + version: 2.9.3 repositoryName: rancher-prime targetNamespace: cattle-system createNamespace: true @@ -338,7 +338,7 @@ kubernetes: installationNamespace: kube-system valuesFile: neuvector.yaml - name: rancher - version: 2.9.1 + version: 2.9.3 repositoryName: rancher-prime targetNamespace: cattle-system createNamespace: true @@ -1056,7 +1056,7 @@ kubernetes: installationNamespace: kube-system valuesFile: neuvector.yaml - name: rancher - version: 2.9.1 + version: 2.9.3 repositoryName: rancher-prime targetNamespace: cattle-system createNamespace: true @@ -1117,10 +1117,10 @@ embeddedArtifactRegistry: - name: registry.rancher.com/rancher/prometheus-federator:v0.3.4 - name: registry.rancher.com/rancher/pushprox-client:v0.1.3-rancher2-client - name: registry.rancher.com/rancher/pushprox-proxy:v0.1.3-rancher2-proxy - - name: registry.rancher.com/rancher/rancher-agent:v2.9.1 + - name: registry.rancher.com/rancher/rancher-agent:v2.9.3 - name: registry.rancher.com/rancher/rancher-csp-adapter:v4.0.0 - name: registry.rancher.com/rancher/rancher-webhook:v0.5.1 - - name: registry.rancher.com/rancher/rancher:v2.9.1 + - name: registry.rancher.com/rancher/rancher:v2.9.3 - name: registry.rancher.com/rancher/rke-tools:v0.1.100 - name: registry.rancher.com/rancher/rke2-cloud-provider:v1.29.3-build20240515 - name: registry.rancher.com/rancher/rke2-runtime:v1.30.5-rke2r1 diff --git a/asciidoc/quickstart/elemental.adoc b/asciidoc/quickstart/elemental.adoc index aa944ad1..7251985c 100644 --- a/asciidoc/quickstart/elemental.adoc +++ b/asciidoc/quickstart/elemental.adoc @@ -90,7 +90,7 @@ helm install rancher rancher-prime/rancher \ --set hostname= \ --set replicas=1 \ --set bootstrapPassword= \ - --version 2.9.1 + --version 2.9.3 ---- [NOTE] From f484e48f2fece71d9e3af961af59f201c789913d Mon Sep 17 00:00:00 2001 From: Steven Hardy Date: Tue, 12 Nov 2024 16:41:40 +0000 Subject: [PATCH 6/8] 
Update metal3 to 0.8.3 (cherry picked from commit 86d2798d3433b5aac37a63499b1f305dc7f048f1) --- asciidoc/edge-book/version-matrix.adoc | 2 +- asciidoc/product/atip-lifecycle.adoc | 2 +- asciidoc/product/atip-management-cluster.adoc | 6 +++--- 3 files changed, 5 insertions(+), 5 deletions(-) diff --git a/asciidoc/edge-book/version-matrix.adoc b/asciidoc/edge-book/version-matrix.adoc index 3627ece2..14dbe843 100644 --- a/asciidoc/edge-book/version-matrix.adoc +++ b/asciidoc/edge-book/version-matrix.adoc @@ -21,7 +21,7 @@ endif::[] | Fleet | 0.10.4 | 104.1.0+up0.10.4 | K3s | 1.30.5 | N/A | RKE2 | 1.30.5 | N/A -| Metal^3^ | 1.16.0 | 0.8.1 +| Metal^3^ | 1.16.0 | 0.8.3 | MetalLB | 0.14.9 | 0.14.9 | Elemental | 1.6.4 | 104.2.0+up1.6.4 | Edge Image Builder | 1.1.0 | N/A diff --git a/asciidoc/product/atip-lifecycle.adoc b/asciidoc/product/atip-lifecycle.adoc index 46524d21..af5d4765 100644 --- a/asciidoc/product/atip-lifecycle.adoc +++ b/asciidoc/product/atip-lifecycle.adoc @@ -38,7 +38,7 @@ helm get values metal3 -n metal3-system -o yaml > metal3-values.yaml helm upgrade metal3 suse-edge/metal3 \ --namespace metal3-system \ -f metal3-values.yaml \ - --version=0.8.1 + --version=0.8.3 ---- === Downstream cluster upgrades diff --git a/asciidoc/product/atip-management-cluster.adoc b/asciidoc/product/atip-management-cluster.adoc index 6a6da040..96f6b675 100644 --- a/asciidoc/product/atip-management-cluster.adoc +++ b/asciidoc/product/atip-management-cluster.adoc @@ -182,7 +182,7 @@ kubernetes: createNamespace: true installationNamespace: kube-system - name: metal3-chart - version: 0.8.1 + version: 0.8.3 repositoryName: suse-edge-charts targetNamespace: metal3-system createNamespace: true @@ -311,7 +311,7 @@ kubernetes: createNamespace: true installationNamespace: kube-system - name: metal3-chart - version: 0.8.1 + version: 0.8.3 repositoryName: suse-edge-charts targetNamespace: metal3-system createNamespace: true @@ -1022,7 +1022,7 @@ kubernetes: createNamespace: true installationNamespace: kube-system - name: metal3-chart - version: 0.8.1 + version: 0.8.3 repositoryName: suse-edge-charts targetNamespace: metal3-system createNamespace: true From d723146c38a6cd888717c222c5a9cbdce579069f Mon Sep 17 00:00:00 2001 From: Steven Hardy Date: Thu, 14 Nov 2024 11:56:05 +0000 Subject: [PATCH 7/8] Update rancher-turtles to 0.3.3 (cherry picked from commit 40990dbe38f3dfef0e6fa38ef545d0db4a44eb11) --- asciidoc/product/atip-management-cluster.adoc | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/asciidoc/product/atip-management-cluster.adoc b/asciidoc/product/atip-management-cluster.adoc index 96f6b675..5d83f3ea 100644 --- a/asciidoc/product/atip-management-cluster.adoc +++ b/asciidoc/product/atip-management-cluster.adoc @@ -189,7 +189,7 @@ kubernetes: installationNamespace: kube-system valuesFile: metal3.yaml - name: rancher-turtles-chart - version: 0.3.2 + version: 0.3.3 repositoryName: suse-edge-charts targetNamespace: rancher-turtles-system createNamespace: true @@ -318,7 +318,7 @@ kubernetes: installationNamespace: kube-system valuesFile: metal3.yaml - name: rancher-turtles-chart - version: 0.3.2 + version: 0.3.3 repositoryName: suse-edge-charts targetNamespace: rancher-turtles-system createNamespace: true @@ -1029,14 +1029,14 @@ kubernetes: installationNamespace: kube-system valuesFile: metal3.yaml - name: rancher-turtles-chart - version: 0.3.2 + version: 0.3.3 repositoryName: suse-edge-charts targetNamespace: rancher-turtles-system createNamespace: true installationNamespace: 
kube-system valuesFile: turtles.yaml - name: rancher-turtles-airgap-resources-chart - version: 0.3.2 + version: 0.3.3 repositoryName: suse-edge-charts targetNamespace: rancher-turtles-system createNamespace: true From 77c7e35628b6cd412e4e93969ddd076f54b7cc15 Mon Sep 17 00:00:00 2001 From: Steven Hardy Date: Thu, 14 Nov 2024 12:06:58 +0000 Subject: [PATCH 8/8] Update airgap image list for 3.1.1 Co-authored-by: Atanas Dinov (cherry picked from commit 1a314e9c7278bcc4d4c8f23cad90fa63f014e587) --- .../guides/air-gapped-eib-deployments.adoc | 63 +++++++++-------- asciidoc/product/atip-management-cluster.adoc | 67 ++++++++++--------- 2 files changed, 70 insertions(+), 60 deletions(-) diff --git a/asciidoc/guides/air-gapped-eib-deployments.adoc b/asciidoc/guides/air-gapped-eib-deployments.adoc index e004e73f..1c5182b1 100644 --- a/asciidoc/guides/air-gapped-eib-deployments.adoc +++ b/asciidoc/guides/air-gapped-eib-deployments.adoc @@ -243,56 +243,61 @@ kubernetes: url: https://charts.rancher.com/server-charts/prime embeddedArtifactRegistry: images: - - name: registry.rancher.com/rancher/backup-restore-operator:v5.0.1 - - name: registry.rancher.com/rancher/calico-cni:v3.28.0-rancher1 - - name: registry.rancher.com/rancher/cis-operator:v1.0.14 + - name: registry.rancher.com/rancher/backup-restore-operator:v5.0.2 + - name: registry.rancher.com/rancher/calico-cni:v3.28.1-rancher1 + - name: registry.rancher.com/rancher/cis-operator:v1.0.16 - name: registry.rancher.com/rancher/flannel-cni:v1.4.1-rancher1 - - name: registry.rancher.com/rancher/fleet-agent:v0.10.1 - - name: registry.rancher.com/rancher/fleet:v0.10.1 - - name: registry.rancher.com/rancher/hardened-addon-resizer:1.8.20-build20240410 - - name: registry.rancher.com/rancher/hardened-calico:v3.28.0-build20240625 - - name: registry.rancher.com/rancher/hardened-cluster-autoscaler:v1.8.10-build20240124 - - name: registry.rancher.com/rancher/hardened-cni-plugins:v1.4.1-build20240430 - - name: registry.rancher.com/rancher/hardened-coredns:v1.11.1-build20240305 - - name: registry.rancher.com/rancher/hardened-dns-node-cache:1.22.28-build20240125 - - name: registry.rancher.com/rancher/hardened-etcd:v3.5.13-k3s1-build20240531 - - name: registry.rancher.com/rancher/hardened-flannel:v0.25.4-build20240610 - - name: registry.rancher.com/rancher/hardened-k8s-metrics-server:v0.7.1-build20240401 - - name: registry.rancher.com/rancher/hardened-kubernetes:v1.30.5-rke2r1-build20240717 - - name: registry.rancher.com/rancher/hardened-multus-cni:v4.0.2-build20240612 - - name: registry.rancher.com/rancher/hardened-node-feature-discovery:v0.15.4-build20240513 - - name: registry.rancher.com/rancher/hardened-whereabouts:v0.7.0-build20240429 + - name: registry.rancher.com/rancher/fleet-agent:v0.10.4 + - name: registry.rancher.com/rancher/fleet:v0.10.4 + - name: registry.rancher.com/rancher/hardened-addon-resizer:1.8.20-build20240910 + - name: registry.rancher.com/rancher/hardened-calico:v3.28.1-build20240911 + - name: registry.rancher.com/rancher/hardened-cluster-autoscaler:v1.8.11-build20240910 + - name: registry.rancher.com/rancher/hardened-cni-plugins:v1.5.1-build20240910 + - name: registry.rancher.com/rancher/hardened-coredns:v1.11.1-build20240910 + - name: registry.rancher.com/rancher/hardened-dns-node-cache:1.23.1-build20240910 + - name: registry.rancher.com/rancher/hardened-etcd:v3.5.13-k3s1-build20240910 + - name: registry.rancher.com/rancher/hardened-flannel:v0.25.6-build20240910 + - name: 
registry.rancher.com/rancher/hardened-k8s-metrics-server:v0.7.1-build20240910 + - name: registry.rancher.com/rancher/hardened-kubernetes:v1.30.5-rke2r1-build20240912 + - name: registry.rancher.com/rancher/hardened-multus-cni:v4.1.0-build20240910 + - name: registry.rancher.com/rancher/hardened-node-feature-discovery:v0.15.6-build20240822 + - name: registry.rancher.com/rancher/hardened-whereabouts:v0.8.0-build20240910 - name: registry.rancher.com/rancher/helm-project-operator:v0.2.1 - name: registry.rancher.com/rancher/k3s-upgrade:v1.30.5-k3s1 - - name: registry.rancher.com/rancher/klipper-helm:v0.8.4-build20240523 - - name: registry.rancher.com/rancher/klipper-lb:v0.4.7 + - name: registry.rancher.com/rancher/klipper-helm:v0.9.2-build20240828 + - name: registry.rancher.com/rancher/klipper-lb:v0.4.9 - name: registry.rancher.com/rancher/kube-api-auth:v0.2.2 - name: registry.rancher.com/rancher/kubectl:v1.29.7 - name: registry.rancher.com/rancher/local-path-provisioner:v0.0.28 - - name: registry.rancher.com/rancher/machine:v0.15.0-rancher116 + - name: registry.rancher.com/rancher/machine:v0.15.0-rancher118 - name: registry.rancher.com/rancher/mirrored-cluster-api-controller:v1.7.3 - - name: registry.rancher.com/rancher/nginx-ingress-controller:v1.10.1-hardened1 + - name: registry.rancher.com/rancher/nginx-ingress-controller:v1.10.4-hardened3 - name: registry.rancher.com/rancher/prometheus-federator:v0.3.4 - name: registry.rancher.com/rancher/pushprox-client:v0.1.3-rancher2-client - name: registry.rancher.com/rancher/pushprox-proxy:v0.1.3-rancher2-proxy - name: registry.rancher.com/rancher/rancher-agent:v2.9.3 - name: registry.rancher.com/rancher/rancher-csp-adapter:v4.0.0 - - name: registry.rancher.com/rancher/rancher-webhook:v0.5.1 + - name: registry.rancher.com/rancher/rancher-webhook:v0.5.3 - name: registry.rancher.com/rancher/rancher:v2.9.3 - - name: registry.rancher.com/rancher/rke-tools:v0.1.100 - - name: registry.rancher.com/rancher/rke2-cloud-provider:v1.29.3-build20240515 + - name: registry.rancher.com/rancher/rke-tools:v0.1.103 + - name: registry.rancher.com/rancher/rke2-cloud-provider:v1.30.4-build20240910 - name: registry.rancher.com/rancher/rke2-runtime:v1.30.5-rke2r1 - name: registry.rancher.com/rancher/rke2-upgrade:v1.30.5-rke2r1 - - name: registry.rancher.com/rancher/security-scan:v0.2.16 - - name: registry.rancher.com/rancher/shell:v0.2.1 + - name: registry.rancher.com/rancher/security-scan:v0.2.18 + - name: registry.rancher.com/rancher/shell:v0.2.2 - name: registry.rancher.com/rancher/system-agent-installer-k3s:v1.30.5-k3s1 - name: registry.rancher.com/rancher/system-agent-installer-rke2:v1.30.5-rke2r1 - - name: registry.rancher.com/rancher/system-agent:v0.3.8-suc + - name: registry.rancher.com/rancher/system-agent:v0.3.10-suc - name: registry.rancher.com/rancher/system-upgrade-controller:v0.13.4 - - name: registry.rancher.com/rancher/ui-plugin-catalog:2.0.1 + - name: registry.rancher.com/rancher/ui-plugin-catalog:2.1.0 - name: registry.rancher.com/rancher/kubectl:v1.20.2 - name: registry.rancher.com/rancher/kubectl:v1.29.2 - name: registry.rancher.com/rancher/shell:v0.1.24 + - name: registry.rancher.com/rancher/mirrored-ingress-nginx-kube-webhook-certgen:v1.4.1 + - name: registry.rancher.com/rancher/mirrored-ingress-nginx-kube-webhook-certgen:v1.4.3 + - name: registry.rancher.com/rancher/mirrored-ingress-nginx-kube-webhook-certgen:v20230312-helm-chart-4.5.2-28-g66a760794 + - name: registry.rancher.com/rancher/mirrored-ingress-nginx-kube-webhook-certgen:v20231011-8b53cabe0 + - 
name: registry.rancher.com/rancher/mirrored-ingress-nginx-kube-webhook-certgen:v20231226-1a7112e06 ---- As compared to the full list of 600+ container images, this slimmed down version only contains ~60 which makes the new CRB image only about 7GB. diff --git a/asciidoc/product/atip-management-cluster.adoc b/asciidoc/product/atip-management-cluster.adoc index 5d83f3ea..fe535919 100644 --- a/asciidoc/product/atip-management-cluster.adoc +++ b/asciidoc/product/atip-management-cluster.adoc @@ -1085,56 +1085,61 @@ kubernetes: # type: server embeddedArtifactRegistry: images: - - name: registry.rancher.com/rancher/backup-restore-operator:v5.0.1 - - name: registry.rancher.com/rancher/calico-cni:v3.28.0-rancher1 - - name: registry.rancher.com/rancher/cis-operator:v1.0.14 + - name: registry.rancher.com/rancher/backup-restore-operator:v5.0.2 + - name: registry.rancher.com/rancher/calico-cni:v3.28.1-rancher1 + - name: registry.rancher.com/rancher/cis-operator:v1.0.16 - name: registry.rancher.com/rancher/flannel-cni:v1.4.1-rancher1 - - name: registry.rancher.com/rancher/fleet-agent:v0.10.1 - - name: registry.rancher.com/rancher/fleet:v0.10.1 - - name: registry.rancher.com/rancher/hardened-addon-resizer:1.8.20-build20240410 - - name: registry.rancher.com/rancher/hardened-calico:v3.28.0-build20240625 - - name: registry.rancher.com/rancher/hardened-cluster-autoscaler:v1.8.10-build20240124 - - name: registry.rancher.com/rancher/hardened-cni-plugins:v1.4.1-build20240430 - - name: registry.rancher.com/rancher/hardened-coredns:v1.11.1-build20240305 - - name: registry.rancher.com/rancher/hardened-dns-node-cache:1.22.28-build20240125 - - name: registry.rancher.com/rancher/hardened-etcd:v3.5.13-k3s1-build20240531 - - name: registry.rancher.com/rancher/hardened-flannel:v0.25.4-build20240610 - - name: registry.rancher.com/rancher/hardened-k8s-metrics-server:v0.7.1-build20240401 - - name: registry.rancher.com/rancher/hardened-kubernetes:v1.30.5-rke2r1-build20240717 - - name: registry.rancher.com/rancher/hardened-multus-cni:v4.0.2-build20240612 - - name: registry.rancher.com/rancher/hardened-node-feature-discovery:v0.15.4-build20240513 - - name: registry.rancher.com/rancher/hardened-whereabouts:v0.7.0-build20240429 + - name: registry.rancher.com/rancher/fleet-agent:v0.10.4 + - name: registry.rancher.com/rancher/fleet:v0.10.4 + - name: registry.rancher.com/rancher/hardened-addon-resizer:1.8.20-build20240910 + - name: registry.rancher.com/rancher/hardened-calico:v3.28.1-build20240911 + - name: registry.rancher.com/rancher/hardened-cluster-autoscaler:v1.8.11-build20240910 + - name: registry.rancher.com/rancher/hardened-cni-plugins:v1.5.1-build20240910 + - name: registry.rancher.com/rancher/hardened-coredns:v1.11.1-build20240910 + - name: registry.rancher.com/rancher/hardened-dns-node-cache:1.23.1-build20240910 + - name: registry.rancher.com/rancher/hardened-etcd:v3.5.13-k3s1-build20240910 + - name: registry.rancher.com/rancher/hardened-flannel:v0.25.6-build20240910 + - name: registry.rancher.com/rancher/hardened-k8s-metrics-server:v0.7.1-build20240910 + - name: registry.rancher.com/rancher/hardened-kubernetes:v1.30.5-rke2r1-build20240912 + - name: registry.rancher.com/rancher/hardened-multus-cni:v4.1.0-build20240910 + - name: registry.rancher.com/rancher/hardened-node-feature-discovery:v0.15.6-build20240822 + - name: registry.rancher.com/rancher/hardened-whereabouts:v0.8.0-build20240910 - name: registry.rancher.com/rancher/helm-project-operator:v0.2.1 - name: registry.rancher.com/rancher/k3s-upgrade:v1.30.5-k3s1 - - 
name: registry.rancher.com/rancher/klipper-helm:v0.8.4-build20240523 - - name: registry.rancher.com/rancher/klipper-lb:v0.4.7 + - name: registry.rancher.com/rancher/klipper-helm:v0.9.2-build20240828 + - name: registry.rancher.com/rancher/klipper-lb:v0.4.9 - name: registry.rancher.com/rancher/kube-api-auth:v0.2.2 - name: registry.rancher.com/rancher/kubectl:v1.29.7 - name: registry.rancher.com/rancher/local-path-provisioner:v0.0.28 - - name: registry.rancher.com/rancher/machine:v0.15.0-rancher116 + - name: registry.rancher.com/rancher/machine:v0.15.0-rancher118 - name: registry.rancher.com/rancher/mirrored-cluster-api-controller:v1.7.3 - - name: registry.rancher.com/rancher/nginx-ingress-controller:v1.10.1-hardened1 + - name: registry.rancher.com/rancher/nginx-ingress-controller:v1.10.4-hardened3 - name: registry.rancher.com/rancher/prometheus-federator:v0.3.4 - name: registry.rancher.com/rancher/pushprox-client:v0.1.3-rancher2-client - name: registry.rancher.com/rancher/pushprox-proxy:v0.1.3-rancher2-proxy - name: registry.rancher.com/rancher/rancher-agent:v2.9.3 - name: registry.rancher.com/rancher/rancher-csp-adapter:v4.0.0 - - name: registry.rancher.com/rancher/rancher-webhook:v0.5.1 + - name: registry.rancher.com/rancher/rancher-webhook:v0.5.3 - name: registry.rancher.com/rancher/rancher:v2.9.3 - - name: registry.rancher.com/rancher/rke-tools:v0.1.100 - - name: registry.rancher.com/rancher/rke2-cloud-provider:v1.29.3-build20240515 + - name: registry.rancher.com/rancher/rke-tools:v0.1.103 + - name: registry.rancher.com/rancher/rke2-cloud-provider:v1.30.4-build20240910 - name: registry.rancher.com/rancher/rke2-runtime:v1.30.5-rke2r1 - name: registry.rancher.com/rancher/rke2-upgrade:v1.30.5-rke2r1 - - name: registry.rancher.com/rancher/security-scan:v0.2.16 - - name: registry.rancher.com/rancher/shell:v0.2.1 + - name: registry.rancher.com/rancher/security-scan:v0.2.18 + - name: registry.rancher.com/rancher/shell:v0.2.2 - name: registry.rancher.com/rancher/system-agent-installer-k3s:v1.30.5-k3s1 - name: registry.rancher.com/rancher/system-agent-installer-rke2:v1.30.5-rke2r1 - - name: registry.rancher.com/rancher/system-agent:v0.3.8-suc + - name: registry.rancher.com/rancher/system-agent:v0.3.10-suc - name: registry.rancher.com/rancher/system-upgrade-controller:v0.13.4 - - name: registry.rancher.com/rancher/ui-plugin-catalog:2.0.1 + - name: registry.rancher.com/rancher/ui-plugin-catalog:2.1.0 - name: registry.rancher.com/rancher/kubectl:v1.20.2 - name: registry.rancher.com/rancher/kubectl:v1.29.2 - name: registry.rancher.com/rancher/shell:v0.1.24 + - name: registry.rancher.com/rancher/mirrored-ingress-nginx-kube-webhook-certgen:v1.4.1 + - name: registry.rancher.com/rancher/mirrored-ingress-nginx-kube-webhook-certgen:v1.4.3 + - name: registry.rancher.com/rancher/mirrored-ingress-nginx-kube-webhook-certgen:v20230312-helm-chart-4.5.2-28-g66a760794 + - name: registry.rancher.com/rancher/mirrored-ingress-nginx-kube-webhook-certgen:v20231011-8b53cabe0 + - name: registry.rancher.com/rancher/mirrored-ingress-nginx-kube-webhook-certgen:v20231226-1a7112e06 - name: registry.suse.com/rancher/mirrored-longhornio-csi-attacher:v4.6.1 - name: registry.suse.com/rancher/mirrored-longhornio-csi-provisioner:v4.0.1 - name: registry.suse.com/rancher/mirrored-longhornio-csi-resizer:v1.11.1 @@ -1150,8 +1155,8 @@ embeddedArtifactRegistry: - name: registry.suse.com/rancher/mirrored-longhornio-longhorn-ui:v1.7.1 - name: registry.suse.com/rancher/mirrored-longhornio-support-bundle-kit:v0.0.42 - name: 
registry.suse.com/rancher/mirrored-longhornio-longhorn-cli:v1.7.1 - - name: registry.suse.com/edge/3.1/cluster-api-provider-rke2-bootstrap:v0.7.0 - - name: registry.suse.com/edge/3.1/cluster-api-provider-rke2-controlplane:v0.7.0 + - name: registry.suse.com/edge/3.1/cluster-api-provider-rke2-bootstrap:v0.7.1 + - name: registry.suse.com/edge/3.1/cluster-api-provider-rke2-controlplane:v0.7.1 - name: registry.suse.com/edge/3.1/cluster-api-controller:v1.7.5 - name: registry.suse.com/edge/3.1/cluster-api-provider-metal3:v1.7.1 - name: registry.suse.com/edge/3.1/ip-address-manager:v1.7.1