[release-3.1] 3.1.1 backports and release notes #494

Merged
merged 8 commits on Nov 15, 2024
2 changes: 1 addition & 1 deletion asciidoc/components/longhorn.adoc
@@ -264,7 +264,7 @@ image:
   arch: x86_64
   outputImageName: eib-image.iso
 kubernetes:
-  version: v1.30.3+rke2r1
+  version: v1.30.5+rke2r1
   helm:
     charts:
       - name: longhorn
6 changes: 3 additions & 3 deletions asciidoc/components/virtualization.adoc
@@ -66,9 +66,9 @@ This should show something similar to the following:
[,shell]
----
 NAME                   STATUS   ROLES                       AGE     VERSION
-node1.edge.rdo.wales   Ready    control-plane,etcd,master   4h20m   v1.30.3+rke2r1
-node2.edge.rdo.wales   Ready    control-plane,etcd,master   4h15m   v1.30.3+rke2r1
-node3.edge.rdo.wales   Ready    control-plane,etcd,master   4h15m   v1.30.3+rke2r1
+node1.edge.rdo.wales   Ready    control-plane,etcd,master   4h20m   v1.30.5+rke2r1
+node2.edge.rdo.wales   Ready    control-plane,etcd,master   4h15m   v1.30.5+rke2r1
+node3.edge.rdo.wales   Ready    control-plane,etcd,master   4h15m   v1.30.5+rke2r1
----

Now you can proceed to install the *KubeVirt* and *Containerized Data Importer (CDI)* Helm charts:
126 changes: 80 additions & 46 deletions asciidoc/day2/downstream-cluster-helm.adoc

Large diffs are not rendered by default.

107 changes: 65 additions & 42 deletions asciidoc/day2/downstream-cluster-k8s.adoc

Large diffs are not rendered by default.

103 changes: 65 additions & 38 deletions asciidoc/day2/downstream-cluster-os.adoc
@@ -26,9 +26,9 @@ A different link:https://www.freedesktop.org/software/systemd/man/latest/systemd

** First a link:https://en.opensuse.org/SDB:Zypper_usage#Updating_packages[normal package upgrade]. Done in order to ensure that all packages are with the latest version before the migration. Mitigating any failures related to old package version.

-** After that it proceeds with the OS migration process by utilising the `zypper migration` command.
+** After that it proceeds with the OS migration process by utilizing the `zypper migration` command.

-Shipped through a *SUC plan*, which should be located on each *downstream cluster* that is in need of a OS upgrade.
+Shipped through a *SUC plan*, which should be located on each *downstream cluster* that is in need of an OS upgrade.

=== Requirements

@@ -59,7 +59,7 @@ An example of defining custom tolerations for the *control-plane* SUC Plan, would be:
 apiVersion: upgrade.cattle.io/v1
 kind: Plan
 metadata:
-  name: cp-os-upgrade-edge-3XX
+  name: os-upgrade-control-plane
 spec:
   ...
   tolerations:
@@ -85,7 +85,7 @@ spec:

_Air-gapped:_

-. *Mirror SUSE RPM repositories* - OS RPM repositories should be locally mirrored so that `os-pkg-update.service/os-migration.service` can have access to them. This can be achieved using link:https://github.com/SUSE/rmt[RMT].
+. *Mirror SUSE RPM repositories* - OS RPM repositories should be locally mirrored so that `os-pkg-update.service/os-migration.service` can have access to them. This can be achieved by using either link:https://documentation.suse.com/sles/15-SP6/html/SLES-all/book-rmt.html[RMT] or link:https://documentation.suse.com/suma/5.0/en/suse-manager/index.html[SUMA].

=== Update procedure

@@ -94,7 +94,25 @@ _Air-gapped:_
This section assumes you will be deploying the `OS upgrade` *SUC Plan* using <<components-fleet,Fleet>>. If you intend to deploy the *SUC Plan* using a different approach, refer to <<os-upgrade-suc-plan-deployment-third-party>>.
====

-The `OS upgrade procedure` revolves around deploying *SUC Plans* to downstream clusters. These plans then hold information about how and on which nodes to deploy the `os-pkg-update.service/os-migration.service`. For information regarding the structure of a *SUC Plan*, refer to the https://github.com/rancher/system-upgrade-controller?tab=readme-ov-file#example-plans[upstream] documentation.
+[IMPORTANT]
+====
+For environments previously upgraded using this procedure, users should ensure that *one* of the following steps is completed:
+
+* `Remove any previously deployed SUC Plans related to older Edge release versions from the downstream cluster` - can be done by removing the desired _downstream_ cluster from the existing `GitRepo/Bundle` target configuration, or by removing the `GitRepo/Bundle` resource altogether.
+
+* `Reuse the existing GitRepo/Bundle resource` - can be done by pointing the resource's revision to a new tag that holds the correct fleets for the desired `suse-edge/fleet-examples` link:https://github.com/suse-edge/fleet-examples/releases[release].
+
+This is done in order to avoid clashes between `SUC Plans` for older Edge release versions.
+
+If users attempt to upgrade while there are existing `SUC Plans` on the _downstream_ cluster, they will see the following Fleet error:
+
+[,bash]
+----
+Not installed: Unable to continue with install: Plan <plan_name> in namespace <plan_namespace> exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error..
+----
+====
+
+The `OS upgrade procedure` revolves around deploying *SUC Plans* to downstream clusters. These plans hold information about how and on which nodes to deploy the `os-pkg-update.service/os-migration.service`. For information regarding the structure of a *SUC Plan*, refer to the https://github.com/rancher/system-upgrade-controller?tab=readme-ov-file#example-plans[upstream] documentation.
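For orientation, a minimal *SUC Plan* of the kind these fleets ship might look as follows. This is an illustrative sketch based on the upstream Plan CRD, not the exact manifest from `fleet-examples`; the image, selector, secret, and service-account names are placeholders:

```yaml
# Illustrative SUC Plan sketch (placeholder values, not the shipped fleet manifest).
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: os-upgrade-control-plane
  namespace: cattle-system
spec:
  concurrency: 1                      # upgrade one node at a time
  cordon: true                        # cordon each node before upgrading it
  nodeSelector:
    matchExpressions:
      - key: node-role.kubernetes.io/control-plane
        operator: In
        values: ["true"]
  serviceAccountName: system-upgrade-controller   # placeholder SA name
  secrets:
    - name: os-upgrade-script         # ships the upgrade.sh script
      path: /host/run/system-upgrade/secrets/os-upgrade
  version: "3.1.1"                    # placeholder release marker
  upgrade:
    image: registry.suse.com/bci/bci-base   # placeholder upgrade image
    command: ["chroot", "/host"]
    args: ["sh", "/run/system-upgrade/secrets/os-upgrade/upgrade.sh"]
```

SUC watches for Plans in its namespace and runs the `upgrade` container on every node matched by `nodeSelector`, one node at a time here because of `concurrency: 1`.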

`OS upgrade` SUC Plans are shipped in the following ways:

@@ -109,7 +127,7 @@ For a full overview of what happens during the _upgrade procedure_, refer to the
[#os-update-overview]
==== Overview

-This section aims to describe the full workflow that the *_OS upgrade process_* goes throught from start to finish.
+This section aims to describe the full workflow that the *_OS upgrade process_* goes through from start to finish.

.OS upgrade workflow
image::day2_os_pkg_update_diagram.png[]
@@ -130,9 +148,9 @@ OS upgrade steps:

. `Fleet` then proceeds to deploy the `Kubernetes resources` from this *Bundle* to all the targeted `downstream clusters`. In the context of `OS upgrades`, Fleet deploys the following resources from the *Bundle*:

-.. *Agent SUC Plan* - instructs *SUC* on how to do an OS upgrade on cluster *_agent_* nodes. It is *not* interpreted if the cluster consists only from _control-plane_ nodes. It executes after all control-plane *SUC* plans have completed successfully.
+.. *Worker SUC Plan* - instructs *SUC* on how to do an OS upgrade on cluster *_worker_* nodes. It is *not* interpreted if the cluster consists only of _control-plane_ nodes. It executes after all control-plane *SUC* plans have completed successfully.

-.. *Control-plane SUC Plan* - instructs *SUC* on how to do an OS upgrade on cluster *_control-plane_* nodes.
+.. *Control Plane SUC Plan* - instructs *SUC* on how to do an OS upgrade on cluster *_control-plane_* nodes.

.. *Script Secret* - referenced in each *SUC Plan*; ships an `upgrade.sh` script responsible for creating the `os-pkg-update.service/os-migration.service` which will do the actual OS upgrade.

@@ -155,15 +173,15 @@ The above resources will be deployed in the `cattle-system` namespace of each do

... For `os-pkg-update.service`:

-.... Update all package version on the node OS, by running `transactional-update cleanup up`
+.... Update all package versions on the node OS, by running `transactional-update cleanup up`

.... After a successful `transactional-update`, schedule a system *reboot* so that the package version updates can take effect

... For `os-migration.service`:

-.... Update all package version on the node OS, by running `transactional-update cleanup up`. This is done to ensure that no old package versions causes an OS migration error.
+.... Update all package versions on the node OS, by running `transactional-update cleanup up`. This is done to ensure that no old package versions cause an OS migration error.

-.... Proceed to migrate the OS to the desired values. Migration is done by utilising the `zypper migration` command.
+.... Proceed to migrate the OS to the desired values. Migration is done by utilizing the `zypper migration` command.

.... Schedule a system *reboot* so that the migration can take effect
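The migration steps above can be pictured as a oneshot systemd unit along these lines. This is a hedged sketch, not the literal unit that `upgrade.sh` writes; the unit name, paths, and the exact `transactional-update` invocations are assumptions:

```ini
# Sketch of an os-migration.service-style unit (illustrative, not the shipped unit).
[Unit]
Description=OS migration to the target SUSE release
After=network-online.target

[Service]
Type=oneshot
# Refresh all packages first so stale versions cannot break the migration.
ExecStart=/usr/sbin/transactional-update cleanup up
# Then migrate the OS inside the same snapshot (flags are an assumption).
ExecStart=/usr/sbin/transactional-update --continue migration
# Reboot so the new snapshot takes effect.
ExecStartPost=/usr/bin/systemctl reboot
```

`Type=oneshot` is what allows the two sequential `ExecStart` lines; `--continue` makes the second `transactional-update` call build on the snapshot created by the first instead of starting a fresh one.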

@@ -192,37 +210,32 @@ Once deployed, to monitor the OS upgrade process of the nodes of your targeted c
[#os-upgrade-suc-plan-deployment-git-repo-rancher]
===== GitRepo creation - Rancher UI

-. In the upper left corner, *☰ -> Continuous Delivery*
+To create a `GitRepo` resource through the Rancher UI, follow their official link:https://ranchermanager.docs.rancher.com/integrations-in-rancher/fleet/overview#accessing-fleet-in-the-rancher-ui[documentation].

-. Go to *Git Repos -> Add Repository*
+The Edge team maintains a ready-to-use link:https://github.com/suse-edge/fleet-examples/tree/release-3.1.1/fleets/day2/system-upgrade-controller-plans/os-upgrade[fleet] that users can add as a `path` to their GitRepo resource.

-If you use the `suse-edge/fleet-examples` repository:

-. *Repository URL* - `https://github.com/suse-edge/fleet-examples.git`

-. *Watch -> Revision* - choose a link:https://github.com/suse-edge/fleet-examples/releases[release] tag for the `suse-edge/fleet-examples` repository that you wish to use

-. Under *Paths* add the path to the OS upgrade Fleets that you wish to use - `fleets/day2/system-upgrade-controller-plans/os-upgrade`
+[IMPORTANT]
+====
+Always use this fleet from a valid Edge link:https://github.com/suse-edge/fleet-examples/releases[release] tag.
+====

-. Select *Next* to move to the *target* configuration section. *Only select clusters whose node's packages you wish to upgrade*
+For use-cases where no custom tolerations need to be included in the `SUC plans` that the fleet ships, users can refer directly to the `os-upgrade` fleet from the `suse-edge/fleet-examples` repository.

-. *Create*
+In cases where custom tolerations are needed, users should refer to the `os-upgrade` fleet from a separate repository, allowing them to add the tolerations to the SUC plans as required.

-Alternatively, if you decide to use your own repository to host these files, you would need to provide your repo data above.
+An example of how a `GitRepo` can be configured to use the fleet from the `suse-edge/fleet-examples` repository can be viewed link:https://github.com/suse-edge/fleet-examples/blob/release-3.1.1/gitrepos/day2/os-upgrade-gitrepo.yaml[here].

[#os-upgrade-suc-plan-deployment-git-repo-manual]
===== GitRepo creation - manual

-. Choose the desired Edge link:https://github.com/suse-edge/fleet-examples/releases[release] tag that you wish to apply the OS *SUC update Plans* from (referenced below as `$\{REVISION\}`).

. Pull the *GitRepo* resource:
+
[,bash]
----
-curl -o os-update-gitrepo.yaml https://raw.githubusercontent.com/suse-edge/fleet-examples/${REVISION}/gitrepos/day2/os-update-gitrepo.yaml
+curl -o os-upgrade-gitrepo.yaml https://raw.githubusercontent.com/suse-edge/fleet-examples/refs/tags/release-3.1.1/gitrepos/day2/os-upgrade-gitrepo.yaml
----

-. Edit the *GitRepo* configuration, under `spec.targets` specify your desired target list. By default the `GitRepo` resources from the `suse-edge/fleet-examples` are *NOT* mapped to any down stream clusters.
+. Edit the *GitRepo* configuration, under `spec.targets` specify your desired target list. By default the `GitRepo` resources from the `suse-edge/fleet-examples` are *NOT* mapped to any downstream clusters.

** To match all clusters change the default `GitRepo` *target* to:
+
@@ -240,7 +253,7 @@ spec:
+
[,bash]
----
-kubectl apply -f os-update-gitrepo.yaml
+kubectl apply -f os-upgrade-gitrepo.yaml
----

. View the created *GitRepo* resource under the `fleet-default` namespace:
@@ -251,7 +264,7 @@ kubectl get gitrepo os-upgrade -n fleet-default

 # Example output
 NAME         REPO                                              COMMIT          BUNDLEDEPLOYMENTS-READY   STATUS
-os-upgrade   https://github.com/suse-edge/fleet-examples.git   release-3.1.0   0/0
+os-upgrade   https://github.com/suse-edge/fleet-examples.git   release-3.1.1   0/0
----
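When scripting the pull step above, the raw URL can be derived from the release tag instead of being hard-coded. A small helper sketch; the tag value is an example, substitute the Edge release you actually target:

```shell
# Build the raw.githubusercontent.com URL for a pinned fleet-examples tag.
TAG="release-3.1.1"   # example tag; substitute your Edge release
BASE="https://raw.githubusercontent.com/suse-edge/fleet-examples/refs/tags"
URL="${BASE}/${TAG}/gitrepos/day2/os-upgrade-gitrepo.yaml"
echo "${URL}"
# Then fetch it with: curl -o os-upgrade-gitrepo.yaml "${URL}"
```

Keeping the tag in one variable makes it harder to mix fleets from different Edge releases across the GitRepo and Bundle steps.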

[#os-upgrade-suc-plan-deployment-bundle]
@@ -268,17 +281,31 @@ Once deployed, to monitor the OS upgrade process of the nodes of your targeted c
[#os-upgrade-suc-plan-deployment-bundle-rancher]
===== Bundle creation - Rancher UI

The Edge team maintains a ready-to-use link:https://github.com/suse-edge/fleet-examples/blob/release-3.1.1/bundles/day2/system-upgrade-controller-plans/os-upgrade/os-upgrade-bundle.yaml[bundle] that can be used in the steps below.

[IMPORTANT]
====
Always use this bundle from a valid Edge link:https://github.com/suse-edge/fleet-examples/releases[release] tag.
====

To create a bundle through Rancher's UI:

. In the upper left corner, click *☰ -> Continuous Delivery*

. Go to *Advanced* > *Bundles*

. Select *Create from YAML*

. From here you can create the Bundle in one of the following ways:
+
[NOTE]
====
There might be use-cases where you need to include custom tolerations in the `SUC plans` that the bundle ships. Make sure to include those tolerations in the bundle generated by the steps below.
====

-.. By manually copying the *Bundle* content to the *Create from YAML* page. Content can be retrieved from https://raw.githubusercontent.com/suse-edge/fleet-examples/$\{REVISION\}/bundles/day2/system-upgrade-controller-plans/os-upgrade/os-upgrade-bundle.yaml, where `$\{REVISION\}` is the Edge link:https://github.com/suse-edge/fleet-examples/releases[release] that you are using
+.. By manually copying the link:https://raw.githubusercontent.com/suse-edge/fleet-examples/refs/tags/release-3.1.1/bundles/day2/system-upgrade-controller-plans/os-upgrade/os-upgrade-bundle.yaml[bundle content] from `suse-edge/fleet-examples` to the *Create from YAML* page.

-.. By cloning the link:https://github.com/suse-edge/fleet-examples.git[suse-edge/fleet-examples] repository to the desired link:https://github.com/suse-edge/fleet-examples/releases[release] tag and selecting the *Read from File* option in the *Create from YAML* page. From there, navigate to `bundles/day2/system-upgrade-controller-plans/os-upgrade` directory and select `os-upgrade-bundle.yaml`. This will auto-populate the *Create from YAML* page with the Bundle content.
+.. By cloning the link:https://github.com/suse-edge/fleet-examples.git[suse-edge/fleet-examples] repository from the desired link:https://github.com/suse-edge/fleet-examples/releases[release] tag and selecting the *Read from File* option in the *Create from YAML* page. From there, navigate to the bundle location (`bundles/day2/system-upgrade-controller-plans/os-upgrade`) and select the bundle file. This will auto-populate the *Create from YAML* page with the bundle content.

. Change the *target* clusters for the `Bundle`:

@@ -298,16 +325,14 @@ spec:
[#os-upgrade-suc-plan-deployment-bundle-manual]
===== Bundle creation - manual

-. Choose the desired Edge link:https://github.com/suse-edge/fleet-examples/releases[release] tag that you wish to apply the OS upgrade *SUC Plans* from (referenced below as `$\{REVISION\}`).

. Pull the *Bundle* resource:
+
[,bash]
----
-curl -o os-upgrade-bundle.yaml https://raw.githubusercontent.com/suse-edge/fleet-examples/${REVISION}/bundles/day2/system-upgrade-controller-plans/os-upgrade/os-upgrade-bundle.yaml
+curl -o os-upgrade-bundle.yaml https://raw.githubusercontent.com/suse-edge/fleet-examples/refs/tags/release-3.1.1/bundles/day2/system-upgrade-controller-plans/os-upgrade/os-upgrade-bundle.yaml
----

-. Edit the `Bundle` *target* configurations, under `spec.targets` provide your desired target list. By default the `Bundle` resources from the `suse-edge/fleet-examples` are *NOT* mapped to any down stream clusters.
+. Edit the `Bundle` *target* configurations, under `spec.targets` provide your desired target list. By default the `Bundle` resources from the `suse-edge/fleet-examples` are *NOT* mapped to any downstream clusters.

** To match all clusters change the default `Bundle` *target* to:
+
@@ -343,11 +368,13 @@ To get the OS upgrade resources that you need, first determine the Edge link:ht

After that, resources can be found at `fleets/day2/system-upgrade-controller-plans/os-upgrade`, where:

-* `plan-control-plane.yaml` - `system-upgrade-controller` Plan resource for *control-plane* nodes
+* `plan-control-plane.yaml` - `system-upgrade-controller` Plan resource for *control-plane* nodes.

+* `plan-worker.yaml` - `system-upgrade-controller` Plan resource for *worker* nodes.

-* `plan-agent.yaml` - `system-upgrade-controller` Plan resource for *agent* nodes
+* `secret.yaml` - secret that ships the `upgrade.sh` script.

-* `secret.yaml` - secret that ships a script that creates the `os-pkg-update.service/os-migration.service` link:https://www.freedesktop.org/software/systemd/man/latest/systemd.service.html[systemd.service]
+* `config-map.yaml` - ConfigMap that provides upgrade configurations that are consumed by the `upgrade.sh` script.

[IMPORTANT]
====
12 changes: 7 additions & 5 deletions asciidoc/day2/downstream-clusters-introduction.adoc
@@ -23,13 +23,13 @@ This section is meant to be a *starting point* for the `Day 2` operations docume
[#day2-downstream-components]
=== Components

-Below you can find a description of the default components that should be setup on either your `management cluster` or your `downstream clusters` so that you can successfully perform `Day 2` operations.
+Below you can find a description of the default components that should be set up on either your `management cluster` or your `downstream clusters` so that you can successfully perform `Day 2` operations.

==== Rancher

[NOTE]
====
-For use-cases where you want to utilise <<components-fleet,Fleet>> without Rancher, you can skip the Rancher component all together.
+For use-cases where you want to utilize <<components-fleet,Fleet>> without Rancher, you can skip the Rancher component altogether.
====

Responsible for the management of your `downstream clusters`. Should be deployed on your `management cluster`.
@@ -56,7 +56,9 @@ For use-cases, where a third party GitOps tool usage is desired, see:

. For `Kubernetes distribution upgrades` - <<k8s-upgrade-suc-plan-deployment-third-party>>

-. For `Helm chart upgrades` - retrieve the chart version supported by the desired Edge release from the <<release-notes>> page and populate the chart version and URL in your third party GitOps tool
+. For `EIB deployed Helm chart upgrades` - <<day2-helm-upgrade-eib-chart-third-party>>
+
+. For `non-EIB deployed Helm chart upgrades` - retrieve the chart version supported by the desired Edge release from the <<release-notes>> page and populate the chart version and URL in your third party GitOps tool
====

==== System Upgrade Controller (SUC)
@@ -83,15 +85,15 @@ Below you can find more information regarding what these resources do and for wh

A `GitRepo` is a <<components-fleet, Fleet>> resource that represents a Git repository from which `Fleet` can create `Bundles`. Each `Bundle` is created based on configuration paths defined inside of the `GitRepo` resource. For more information, see the https://fleet.rancher.io/gitrepo-add[GitRepo] documentation.

-In terms of `Day 2` operations `GitRepo` resources are normally used to deploy `SUC` or `SUC Plans` on *non air-gapped* environments that utilise a _Fleet GitOps_ approach.
+In terms of `Day 2` operations, `GitRepo` resources are normally used to deploy `SUC` or `SUC Plans` on *non air-gapped* environments that utilize a _Fleet GitOps_ approach.

Alternatively, `GitRepo` resources can also be used to deploy `SUC` or `SUC Plans` on *air-gapped* environments, *if you mirror your repository setup through a local git server*.
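Put together, a `GitRepo` that tracks an upgrade fleet could be sketched as below. The field names follow the Fleet `GitRepo` API; the revision and target label values are illustrative placeholders, not a prescribed configuration:

```yaml
# Sketch of a GitRepo tracking an upgrade fleet (placeholder values).
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: os-upgrade
  namespace: fleet-default
spec:
  repo: https://github.com/suse-edge/fleet-examples.git
  revision: release-3.1.1          # pin to a valid Edge release tag
  paths:
    - fleets/day2/system-upgrade-controller-plans/os-upgrade
  targets:
    - clusterSelector:
        matchLabels:
          os-upgrade: "true"       # placeholder label; match your own clusters
```

Fleet creates one `Bundle` per listed path and deploys it to every cluster matched under `targets`.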

==== Bundle

`Bundles` hold *raw* Kubernetes resources that will be deployed on the targeted cluster. Usually they are created from a `GitRepo` resource, but there are use-cases where they can be deployed manually. For more information refer to the https://fleet.rancher.io/bundle-add[Bundle] documentation.

-In terms of `Day 2` operations `Bundle` resources are normally used to deploy `SUC` or `SUC Plans` on *air-gapped* environments that do not use some form of _local GitOps_ procedure (e.g. a *local git server*).
+In terms of `Day 2` operations, `Bundle` resources are normally used to deploy `SUC` or `SUC Plans` on *air-gapped* environments that do not use some form of _local GitOps_ procedure (e.g. a *local git server*).

Alternatively, if your use-case does not allow for a _GitOps_ workflow (e.g. using a Git repository), *Bundle* resources could also be used to deploy `SUC` or `SUC Plans` on *non air-gapped* environments.
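The same *target* semantics apply when a `Bundle` is written by hand. A skeletal sketch, with the resource payload elided and a placeholder selector (field names follow the Fleet `Bundle` API, but the values here are illustrative only):

```yaml
# Skeletal Bundle sketch (payload elided, placeholder selector).
apiVersion: fleet.cattle.io/v1alpha1
kind: Bundle
metadata:
  name: os-upgrade
  namespace: fleet-default
spec:
  resources:
    - name: plan-control-plane.yaml
      content: |
        # raw SUC Plan manifest goes here (omitted)
  targets:
    - clusterSelector:
        matchLabels:
          os-upgrade: "true"   # placeholder label; match your own clusters
```

Unlike a `GitRepo`, the manifests travel inline under `spec.resources`, which is what makes a `Bundle` usable without any Git server at all.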
