modules/nw-cluster-mtu-change.adoc

ifndef::outposts[= Changing the cluster network MTU]
ifdef::outposts[= Changing the cluster network MTU to support AWS Outposts]

ifdef::outposts[]
During installation, the maximum transmission unit (MTU) for the cluster network is detected automatically based on the MTU of the primary network interface of nodes in the cluster. You might need to decrease the MTU value for the cluster network to support an AWS Outposts subnet.
endif::outposts[]
ifndef::outposts[As a cluster administrator, you can increase or decrease the maximum transmission unit (MTU) for your cluster.]
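Before choosing a new value, it can help to sanity-check the relationship between the hardware MTU and the cluster network MTU. The following is a minimal sketch only; the 100-byte figure is the commonly cited Geneve encapsulation overhead for OVN-Kubernetes and is an assumption you should verify against the documentation for your network plugin:

```shell
# Sketch: the cluster network MTU must leave room for overlay encapsulation.
# The 100-byte Geneve overhead (OVN-Kubernetes) is an assumption to verify
# against your network plugin's documentation.
hardware_mtu=9000
overlay_overhead=100
cluster_mtu=$((hardware_mtu - overlay_overhead))
echo "cluster network MTU: ${cluster_mtu}"
```

For a 9000-byte hardware MTU this yields a cluster network MTU of 8900; the same arithmetic applies when decreasing the MTU for an AWS Outposts subnet.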
. Prepare your configuration for the hardware MTU:
+
.. If your hardware MTU is specified with DHCP, update your DHCP configuration such as with the following dnsmasq configuration:
+
[source,text]
----
dhcp-option-force=26,<mtu> <1>
----
<1> Where `<mtu>` specifies the hardware MTU for the DHCP server to advertise.
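DHCP option 26 is the standard interface MTU option from RFC 2132. If your dnsmasq version supports named options, the equivalent form below may be easier to read. This is a sketch; confirm that the `option:mtu` name is supported by your dnsmasq version before using it:

```text
dhcp-option-force=option:mtu,<mtu>
```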
+
.. If your hardware MTU is specified with a kernel command line with PXE, update that configuration accordingly.
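For example, on nodes that configure networking with dracut-style `ip=` kernel arguments, the MTU is an optional field. The following fragment is a sketch that assumes DHCP on a hypothetical interface named `eno1`; check `dracut.cmdline(7)` for the exact syntax that your image supports:

```text
ip=eno1:dhcp:9000
```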
+
.. If your hardware MTU is specified in a NetworkManager connection configuration, complete the following steps. This approach is the default for {product-title} if you do not explicitly specify your network configuration with DHCP, a kernel command line, or some other method. Your cluster nodes must all use the same underlying network configuration for the following procedure to work unmodified.
... Find the primary network interface by entering the following command:
+
[source,terminal]
----
$ oc debug node/<node_name> -- chroot /host nmcli -g connection.interface-name c show ovs-if-phys0 <1>
----
<1> Where `<node_name>` specifies the name of a node in your cluster.
+
... Create the following NetworkManager configuration in the `<interface>-mtu.conf` file.
+
.Example NetworkManager connection configuration
[source,ini]
----
[connection-<interface>-mtu]
match-device=interface-name:<interface> <1>
ethernet.mtu=<mtu> <2>
----
<1> Where `<interface>` specifies the primary network interface name.
<2> Where `<mtu>` specifies the new hardware MTU value.
+
[NOTE]
====
For nodes that use two network interface controller (NIC) bonds, specify both the primary NIC bond and the secondary NIC bond in the `<interface>-mtu.conf` file.

.Example NetworkManager connection configuration
[source,ini]
----
[connection-<primary-bond-interface>-mtu]
match-device=interface-name:<bond-iface-name>
ethernet.mtu=9000
----
====

. Update the underlying network interface MTU value:
+
** If you are specifying the new MTU with a NetworkManager connection configuration, enter the following command. The Machine Config Operator automatically performs a rolling reboot of the nodes in your cluster.
+
[source,terminal]
----
$ for manifest in control-plane-interface worker-interface; do
oc create -f $manifest.yaml
done
----
+
** If you are specifying the new MTU with a DHCP server option or a kernel command line and PXE, make the necessary changes for your infrastructure.
. As the Machine Config Operator updates machines in each machine config pool, it reboots each node one by one. You must wait until all the nodes are updated. Check the machine config pool status by entering the following command:
Verify that the following statements are true:
+
[source,terminal]
----
$ oc get machineconfig <config_name> -o yaml | grep path: <1>
----
<1> Where `<config_name>` is the name of the machine config from the `machineconfiguration.openshift.io/currentConfig` field.
+
If the machine config is successfully deployed, the previous output contains the `/etc/NetworkManager/conf.d/99-<interface>-mtu.conf` file path and the `ExecStart=/usr/local/bin/mtu-migration.sh` line.
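The grep filter in the verification step can be illustrated offline. The following sketch runs the same pattern against fabricated sample output; the file path and interface name are made up for illustration only:

```shell
# Simulate the verification grep against sample machine-config output.
# The sample text below is fabricated for illustration; a real check would
# pipe the output of `oc get machineconfig ... -o yaml` instead.
sample='path: /etc/NetworkManager/conf.d/99-eno1-mtu.conf
ExecStart=/usr/local/bin/mtu-migration.sh'
matches=$(printf '%s\n' "$sample" | grep -c -E 'mtu\.conf|mtu-migration\.sh')
echo "matching lines: ${matches}"
```

Both sample lines match, so the count is 2; if either the MTU drop-in file or the migration script line is missing from the real machine config, the count drops accordingly.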
<1> Where `<mtu>` specifies the new cluster network MTU that you specified with `<overlay_to>`.
. After the MTU migration is finalized, the Machine Config Operator reboots the nodes in each machine config pool one by one. You must wait until all the nodes are updated. Check the machine config pool status by entering the following command:

networking/changing-cluster-network-mtu.adoc

toc::[]

[role="_abstract"]
As a cluster administrator, you can change the MTU for the cluster network after cluster installation. This change is disruptive as cluster nodes must be rebooted to finalize the MTU change.