Notes about topology:

- Four kernel netdevs are created by default when the IDPF driver is loaded
  during ACC bring-up. You can also create more than four netdevs. To do so,
  modify the `acc_apf` parameter under `num_default_vport` in
  `/etc/dpcp/cfg/cp_init.cfg` on the IMC before starting
  `run_default_init_app` (see the sketch after this list).

- In the `/etc/dpcp/cfg/cp_init.cfg` file, also modify the default
  `sem_num_pages` value to the value mentioned in
  `/opt/p4/p4sde/share/mev_reference_p4_files/linux_networking/README_P4_CP_NWS`.

- vlan1, vlan2, ..., vlanN are created on top of an IDPF netdev using Linux
  commands. The number of VLAN ports should equal the number of VMs that are
  spawned.

- br-int and the VxLAN ports are created using the `ovs-vsctl` command
  provided by the networking recipe, and all the VLAN ports are attached to
  br-int using `ovs-vsctl`.
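
The exact layout of `cp_init.cfg` is not reproduced in this guide, so the
snippet below is only a sketch of the edit described in the first bullet. It
assumes the parameter appears as a literal `acc_apf = <N>` assignment, and the
value 8 is just an example.

```bash
# On the IMC, before starting run_default_init_app.
# Check how the parameter is actually written in your copy of the file first.
grep -nE 'acc_apf|sem_num_pages' /etc/dpcp/cfg/cp_init.cfg

# Example edit assuming the "acc_apf = <N>" form; 8 is an example target value.
sed -i 's/acc_apf = [0-9]*/acc_apf = 8/' /etc/dpcp/cfg/cp_init.cfg
```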

The system under test will have the above topology running the networking
recipe. The link partner can run the networking recipe, legacy OvS, or a
kernel VxLAN setup. Note the [Limitations](#limitations) section before
setting up the topology.

## Create P4 artifacts and start Infrap4d process

- Use the Linux networking P4 program present in the directory
  `/opt/p4/p4sde/share/mev_reference_p4_files/linux_networking` for this
  scenario.

- See [Running Infrap4d on Intel IPU E2100](/guides/es2k/running-infrap4d)
  for compiling the P4 artifacts, bringing up the ACC, and running
  `infrap4d` on the ACC.
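
After following that guide, the two artifacts referenced in the next section
should exist. A quick sanity check (the artifact directory below is a
placeholder, not a path defined by the guide):

```bash
# ARTIFACT_DIR is a placeholder; substitute wherever you generated the artifacts
ARTIFACT_DIR=/root/linux_networking_artifacts
ls -l "$ARTIFACT_DIR"/linux_networking.pb.bin "$ARTIFACT_DIR"/linux_networking.p4info.txt
```
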
## Creating the topology

Once the application is started, set the forwarding pipeline config.

Note: Assuming `linux_networking.pb.bin` and `linux_networking.p4info.txt`
along with other P4 artifacts were created as per the steps mentioned in the
previous section.
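
As a sketch of that step with the recipe's P4Runtime client (the device name
`br0` and running the command from the artifact directory are assumptions;
adjust to your setup):

```bash
# Push the compiled pipeline and its P4Info to infrap4d via p4rt-ctl
p4rt-ctl set-pipe br0 linux_networking.pb.bin linux_networking.p4info.txt
```
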
### Configure VSI Group and add a netdev

Use one of the IDPF netdevs on the ACC to receive all control packets from
the overlay VMs by assigning it to a VSI group. VSI group 3 is dedicated to
this configuration; execute the devmem commands below on the IMC.

```bash
# SEM_DIRECT_MAP_PGEN_CTRL: the 11 LSBs select the VSI that is mapped into a VSIG
devmem 0x20292002a0 64 0x8000050000000008

# SEM_DIRECT_MAP_PGEN_DATA_VSI_GROUP: places the VSI selected above into VSIG-3
devmem 0x2029200388 64 0x3

# SEM_DIRECT_MAP_PGEN_CTRL: the 11 LSBs select the VSI that is mapped into a VSIG
devmem 0x20292002a0 64 0xA000050000000008
```

Note: Here VSI 8 has been used for receiving all control packets and has been
added to VSI group 3. This refers to the HOST netdev in VSIG 3 as per the
topology diagram. Modify this VSI based on your configuration.
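
To pick the VSI for your own setup, the `cli_client` query referenced later in
this guide can be run on the IMC. As a worked example, assuming only the low
11 bits of the `SEM_DIRECT_MAP_PGEN_CTRL` values encode the VSI (as the
comments above state) and the other bit fields stay unchanged, the writes for
a hypothetical VSI 10 would look like this:

```bash
# On IMC: lists VSI ID, vport ID, and MAC address for every netdev/VF
/usr/bin/cli_client -q -c

# Hypothetical VSI 10: only the low 11 bits change (0x00A instead of 0x008);
# all other bit fields are assumed to stay as in the commands above.
devmem 0x20292002a0 64 0x800005000000000A
devmem 0x2029200388 64 0x3
devmem 0x20292002a0 64 0xA00005000000000A
```
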
### Create Overlay network

Option 1: Spawn VMs on top of the VFs and assign IP addresses to the VF
netdevs inside the VMs.

```bash
ip -6 addr add 9::2/64 dev <Netdev connected to VF2>
ifconfig <Netdev connected to VF> up
```

Option 2: If we are unable to spawn VMs on top of the VFs, we can instead
leverage kernel network namespaces. Move each VF to a network namespace and
assign IP addresses:

```bash
ip netns add VM0
ip link set <VF1 port> netns VM0
ip netns exec VM0 ip -6 addr add 9::1/64 dev <VF1 port>
ip netns exec VM0 ifconfig <VF1 port> up

ip netns add VM1
ip link set <VF2 port> netns VM1
ip netns exec VM1 ip -6 addr add 9::2/64 dev <VF2 port>
ip netns exec VM1 ifconfig <VF2 port> up
```
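
A quick way to confirm that each VF landed in its namespace with the expected
address (plain iproute2, nothing recipe-specific):

```bash
ip netns exec VM0 ip -br addr show
ip netns exec VM1 ip -br addr show
```
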
### Start OvS as a separate process

Legacy OvS is used as a control plane for source MAC learning of the overlay
VMs. OvS should be started as a separate process.
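
The networking recipe ships the exact start-up commands; the sketch below only
illustrates the shape of that step. The install prefix `/opt/p4/p4-cp-nws` and
a pre-existing `conf.db` are assumptions. Afterwards, point `ovs-vsctl` at the
recipe's database socket as shown in the next block.

```bash
# Assumed install prefix of the recipe's OvS build on the ACC
export RUN_OVS=/opt/p4/p4-cp-nws

# Start the OVSDB server and the OvS daemon against that tree (assumes the
# database file was already created by the recipe's setup)
ovsdb-server "$RUN_OVS/etc/openvswitch/conf.db" \
    --remote=punix:"$RUN_OVS/var/run/openvswitch/db.sock" \
    --pidfile --detach

ovs-vswitchd unix:"$RUN_OVS/var/run/openvswitch/db.sock" \
    --pidfile --detach --mlockall
```
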
```bash
alias ovs-vsctl="ovs-vsctl --db unix:$RUN_OVS/var/run/openvswitch/db.sock"

ovs-vsctl set Open_vSwitch . other_config:n-revalidator-threads=1
ovs-vsctl show
```

### Create VLAN representers

For each VM that is spawned for the overlay network we need a port
representer. We create VLAN netdevs on top of the IDPF netdev that was
assigned to VSI group 3 in step 2 above.

```bash
ip link add link <VSI 8> name vlan1 type vlan id 1
ip link add link <VSI 8> name vlan2 type vlan id 2
ifconfig vlan1 up
ifconfig vlan2 up
```

Note: Here the assumption is that we have created 2 overlay VMs and are
creating 2 port representers for them. The port representer name should
always be in the format: `lowercase string 'vlan' + 'vlanID'`.
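
To double-check that a representer follows that naming rule and carries the
intended VLAN ID, the detailed link view can be inspected (plain iproute2):

```bash
# Shows "vlan protocol 802.1Q id 1" in the output for a correctly created port
ip -d link show vlan1
```
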
### Create integration bridge and add ports to the bridge

Create the OvS bridge and the VxLAN tunnel, and assign the ports to the bridge.
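
The recipe's documentation gives the exact command sequence; as a
representative sketch (the bridge and VLAN port names follow this guide, while
`vxlan0` and the tunnel endpoint IPs are placeholders):

```bash
ovs-vsctl add-br br-int
ovs-vsctl add-port br-int vlan1
ovs-vsctl add-port br-int vlan2

# VxLAN tunnel port with VNI 0 (options:key) towards the link partner
ovs-vsctl add-port br-int vxlan0 -- set interface vxlan0 type=vxlan \
    options:local_ip=<local tunnel IP> options:remote_ip=<link partner tunnel IP> \
    options:key=0 options:dst_port=4789
```
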
Note: Here we are creating the VxLAN tunnel with VNI 0; you can use any VNI
for tunneling.

### Configure rules for overlay control packets

Configure rules to send overlay control packets from a VM to its respective
port representer.

The configuration below assumes:

- Overlay VF1 has a VSI value of 14
- Overlay VF2 has a VSI value of 15

These VSI values can be checked with the `/usr/bin/cli_client -q -c` command
on the IMC. This command provides the VSI ID, vport ID, and corresponding MAC
address for all:

- IDPF netdevs on ACC
- VFs on HOST
- IDPF netdevs on HOST (if you loaded the IDPF driver on the HOST)
- Netdevs on IMC

```bash
# Rules for control packets coming from the overlay VF (VSI-14): the IPU adds
# VLAN tag 1 and sends them to HOST1 (VSI-8)