diff --git a/p4src/Inline_IPsec/README.md b/p4src/Inline_IPsec/README.md
index 38221816..9ed18177 100644
--- a/p4src/Inline_IPsec/README.md
+++ b/p4src/Inline_IPsec/README.md
@@ -1,21 +1,14 @@
+<!--
+Copyright 2022-2023 Intel Corporation.
+SPDX-License-Identifier: Apache-2.0
+-->
+
-This is a reference P4 program for the Inline IPsec recipe.
+Inline IPsec Recipe
+===================
+
+Reference P4 program for the Inline IPsec recipe.
+
+See the [IPsec Offload](https://ipdk.io/p4cp-userguide/apps/ipsec-offload.html)
+section of the
+[P4 Control Plane User Guide](https://ipdk.io/p4cp-userguide/index.html)
+for more information.
diff --git a/p4src/linux_networking/README.md b/p4src/linux_networking/README.md
new file mode 100644
index 00000000..81eb7a63
--- /dev/null
+++ b/p4src/linux_networking/README.md
@@ -0,0 +1,13 @@
+<!--
+Copyright 2022-2023 Intel Corporation.
+SPDX-License-Identifier: Apache-2.0
+-->
+
+Linux Networking Recipe
+=======================
+
+See the
+[P4 Control Plane User Guide](https://ipdk.io/p4cp-userguide/index.html)
+for documentation of the
+[Linux Networking](https://ipdk.io/p4cp-userguide/apps/lnw/lnw-index.html)
+application.
diff --git a/p4src/linux_networking/dpdk/README.md b/p4src/linux_networking/dpdk/README.md
new file mode 100644
index 00000000..18d33a6a
--- /dev/null
+++ b/p4src/linux_networking/dpdk/README.md
@@ -0,0 +1,15 @@
+<!--
+Copyright 2022-2023 Intel Corporation.
+SPDX-License-Identifier: Apache-2.0
+-->
+
+Linux Networking Recipe (DPDK)
+==============================
+
+Reference P4 program for the Linux Networking recipe.
+
+See the [P4 Control Plane User Guide](https://ipdk.io/p4cp-userguide/index.html)
+for more information.
+
+- [Linux Networking for DPDK](https://ipdk.io/p4cp-userguide/apps/lnw/dpdk/dpdk-linux-networking.html)
+- [Linux Networking with ECMP (DPDK)](https://ipdk.io/p4cp-userguide/apps/lnw/dpdk/dpdk-linux-networking-ecmp.html)
diff --git a/p4src/linux_networking/dpdk/README_LINUX_NETWORKING.md b/p4src/linux_networking/dpdk/README_LINUX_NETWORKING.md
deleted file mode 100644
index a2d035f2..00000000
--- a/p4src/linux_networking/dpdk/README_LINUX_NETWORKING.md
+++ /dev/null
@@ -1,290 +0,0 @@
-
-
-## Table of contents
-1. [Overview](#overview)
-2. [Topology](#topology)
-3. [Create P4 artifacts](#create_p4_artifacts)
-4. [Limitations](#limitations)
-5. [Steps to create the topology](#steps)
-
-## Overview
-This README describes a step-by-step procedure to run the Linux networking scenario.
-
-## Topology
-
-![Linux networking topology](topology_linux_networking.PNG)
-* Notes about the topology:
- * VHOST ports, TAP ports, and physical LINK ports are created via the GNMI CLI, and the LINK port is bound to the DPDK target.
- * VLAN 1, VLAN 2, ... VLAN N are created on top of a TAP port using Linux commands. The number of VLAN ports should equal the number of VMs spawned.
- * The br-int bridge and VxLAN0 port are created using the ovs-vsctl command provided by the networking recipe, and all the VLAN ports are attached to br-int using ovs-vsctl.
-
-The system under test runs the above topology with the networking recipe. The link partner can run the networking recipe, legacy OVS, or kernel VxLAN. Note the limitations below before setting up the topology.
-
-## Create P4 artifacts
-- Install the p4c compiler from the [p4lang/p4c](https://github.com/p4lang/p4c) repository, following its README for the procedure
-- Set the environment variable OUTPUT_DIR to the location where the artifacts should be generated and where the P4 files are available,
- e.g. export OUTPUT_DIR=$IPDK_RECIPE/p4src/linux_networking/
-- Generate the artifacts using the p4c-dpdk compiler installed in the previous step:
-```
- p4c-dpdk --arch pna --target dpdk --p4runtime-files $OUTPUT_DIR/p4Info.txt --bf-rt-schema $OUTPUT_DIR/bf-rt.json --context $OUTPUT_DIR/context.json -o $OUTPUT_DIR/linux_networking.spec $OUTPUT_DIR/linux_networking.p4
-```
-- Modify the sample lnw.conf file available in $IPDK_RECIPE/p4src/linux_networking/ to specify the absolute paths of the artifacts (json and spec files)
-- Generate the binary executable using the tdi_pipeline_builder command below (a sanity check follows):
-```
-tdi_pipeline_builder --p4c_conf_file=lnw.conf --bf_pipeline_config_binary_file=lnw.pb.bin
-```
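-
-As a quick sanity check (illustrative, assuming the OUTPUT_DIR and file names used above), confirm that the artifacts and the pipeline binary were generated:
-```
-ls $OUTPUT_DIR/p4Info.txt $OUTPUT_DIR/bf-rt.json $OUTPUT_DIR/context.json $OUTPUT_DIR/linux_networking.spec
-ls lnw.pb.bin
-```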
-
-## Limitations
-Current SAI enablement for the networking recipe has the following limitations:
-- All VHOST user ports must be configured first, and only then TAP ports/physical ports
-- The TAP port corresponding to a link port should be created using gnmi-ctl control-port creation for the link port
-eg: gnmi-ctl set "device:physical-device,name:PORT0,pipeline-name:pipe,mempool-name:MEMPOOL0,control-port:TAP1,mtu:1500,pci-bdf:0000:18:00.0,packet-dir:network,port-type:link"
-- All VLAN interfaces created on top of TAP ports should always be in the lowercase format "vlan" + vlan_id
-Ex: vlan1, vlan2, vlan3 ... vlan4094
-- The br-int port, vxlan0 port, and the addition of VLAN ports to br-int must be done after loading the pipeline.
-- The VxLAN destination port should always be the standard port, i.e., 4789 (a limitation of the P4 parser).
-- Only VNI 0 is supported.
-- ofproto rules that would prevent FDB learning on OVS are not supported.
-
-## Steps to create the topology
- *Note*: The gnmi-ctl and p4rt-ctl utilities used in the steps below can be found under $IPDK_RECIPE/install/bin and should be run with `sudo`
-#### 1. Bind the physical port (Port 0 and Port 1) to user-space IO driver:
-* Load uio and vfio-pci driver
-```
- modprobe uio
- modprobe vfio-pci
-```
-* Bind the devices to DPDK using dpdk-devbind.py script
-```
- cd $SDE_INSTALL/bin
- ./dpdk-devbind.py --bind=vfio-pci <pci_bdf>   # e.g. ./dpdk-devbind.py --bind=vfio-pci 0000:18:00.0
-```
- *Note*: The pci_bdf can be obtained using the lspci command. Check that the device is bound correctly using ./dpdk-devbind.py -s (refer to the section "Network devices using DPDK-compatible driver")
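-
-For example (illustrative; device names will differ on your system):
-```
- # Find the PCI BDF of the NIC to bind
- lspci | grep -i ethernet
-
- # Verify the bind; the device should appear under
- # "Network devices using DPDK-compatible driver"
- ./dpdk-devbind.py -s
-```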
-
-#### 2. Export the environment variables LD_LIBRARY_PATH, IPDK_RECIPE, and SDE_INSTALL, then start infrap4d
-```
- alias sudo='sudo PATH="$PATH" HOME="$HOME" LD_LIBRARY_PATH="$LD_LIBRARY_PATH" SDE_INSTALL="$SDE_INSTALL"'
- sudo $IPDK_RECIPE/install/bin/infrap4d
-```
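-
-To confirm that infrap4d is running (illustrative check):
-```
- ps -ef | grep [i]nfrap4d
-```
-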
-#### 3. Create two VHOST user ports:
-```
- gnmi-ctl set "device:virtual-device,name:net_vhost0,host-name:host1,device-type:VIRTIO_NET,queues:1,socket-path:/tmp/vhost-user-0,packet-dir:host,port-type:LINK"
- gnmi-ctl set "device:virtual-device,name:net_vhost1,host-name:host2,device-type:VIRTIO_NET,queues:1,socket-path:/tmp/vhost-user-1,packet-dir:host,port-type:LINK"
-```
-
-#### 4. Create two physical link ports with control port:
-```
- gnmi-ctl set "device:physical-device,name:PORT0,control-port:TAP1,pci-bdf:0000:18:00.0,packet-dir:network,port-type:link"
- gnmi-ctl set "device:physical-device,name:PORT1,control-port:TAP2,pci-bdf:0000:18:00.1,packet-dir:network,port-type:link"
-
-Note: Specify the pci-bdf of the devices bound to user space in step 1. A corresponding control port for the physical link port will be created if the control-port attribute is specified.
-```
-
-#### 5. Create two TAP ports:
-```
- gnmi-ctl set "device:virtual-device,name:TAP0,pipeline-name:pipe,mempool-name:MEMPOOL0,mtu:1500,packet-dir:host,port-type:TAP"
- gnmi-ctl set "device:virtual-device,name:TAP3,pipeline-name:pipe,mempool-name:MEMPOOL0,mtu:1500,packet-dir:host,port-type:TAP"
-
-Note:
- - The packet-dir parameter specifies the direction of traffic. It takes 2 values: host/network. The value 'host' specifies that traffic on this port is internal (within the host). The value 'network' specifies that the port can receive traffic from the network.
- - Ensure that the number of ports created is a power of 2 to satisfy DPDK requirements. When counting ports, include the control ports created along with the physical link ports (e.g. TAP1 and TAP2).
-```
-#### 6. Spawn two VMs on the vhost-user ports created in step 3, start the VMs, and assign IPs
-```
- # Inside VM1
- ip addr add 99.0.0.1/24 dev eth0
- ip link set dev eth0 up
-
- # Inside VM2
- ip addr add 99.0.0.2/24 dev eth0
- ip link set dev eth0 up
-```
-#### 7. Bring up the TAP and/or dummy interfaces
- - 7.1 Option 1: Use one of the TAP ports for tunnel termination and assign an IP address to the TAP port.
- ```
- ip link set dev TAP0 up
- ip addr add 40.1.1.1/24 dev TAP1
- ip link set dev TAP1 up
- ip link set dev TAP2 up
- ip link set dev TAP3 up
- ```
- - 7.2 Option 2: Create a dummy port and use it for tunnel termination. The route to reach the dummy port is either statically configured on the peer or redistributed to the peer via routing protocols available from FRR.
- ```
- ip link add dev TEP1 type dummy
-
- ip link set dev TAP0 up
- ip link set dev TAP1 up
- ip link set dev TAP2 up
- ip link set dev TAP3 up
- ip link set dev TEP1 up
- ```
-
-#### 8. Set the pipeline
-```
- p4rt-ctl set-pipe br0 lnw.pb.bin p4Info.txt
-```
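-
-To verify that the pipeline was loaded (illustrative; assumes your p4rt-ctl build provides the get-pipe command):
-```
- p4rt-ctl get-pipe br0
-```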
-
-#### 9. Start the ovsdb-server and ovs-vswitchd servers
-Kill any existing OVS processes before starting.
-```
-mkdir -p $IPDK_RECIPE/install/var/run/openvswitch
-rm -rf $IPDK_RECIPE/install/etc/openvswitch/conf.db
-
-sudo $IPDK_RECIPE/install/bin/ovsdb-tool create $IPDK_RECIPE/install/etc/openvswitch/conf.db $IPDK_RECIPE/install/share/openvswitch/vswitch.ovsschema
-
-export RUN_OVS=$IPDK_RECIPE/install
-
-sudo $IPDK_RECIPE/install/sbin/ovsdb-server \
- --remote=punix:$RUN_OVS/var/run/openvswitch/db.sock \
- --remote=db:Open_vSwitch,Open_vSwitch,manager_options \
- --pidfile --detach
-sudo $IPDK_RECIPE/install/sbin/ovs-vswitchd --detach --no-chdir unix:$RUN_OVS/var/run/openvswitch/db.sock --mlockall --log-file=/tmp/ovs-vswitchd.log
-
-sudo $IPDK_RECIPE/install/bin/ovs-vsctl --db unix:$RUN_OVS/var/run/openvswitch/db.sock show
-
-sudo $IPDK_RECIPE/install/bin/ovs-vsctl add-br br-int
-ifconfig br-int up
-```
-
-#### 10. Configure VXLAN port
- - 10.1 Option 1: When one of the TAP ports is used for tunnel termination.
-```
- sudo $IPDK_RECIPE/install/bin/ovs-vsctl add-port br-int vxlan1 -- set interface vxlan1 type=vxlan options:local_ip=40.1.1.1 options:remote_ip=40.1.1.2 options:dst_port=4789
-```
- - 10.2 Option 2: When a dummy port is used for tunnel termination. Here the remote IP is on a different network; the route to reach the peer must be statically configured (refer to section 14.1) or learned via FRR (refer to section 14.2).
-```
- sudo $IPDK_RECIPE/install/bin/ovs-vsctl add-port br-int vxlan1 -- set interface vxlan1 type=vxlan options:local_ip=40.1.1.1 options:remote_ip=30.1.1.1 options:dst_port=4789
-```
-Note:
- - The VXLAN destination port should always be the standard port, i.e., 4789 (a limitation of the P4 parser).
-
-#### 11. Configure VLAN ports on TAP0 and add them to br-int
-```
- ip link add link TAP0 name vlan1 type vlan id 1
- ip link add link TAP0 name vlan2 type vlan id 2
- sudo $IPDK_RECIPE/install/bin/ovs-vsctl add-port br-int vlan1
- sudo $IPDK_RECIPE/install/bin/ovs-vsctl add-port br-int vlan2
- ip link set dev vlan1 up
- ip link set dev vlan2 up
-```
-Note:
- - All VLAN interfaces should be created on top of TAP ports, and should always be in the lowercase format "vlan" + vlan_id (e.g. vlan1, vlan2, vlan3 ... vlan4094)
-
-
-#### 12. Configure rules to push and pop VLAN from vhost 0 and 1 ports to TAP0 port (vhost-user and vlan port mapping)
-```
-Note: The port numbers used in p4rt-ctl commands are target datapath indexes (a unique identifier for each port), which can be queried using the commands below. With the current SDE implementation, tdi-portin-id and tdi-portout-id are the same.
-
- gnmi-ctl get "device:virtual-device,name:net_vhost0,tdi-portin-id"
- gnmi-ctl get "device:virtual-device,name:net_vhost0,tdi-portout-id"
-
- The target DP index of a control TAP port is the target DP index of the corresponding physical port + 1. If the ports are created in the order given in the steps above, the target datapath indexes will be:
-
- Port name Target datapath index
- vhost-user-0(VM1) - 0
- vhost-user-1(VM2) - 1
- phy-port0 - 2
- TAP1 - 3
- phy-port1 - 4
- TAP2 - 5
- TAP0 - 6
- TAP3 - 7
-```
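-
-For example, to query the index assigned to TAP0 (following the same gnmi-ctl pattern as above):
-```
- gnmi-ctl get "device:virtual-device,name:TAP0,tdi-portin-id"
-```
-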
- - **Rule for: Any TX control packet from VM1 (TDP 0); the pipeline should add VLAN tag 1 and send it to the TAP0 port (TDP 6)**
- ```
- p4rt-ctl add-entry br0 linux_networking_control.handle_tx_control_pkts_table "istd.input_port=0,action=linux_networking_control.push_vlan_fwd(6,1)"
- ```
- - **Rule for: Any TX control packet from VM2 (TDP 1); the pipeline should add VLAN tag 2 and send it to the TAP0 port (TDP 6)**
- ```
- p4rt-ctl add-entry br0 linux_networking_control.handle_tx_control_pkts_table "istd.input_port=1,action=linux_networking_control.push_vlan_fwd(6,2)"
- ```
- - **Rule for: Any TX control packet from the TAP0 port (TDP 6) with VLAN tag 1; the pipeline should pop the VLAN tag and send it to VM1 (TDP 0)**
- ```
- p4rt-ctl add-entry br0 linux_networking_control.handle_tx_control_vlan_pkts_table "istd.input_port=6,local_metadata.vlan_id=1,action=linux_networking_control.pop_vlan_fwd(0)"
- ```
-
- - **Rule for: Any TX control packet from the TAP0 port (TDP 6) with VLAN tag 2; the pipeline should pop the VLAN tag and send it to VM2 (TDP 1)**
- ```
- p4rt-ctl add-entry br0 linux_networking_control.handle_tx_control_vlan_pkts_table "istd.input_port=6,local_metadata.vlan_id=2,action=linux_networking_control.pop_vlan_fwd(1)"
- ```
-
-#### 13. Configure rules for control packets coming in and out of physical port
- - **Rule for: Any RX control packet from phy port0 (TDP 2) should be sent to its corresponding control port TAP1 (TDP 3)**
- ```
- p4rt-ctl add-entry br0 linux_networking_control.handle_rx_control_pkts_table "istd.input_port=2,action=linux_networking_control.set_control_dest(3)"
- ```
- - **Rule for: Any RX control packet from phy port1 (TDP 4) should be sent to its corresponding control port TAP2 (TDP 5)**
- ```
- p4rt-ctl add-entry br0 linux_networking_control.handle_rx_control_pkts_table "istd.input_port=4,action=linux_networking_control.set_control_dest(5)"
- ```
- - **Rule for: Any TX control packet from the control TAP1 port (TDP 3) should be sent to its corresponding physical port phy port0 (TDP 2)**
- ```
- p4rt-ctl add-entry br0 linux_networking_control.handle_tx_control_pkts_table "istd.input_port=3,action=linux_networking_control.set_control_dest(2)"
- ```
- - **Rule for: Any TX control packet from the control TAP2 port (TDP 5) should be sent to its corresponding physical port phy port1 (TDP 4)**
- ```
- p4rt-ctl add-entry br0 linux_networking_control.handle_tx_control_pkts_table "istd.input_port=5,action=linux_networking_control.set_control_dest(4)"
- ```
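-
-To review the installed entries (illustrative; assumes your p4rt-ctl build provides the dump-entries command):
-```
- p4rt-ctl dump-entries br0
-```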
-
-#### 14. Configure routes only when the dummy port is used for tunnel termination.
- - 14.1 Option 1: Configure static route.
-
- ```
- ip addr add 40.1.1.1/24 dev TEP1
- ip addr add 50.1.1.1/24 dev TAP1
- ip route add 30.1.1.1 nexthop via 50.1.1.2 dev TAP1
- ```
-
- - 14.2 Option 2: Learn dynamic routes via FRR (iBGP route distribution)
- - 14.2.1 Install FRR
- - Install FRR via the default package manager, e.g. "apt install frr" on Ubuntu or "dnf install frr" on Fedora.
- - Otherwise, refer to the official FRR documentation at https://docs.frrouting.org/en/latest/installation.html and install according to your distribution.
- - 14.2.2 Configure FRR
- - Modify /etc/frr/daemons to enable the bgpd daemon (set bgpd=yes)
- - Restart the FRR service: systemctl restart frr
- - Start the VTYSH process, the CLI provided by FRR for user configuration.
- - Set the below configuration on the DUT (host1) for the singlepath scenario.
- ```
- interface TAP1
- ip address 50.1.1.1/24
- exit
- !
- interface TEP1
- ip address 40.1.1.1/24
- exit
- !
- router bgp 65000
- bgp router-id 40.1.1.1
- neighbor 50.1.1.2 remote-as 65000
- !
- address-family ipv4 unicast
- network 40.1.1.0/24
- exit-address-family
- ```
- - Once the peer is also configured, we should see neighbor 50.1.1.2 learned on the DUT (host1), and the route installed in the kernel:
- ```
- 30.1.1.0/24 nhid 54 via 50.1.1.2 dev TAP1 proto bgp metric 20
- ```
-
-#### 15. Test the ping scenarios:
- - Ping between VMs on the same host
- - Underlay ping
- - Overlay ping: ping between VMs on different hosts (see the example below)
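-
-For example (illustrative, using the addresses assigned in the steps above):
-```
- # Overlay: from inside VM1 to a VM on the peer host
- ping -c 4 99.0.0.2
- # Underlay: to the peer tunnel-termination address (option 1 addressing)
- ping -c 4 40.1.1.2
-```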
diff --git a/p4src/linux_networking/dpdk/README_LINUX_NETWORKING_WITH_ECMP.md b/p4src/linux_networking/dpdk/README_LINUX_NETWORKING_WITH_ECMP.md
deleted file mode 100644
index c8378c6f..00000000
--- a/p4src/linux_networking/dpdk/README_LINUX_NETWORKING_WITH_ECMP.md
+++ /dev/null
@@ -1,300 +0,0 @@
-
-
-## Table of contents
-1. [Overview](#overview)
-2. [Topology](#topology)
-3. [Create P4 artifacts](#create_p4_artifacts)
-4. [Limitations](#limitations)
-5. [Steps to create the topology](#steps)
-
-## Overview
-This README describes a step-by-step procedure to run the Linux networking scenario with ECMP enabled for underlay connectivity.
-
-## Topology
-
-![Linux networking with ECMP topology](topology_linux_networking_with_ecmp.PNG)
-* Notes about the topology:
- * VHOST ports, TAP ports, and physical LINK ports are created via the GNMI CLI, and the LINK port is bound to the DPDK target.
- * VLAN 1, VLAN 2, ... VLAN N are created on top of a TAP port using Linux commands. The number of VLAN ports should equal the number of VMs spawned.
- * The br-int bridge and VxLAN0 port are created using the ovs-vsctl command provided by the networking recipe, and all the VLAN ports are attached to br-int using ovs-vsctl.
- * The TEP port is of type dummy, created using the ip utility command, and is used as the VxLAN tunnel-termination port.
-
-The system under test runs the above topology with the networking recipe. The link partner can run the networking recipe, legacy OVS, or kernel VxLAN. The physical ports of the system under test and the link partner should be connected back to back. Note the limitations below before setting up the topology.
-
-## Create P4 artifacts
-- Install the p4c compiler from the [p4lang/p4c](https://github.com/p4lang/p4c) repository, following its README for the procedure
-- Set the environment variable OUTPUT_DIR to the location where the artifacts should be generated and where the P4 files are available,
- e.g. export OUTPUT_DIR=/root/ovs/p4proto/p4src/linux_networking/
-- Generate the artifacts using the p4c-dpdk compiler installed in the previous step:
-```
- p4c-dpdk --arch pna --target dpdk --p4runtime-files $OUTPUT_DIR/p4Info.txt --bf-rt-schema $OUTPUT_DIR/bf-rt.json --context $OUTPUT_DIR/context.json -o $OUTPUT_DIR/linux_networking.spec $OUTPUT_DIR/linux_networking.p4
-```
-- Modify the sample lnw.conf file available in $IPDK_RECIPE/p4src/linux_networking/ to specify the absolute paths of the artifacts (json and spec files)
-- Generate the binary executable using the tdi_pipeline_builder command below:
-```
-tdi_pipeline_builder --p4c_conf_file=lnw.conf --bf_pipeline_config_binary_file=lnw.pb.bin
-```
-
-## Limitations
-Current SAI implementation for the networking recipe has the following limitations:
-- All VHOST user ports must be configured first, and only then TAP ports/physical ports
-- VMs spawned on top of VHOST user ports should be up and running, and the interfaces within the VMs brought up, before loading the forwarding pipeline (limitation of the target)
-- The TAP port corresponding to a link port should be created using gnmi-ctl control-port creation for the link port
-eg: gnmi-ctl set "device:physical-device,name:PORT0,pipeline-name:pipe,mempool-name:MEMPOOL0,control-port:TAP1,mtu:1500,pci-bdf:0000:18:00.0,packet-dir:network,port-type:link"
-- All VLAN interfaces created on top of TAP ports should always be in the lowercase format "vlan" + vlan_id
-Ex: vlan1, vlan2, vlan3 ... vlan4094
-- The br-int port, vxlan0 port, and the addition of VLAN ports to br-int must be done after loading the pipeline.
-- The VxLAN destination port should always be the standard port, i.e., 4789 (a limitation of the P4 parser).
-- Only VNI 0 is supported.
-- ofproto rules that would prevent FDB learning on OVS are not supported.
-- Make sure underlay connectivity for both nexthops is established before configuring multipath to reach the link partner. When using FRR, the routing protocol establishes underlay connectivity and redistributes routes.
-
-
-## Steps to create the topology
-#### 1. Bind the physical port (Port 0 and Port 1) to user-space IO driver:
-* Load uio and vfio-pci driver
-```
- modprobe uio
- modprobe vfio-pci
-```
-* Bind the devices to DPDK using dpdk-devbind.py script
-```
- cd $SDE_INSTALL/bin
- ./dpdk-devbind.py --bind=vfio-pci <pci_bdf>   # e.g. ./dpdk-devbind.py --bind=vfio-pci 0000:18:00.0
-```
- *Note*: The pci_bdf can be obtained using the lspci command. Check that the device is bound correctly using ./dpdk-devbind.py -s (refer to the section "Network devices using DPDK-compatible driver")
-
-#### 2. Export the environment variables LD_LIBRARY_PATH, IPDK_RECIPE, and SDE_INSTALL, then start infrap4d
-```
- alias sudo='sudo PATH="$PATH" HOME="$HOME" LD_LIBRARY_PATH="$LD_LIBRARY_PATH" SDE_INSTALL="$SDE_INSTALL"'
- sudo $IPDK_RECIPE/install/bin/infrap4d
-```
-
-#### 3. Create two VHOST user ports:
-```
- gnmi-ctl set "device:virtual-device,name:net_vhost0,host-name:host1,device-type:VIRTIO_NET,queues:1,socket-path:/tmp/vhost-user-0,packet-dir:host,port-type:LINK"
- gnmi-ctl set "device:virtual-device,name:net_vhost1,host-name:host2,device-type:VIRTIO_NET,queues:1,socket-path:/tmp/vhost-user-1,packet-dir:host,port-type:LINK"
-```
-
-#### 4. Create two physical link ports with control port:
-```
- gnmi-ctl set "device:physical-device,name:PORT0,control-port:TAP1,pci-bdf:0000:18:00.0,packet-dir:network,port-type:link"
- gnmi-ctl set "device:physical-device,name:PORT1,control-port:TAP2,pci-bdf:0000:18:00.1,packet-dir:network,port-type:link"
-
-Note: Specify the pci-bdf of the devices bound to user space in step 1. A corresponding control port for the physical link port will be created if the control-port attribute is specified.
-```
-
-#### 5. Create two TAP ports:
-```
- gnmi-ctl set "device:virtual-device,name:TAP0,pipeline-name:pipe,mempool-name:MEMPOOL0,mtu:1500,packet-dir:host,port-type:TAP"
- gnmi-ctl set "device:virtual-device,name:TAP3,pipeline-name:pipe,mempool-name:MEMPOOL0,mtu:1500,packet-dir:host,port-type:TAP"
-
-Note:
- - The packet-dir parameter specifies the direction of traffic. It takes 2 values: host/network. The value 'host' specifies that traffic on this port is internal (within the host). The value 'network' specifies that the port can receive traffic from the network.
- - Ensure that the number of ports created is a power of 2 to satisfy DPDK requirements. When counting ports, include the control ports created along with the physical link ports (e.g. TAP1 and TAP2).
-```
-#### 6. Spawn two VMs on the vhost-user ports created in step 3, start the VMs, and assign IPs
-```
- # Inside VM1
- ip addr add 99.0.0.1/24 dev eth0
- ip link set dev eth0 up
-
- # Inside VM2
- ip addr add 99.0.0.2/24 dev eth0
- ip link set dev eth0 up
-```
-#### 7. Configure tunnel termination port of type dummy
-```
- ip link add dev TEP1 type dummy
-```
-#### 8. Bring up the TAP and dummy interfaces
-```
- ip link set dev TAP0 up
- ip link set dev TAP1 up
- ip link set dev TAP2 up
- ip link set dev TAP3 up
- ip link set dev TEP1 up
-```
-
-#### 9. Set the pipeline
-```
- p4rt-ctl set-pipe br0 lnw.pb.bin p4Info.txt
-```
-
-#### 10. Start the ovsdb-server and ovs-vswitchd servers
-Kill any existing OVS processes before starting.
-```
-mkdir -p $IPDK_RECIPE/install/var/run/openvswitch
-rm -rf $IPDK_RECIPE/install/etc/openvswitch/conf.db
-
-sudo $IPDK_RECIPE/install/bin/ovsdb-tool create \
- $IPDK_RECIPE/install/etc/openvswitch/conf.db \
- $IPDK_RECIPE/install/share/openvswitch/vswitch.ovsschema
-
-export RUN_OVS=$IPDK_RECIPE/install
-
-sudo $IPDK_RECIPE/install/sbin/ovsdb-server \
- --remote=punix:$RUN_OVS/var/run/openvswitch/db.sock \
- --remote=db:Open_vSwitch,Open_vSwitch,manager_options \
- --pidfile --detach
-
-sudo $IPDK_RECIPE/install/sbin/ovs-vswitchd --detach --no-chdir \
- unix:$RUN_OVS/var/run/openvswitch/db.sock \
- --mlockall --log-file=/tmp/ovs-vswitchd.log
-
-sudo $IPDK_RECIPE/install/bin/ovs-vsctl --db \
- unix:$RUN_OVS/var/run/openvswitch/db.sock show
-
-sudo $IPDK_RECIPE/install/bin/ovs-vsctl add-br br-int
-ifconfig br-int up
-```
-
-#### 11. Configure VXLAN port
- Using the dummy port (TEP1) for tunnel termination.
-```
- sudo $IPDK_RECIPE/install/bin/ovs-vsctl add-port br-int vxlan1 -- set interface vxlan1 type=vxlan options:local_ip=40.1.1.1 options:remote_ip=30.1.1.1 options:dst_port=4789
-```
-Note:
- - The VXLAN destination port should always be the standard port, i.e., 4789 (a limitation of the P4 parser).
- - The remote IP is on another network; the route to reach it can be configured statically (refer to section 15.1) or learned dynamically via routing protocols supported by FRR (refer to section 15.2)
-
-
-#### 12. Configure VLAN ports on TAP0 and add them to br-int
-```
- ip link add link TAP0 name vlan1 type vlan id 1
- ip link add link TAP0 name vlan2 type vlan id 2
- sudo $IPDK_RECIPE/install/bin/ovs-vsctl add-port br-int vlan1
- sudo $IPDK_RECIPE/install/bin/ovs-vsctl add-port br-int vlan2
- ip link set dev vlan1 up
- ip link set dev vlan2 up
-```
-Note:
- - All VLAN interfaces should be created on top of TAP ports, and should always be in the lowercase format "vlan" + vlan_id (e.g. vlan1, vlan2, vlan3 ... vlan4094)
-
-#### 13. Configure rules to push and pop VLAN from vhost 0 and 1 ports to TAP0 port (vhost-user and vlan port mapping)
-```
-Note: The port numbers used in p4rt-ctl commands are target datapath indexes (a unique identifier for each port), which can be queried using the commands below. With the current SDE implementation, tdi-portin-id and tdi-portout-id are the same.
-
- gnmi-ctl get "device:virtual-device,name:net_vhost0,tdi-portin-id"
- gnmi-ctl get "device:virtual-device,name:net_vhost0,tdi-portout-id"
-
- The target DP index of a control TAP port is the target DP index of the corresponding physical port + 1. If the ports are created in the order given in the steps above, the target datapath indexes will be:
-
- Port name Target datapath index
- vhost-user-0(VM1) - 0
- vhost-user-1(VM2) - 1
- phy-port0 - 2
- TAP1 - 3
- phy-port1 - 4
- TAP2 - 5
- TAP0 - 6
- TAP3 - 7
-```
- - **Rule for: Any TX control packet from VM1 (TDP 0); the pipeline should add VLAN tag 1 and send it to the TAP0 port (TDP 6)**
- ```
- p4rt-ctl add-entry br0 linux_networking_control.handle_tx_control_pkts_table "istd.input_port=0,action=linux_networking_control.push_vlan_fwd(6,1)"
- ```
- - **Rule for: Any TX control packet from VM2 (TDP 1); the pipeline should add VLAN tag 2 and send it to the TAP0 port (TDP 6)**
- ```
- p4rt-ctl add-entry br0 linux_networking_control.handle_tx_control_pkts_table "istd.input_port=1,action=linux_networking_control.push_vlan_fwd(6,2)"
- ```
- - **Rule for: Any TX control packet from the TAP0 port (TDP 6) with VLAN tag 1; the pipeline should pop the VLAN tag and send it to VM1 (TDP 0)**
- ```
- p4rt-ctl add-entry br0 linux_networking_control.handle_tx_control_vlan_pkts_table "istd.input_port=6,local_metadata.vlan_id=1,action=linux_networking_control.pop_vlan_fwd(0)"
- ```
-
- - **Rule for: Any TX control packet from the TAP0 port (TDP 6) with VLAN tag 2; the pipeline should pop the VLAN tag and send it to VM2 (TDP 1)**
- ```
- p4rt-ctl add-entry br0 linux_networking_control.handle_tx_control_vlan_pkts_table "istd.input_port=6,local_metadata.vlan_id=2,action=linux_networking_control.pop_vlan_fwd(1)"
- ```
-
-#### 14. Configure rules for control packets coming in and out of physical port
- - **Rule for: Any RX control packet from phy port0 (TDP 2) should be sent to its corresponding control port TAP1 (TDP 3)**
- ```
- p4rt-ctl add-entry br0 linux_networking_control.handle_rx_control_pkts_table "istd.input_port=2,action=linux_networking_control.set_control_dest(3)"
- ```
- - **Rule for: Any RX control packet from phy port1 (TDP 4) should be sent to its corresponding control port TAP2 (TDP 5)**
- ```
- p4rt-ctl add-entry br0 linux_networking_control.handle_rx_control_pkts_table "istd.input_port=4,action=linux_networking_control.set_control_dest(5)"
- ```
- - **Rule for: Any TX control packet from the control TAP1 port (TDP 3) should be sent to its corresponding physical port phy port0 (TDP 2)**
- ```
- p4rt-ctl add-entry br0 linux_networking_control.handle_tx_control_pkts_table "istd.input_port=3,action=linux_networking_control.set_control_dest(2)"
- ```
- - **Rule for: Any TX control packet from the control TAP2 port (TDP 5) should be sent to its corresponding physical port phy port1 (TDP 4)**
- ```
- p4rt-ctl add-entry br0 linux_networking_control.handle_tx_control_pkts_table "istd.input_port=5,action=linux_networking_control.set_control_dest(4)"
- ```
-
-#### 15. Configure ECMP for underlay connectivity
-- Rule: Multiple paths are configured to reach the link partner; packets can be hashed to any port where the nexthop's ARP is learned.
-- The nexthop is selected based on a 5-tuple hash: source IPv4 address, destination IPv4 address, protocol type, UDP source port, and UDP destination port of the overlay packet.
- - 15.1 Option 1: Configure static routes.
- ```
- ip addr add 40.1.1.1/24 dev TEP1
- ip addr add 50.1.1.1/24 dev TAP1
- ip addr add 60.1.1.1/24 dev TAP2
- ip route add 30.1.1.1 nexthop via 50.1.1.2 dev TAP1 weight 1 nexthop via 60.1.1.2 dev TAP2 weight 1
- ```
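-
- To verify that the multipath route was installed (illustrative check):
- ```
- ip route show
- # expected: 30.1.1.1 nexthop via 50.1.1.2 dev TAP1 weight 1
- #                    nexthop via 60.1.1.2 dev TAP2 weight 1
- ```
-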
- - 15.2 Option 2: Learn dynamic routes via FRR (iBGP route distribution)
- - 15.2.1 Install FRR
- - Install FRR via the default package manager, e.g. "apt install frr" on Ubuntu or "dnf install frr" on Fedora.
- - Otherwise, refer to the official FRR documentation at https://docs.frrouting.org/en/latest/installation.html and install according to your distribution.
- - 15.2.2 Configure FRR
- - Modify /etc/frr/daemons to enable the bgpd daemon (set bgpd=yes)
- - Restart the FRR service: systemctl restart frr
- - Start the VTYSH process, the CLI provided by FRR for user configuration.
- - Set the below configuration on the DUT (host1) for the multipath scenario.
- ```
- interface TAP1
- ip address 50.1.1.1/24
- exit
- !
- interface TAP2
- ip address 60.1.1.1/24
- exit
- !
- interface TEP1
- ip address 40.1.1.1/24
- exit
- !
- router bgp 65000
- bgp router-id 40.1.1.1
- neighbor 50.1.1.2 remote-as 65000
- neighbor 60.1.1.2 remote-as 65000
- !
- address-family ipv4 unicast
- network 40.1.1.0/24
- exit-address-family
- ```
- - Once the peer is also configured, we should see the ARPs for neighbors 50.1.1.2 and 60.1.1.2 learned on the DUT (host1), and the route installed in the kernel:
- ```
- 30.1.1.0/24 nhid 72 proto bgp metric 20
- nexthop via 60.1.1.2 dev TAP2 weight 1
- nexthop via 50.1.1.2 dev TAP1 weight 1
- ```
-
-#### 16. Test the ping scenarios:
- - Underlay ping for both ECMP nexthops
- - Ping between VMs on the same host
- - Underlay ping for the VxLAN tunnel-termination port
- - Overlay ping: ping between VMs on different hosts and validate hashing (see the example below)
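-
-For example (illustrative, using the addresses from the steps above):
-```
- ping -c 4 50.1.1.2   # underlay, first ECMP nexthop via TAP1
- ping -c 4 60.1.1.2   # underlay, second ECMP nexthop via TAP2
- ping -c 4 30.1.1.1   # remote VxLAN tunnel-termination address
-```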
-
diff --git a/p4src/linux_networking/dpdk/topology_linux_networking.PNG b/p4src/linux_networking/dpdk/topology_linux_networking.PNG
deleted file mode 100644
index c9ea7078..00000000
Binary files a/p4src/linux_networking/dpdk/topology_linux_networking.PNG and /dev/null differ
diff --git a/p4src/linux_networking/dpdk/topology_linux_networking_with_ecmp.PNG b/p4src/linux_networking/dpdk/topology_linux_networking_with_ecmp.PNG
deleted file mode 100644
index 5381e1d7..00000000
Binary files a/p4src/linux_networking/dpdk/topology_linux_networking_with_ecmp.PNG and /dev/null differ
diff --git a/p4src/linux_networking/es2k/README_LINUX_NETWORKING.md b/p4src/linux_networking/es2k/README_LINUX_NETWORKING.md
deleted file mode 100644
index 946af1e1..00000000
--- a/p4src/linux_networking/es2k/README_LINUX_NETWORKING.md
+++ /dev/null
@@ -1,263 +0,0 @@
-
-
-## Table of contents
-
-1. [Overview](#overview)
-2. [Topology](#topology)
-3. [Create P4 artifacts and start Infrap4d process](#create_p4_artifacts_and_start_infrap4d_process)
-4. [Steps to create topology](#steps)
-5. [Limitations](#limitations)
-
-## Overview
-
-This README describes a step-by-step procedure to run the Linux networking scenario on ES2K.
-
-## Topology
-
-![ES2K Linux networking topology](topology_linux_networking.PNG)
-* Notes about the topology:
- * Four kernel netdevs are created by default by loading the IDPF driver during ACC bring-up. Users can also create more than four netdevs; for that, modify the `acc_apf` parameter under `num_default_vport` in `/etc/dpcp/cfg/cp_init.cfg` on the IMC before starting `run_default_init_app`.
- * In the `/etc/dpcp/cfg/cp_init.cfg` file, also modify the default `sem_num_pages` value to the value mentioned in `/opt/p4/p4sde/share/mev_reference_p4_files/linux_networking/README_P4_CP_NWS`.
- * vlan1, vlan2, ... vlanN are created on top of an IDPF netdev using Linux commands. The number of VLAN ports should equal the number of VMs spawned.
- * The br-int bridge and VxLAN ports are created using the ovs-vsctl command provided by the networking recipe, and all the VLAN ports are attached to br-int using ovs-vsctl.
-
-The system under test runs the above topology with the networking recipe. The link partner can run the networking recipe, legacy OvS, or kernel VxLAN. Note the [Limitations](#limitations) section before setting up the topology.
-
-## Create P4 artifacts and start Infrap4d process
-
-- Use Linux networking p4 program present in the directory `/opt/p4/p4sde/share/mev_reference_p4_files/linux_networking` for this scenario.
-- Refer to [Running Infrap4d on Intel IPU E2100](https://github.com/ipdk-io/networking-recipe/blob/main/docs/guides/es2k/running-infrap4d.md) for compiling the P4 artifacts, bringing up the ACC, and running infrap4d on the ACC.
-
-## Steps to create the topology
-
- *Note*: The p4rt-ctl and ovs-vsctl utilities used in the steps below can be found under $P4CP_INSTALL/bin
-
-#### 1. Set the forwarding pipeline
-
-Once the application has started, set the forwarding pipeline config using the
-P4Runtime client `p4rt-ctl` set-pipe command:
-
-```bash
-$P4CP_INSTALL/bin/p4rt-ctl set-pipe br0 $OUTPUT_DIR/linux_networking.pb.bin $OUTPUT_DIR/linux_networking.p4info.txt
-```
-Note: This assumes that `linux_networking.pb.bin` and `linux_networking.p4info.txt`, along with the other P4 artifacts, were created as per the steps in the previous section.
-
-#### 2. Configure VSI Group and add a netdev
-
-Use one of the IDPF netdevs on the ACC to receive all control packets from the overlay VMs by assigning it to a VSI group. VSI group 3 is dedicated to this configuration; execute the below devmem commands on the IMC.
-
-```
-# SEM_DIRECT_MAP_PGEN_CTRL: LSB 11-bit is for vsi which need to map into vsig
-devmem 0x20292002a0 64 0x8000050000000008
-
-# SEM_DIRECT_MAP_PGEN_DATA_VSI_GROUP : This will set vsi (set in SEM_DIRECT_MAP_PGEN_CTRL register LSB) into VSIG-3
-devmem 0x2029200388 64 0x3
-
-# SEM_DIRECT_MAP_PGEN_CTRL: LSB 11-bit is for vsi which need to map into vsig
-devmem 0x20292002a0 64 0xA000050000000008
-```
-
-Note: Here VSI 8 has been used for receiving all control packets and is added to VSI group 3. This corresponds to the HOST netdev in VSIG 3 in the topology diagram. Modify this VSI based on your configuration.
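-
-For example (an illustrative sketch based on the register comments above, which state that the low 11 bits of the CTRL value carry the VSI), mapping VSI 9 instead of VSI 8 would become:
-```
-devmem 0x20292002a0 64 0x8000050000000009
-devmem 0x2029200388 64 0x3
-devmem 0x20292002a0 64 0xA000050000000009
-```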
-
-#### 3. Create Overlay network
-
-Option 1: Create VFs on the HOST and spawn VMs on top of those VFs.
-Example to create 4 VFs: echo 4 > /sys/devices/pci0000:ae/0000:ae:00.0/0000:af:00.0/sriov_numvfs
-```
-# VM1 configuration
-telnet <VM1 console address> <port>
-ip addr add 99.0.0.1/24 dev <interface>
-ifconfig <interface> up
-
-# VM2 configuration
-telnet <VM2 console address> <port>
-ip addr add 99.0.0.2/24 dev <interface>
-ifconfig <interface> up
-```
-
-Option 2: If VMs cannot be spawned on top of the VFs, kernel network namespaces can be leveraged instead for this use case.
-Move each VF to a network namespace and assign IP addresses:
-```
-ip netns add VM0
-ip link set <vf-netdev> netns VM0
-ip netns exec VM0 ip addr add 99.0.0.1/24 dev <vf-netdev>
-ip netns exec VM0 ifconfig <vf-netdev> up
-
-ip netns add VM1
-ip link set <vf-netdev> netns VM1
-ip netns exec VM1 ip addr add 99.0.0.2/24 dev <vf-netdev>
-ip netns exec VM1 ifconfig <vf-netdev> up
-```
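-
-To confirm the namespaces are configured (illustrative check):
-```
-ip netns exec VM0 ip addr show
-ip netns exec VM1 ip addr show
-```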
-
-#### 4. Start OvS as a separate process
-
-Legacy OvS is used as a control plane for source MAC learning of the overlay VMs. OvS should be started as a separate process.
-```
-export RUN_OVS=/tmp
-rm -rf $RUN_OVS/etc/openvswitch
-rm -rf $RUN_OVS/var/run/openvswitch
-mkdir -p $RUN_OVS/etc/openvswitch/
-mkdir -p $RUN_OVS/var/run/openvswitch
-
-ovsdb-tool create $RUN_OVS/etc/openvswitch/conf.db /opt/p4/p4-cp-nws/share/openvswitch/vswitch.ovsschema
-
-ovsdb-server $RUN_OVS/etc/openvswitch/conf.db \
- --remote=punix:$RUN_OVS/var/run/openvswitch/db.sock \
- --remote=db:Open_vSwitch,Open_vSwitch,manager_options \
- --pidfile=$RUN_OVS/var/run/openvswitch/ovsdb-server.pid \
- --unixctl=$RUN_OVS/var/run/openvswitch/ovsdb-server.ctl \
- --detach
-
-ovs-vswitchd --detach \
- --pidfile=$RUN_OVS/var/run/openvswitch/ovs-vswitchd.pid \
- --no-chdir unix:$RUN_OVS/var/run/openvswitch/db.sock \
- --unixctl=$RUN_OVS/var/run/openvswitch/ovs-vswitchd.ctl \
- --mlockall \
- --log-file=/tmp/ovs-vswitchd.log
-
-alias ovs-vsctl="ovs-vsctl --db unix:$RUN_OVS/var/run/openvswitch/db.sock"
-ovs-vsctl set Open_vSwitch . other_config:n-revalidator-threads=1
-ovs-vsctl set Open_vSwitch . other_config:n-handler-threads=1
-
-ovs-vsctl show
-```
-
-#### 5. Create VLAN representers
-
-For each VM spawned in the overlay network we need a port representer. We create VLAN netdevs on top of the IDPF netdev that was assigned to VSI group 3 in step 2 above.
-
-```
-ip link add link <idpf-netdev> name vlan1 type vlan id 1
-ip link add link <idpf-netdev> name vlan2 type vlan id 2
-ifconfig vlan1 up
-ifconfig vlan2 up
-```
-Note: Here the assumption is that we have created 2 overlay VMs and are creating 2 port representers for those VMs.
-Port representers should always be named in the format: `lowercase string 'vlan' + 'vlanID'`
-
-#### 6. Create integration bridge and add ports to the bridge
-
-Create the OvS bridge and VxLAN tunnel, and assign ports to the bridge.
-```
-ovs-vsctl add-br br-int
-ifconfig br-int up
-
-ovs-vsctl add-port br-int vlan1
-ovs-vsctl add-port br-int vlan2
-ifconfig vlan1 up
-ifconfig vlan2 up
-
-ovs-vsctl add-port br-int vxlan1 -- set interface vxlan1 type=vxlan options:local_ip=40.1.1.1 options:remote_ip=40.1.1.2 options:dst_port=4789
-```
-Note: Here we are creating the VxLAN tunnel with VNI 0; the user can create any VNI for tunneling.
-
-#### 7. Configure rules for overlay control packets
-
-Configure rules to send overlay control packets from a VM to its respective port representer.
-
-The configuration below assumes:
-- Overlay VF1 has a VSI value 14
-- Overlay VF2 has a VSI value 15
-
-These VSI values can be checked with the `/usr/bin/cli_client -q -c` command on the IMC. This command provides the VSI ID, vport ID, and corresponding MAC address for all
-- IDPF netdevs on ACC
-- VF's on HOST
-- IDPF netdevs on HOST (if IDPF driver loaded by user on HOST)
-- Netdevs on IMC
-```
-
-# Rules for control packets coming from overlay VF(VSI-14), IPU will add a VLAN tag 1 and send to HOST1(VSI-8)
-
- p4rt-ctl add-entry br0 linux_networking_control.handle_tx_from_host_to_ovs_and_ovs_to_wire_table "vmeta.common.vsi=14,user_meta.cmeta.bit32_zeros=0,action=linux_networking_control.add_vlan_and_send_to_port(1,24)"
- p4rt-ctl add-entry br0 linux_networking_control.handle_rx_loopback_from_host_to_ovs_table "vmeta.common.vsi=14,user_meta.cmeta.bit32_zeros=0,action=linux_networking_control.set_dest(24)"
- p4rt-ctl add-entry br0 linux_networking_control.vlan_push_mod_table "vmeta.common.mod_blob_ptr=1,action=linux_networking_control.vlan_push(1,0,1)"
-
-# Rules for control packets coming from overlay VF(VSI-15), IPU will add a VLAN tag 2 and send to HOST1(VSI-8)
-
- p4rt-ctl add-entry br0 linux_networking_control.handle_tx_from_host_to_ovs_and_ovs_to_wire_table "vmeta.common.vsi=15,user_meta.cmeta.bit32_zeros=0,action=linux_networking_control.add_vlan_and_send_to_port(2,24)"
- p4rt-ctl add-entry br0 linux_networking_control.handle_rx_loopback_from_host_to_ovs_table "vmeta.common.vsi=15,user_meta.cmeta.bit32_zeros=0,action=linux_networking_control.set_dest(24)"
- p4rt-ctl add-entry br0 linux_networking_control.vlan_push_mod_table "vmeta.common.mod_blob_ptr=2,action=linux_networking_control.vlan_push(1,0,2)"
-
-# Rules for control packets coming from HOST1(VSI-8), IPU will remove the VLAN tag 1 and send to overlay VF(VSI-14)
-
- p4rt-ctl add-entry br0 linux_networking_control.handle_tx_from_ovs_to_host_table "vmeta.common.vsi=8,hdrs.dot1q_tag[vmeta.common.depth].hdr.vid=1,action=linux_networking_control.remove_vlan_and_send_to_port(1,30)"
- p4rt-ctl add-entry br0 linux_networking_control.handle_rx_loopback_from_ovs_to_host_table "vmeta.misc_internal.vm_to_vm_or_port_to_port[27:17]=14,user_meta.cmeta.bit32_zeros=0,action=linux_networking_control.set_dest(30)"
- p4rt-ctl add-entry br0 linux_networking_control.vlan_pop_mod_table "vmeta.common.mod_blob_ptr=1,action=linux_networking_control.vlan_pop"
-
-# Rules for control packets coming from HOST1(VSI-8), IPU will remove the VLAN tag 2 and send to overlay VF(VSI-15)
-
- p4rt-ctl add-entry br0 linux_networking_control.handle_tx_from_ovs_to_host_table "vmeta.common.vsi=8,hdrs.dot1q_tag[vmeta.common.depth].hdr.vid=2,action=linux_networking_control.remove_vlan_and_send_to_port(2,31)"
- p4rt-ctl add-entry br0 linux_networking_control.handle_rx_loopback_from_ovs_to_host_table "vmeta.misc_internal.vm_to_vm_or_port_to_port[27:17]=15,user_meta.cmeta.bit32_zeros=0,action=linux_networking_control.set_dest(31)"
- p4rt-ctl add-entry br0 linux_networking_control.vlan_pop_mod_table "vmeta.common.mod_blob_ptr=2,action=linux_networking_control.vlan_pop"
-```
-
-#### 8. Configure rules for underlay control packets
-
-Configure rules to send underlay control packets from the IDPF netdev to the physical port.
-
-The configuration below assumes:
-- Underlay IDPF netdev has a VSI value 10
-- First physical port will have a port ID of 0
-
-```
-# Configuration for control packets between physical port 0 to underlay IDPF netdev VSI-10
- p4rt-ctl add-entry br0 linux_networking_control.handle_rx_from_wire_to_ovs_table "vmeta.common.port_id=0,user_meta.cmeta.bit32_zeros=0,action=linux_networking_control.set_dest(26)"
-
-# Configuration for control packets between underlay IDPF netdev VSI-10 to physical port 0
- p4rt-ctl add-entry br0 linux_networking_control.handle_tx_from_host_to_ovs_and_ovs_to_wire_table "vmeta.common.vsi=10,user_meta.cmeta.bit32_zeros=0,action=linux_networking_control.set_dest(0)"
-```
-
-#### 9. Underlay configuration
-
-Configure underlay IP addresses and add static routes.
-
-The configuration below assumes:
-- Underlay IDPF netdev has a VSI value 10
-
-```
-p4rt-ctl add-entry br0 linux_networking_control.ecmp_lpm_root_lut "user_meta.cmeta.bit32_zeros=4/255.255.255.255,priority=2,action=linux_networking_control.ecmp_lpm_root_lut_action(0)"
-
-nmcli device set <idpf-netdev> managed no
-ifconfig <idpf-netdev> 40.1.1.1/24 up
-ip route show
-ip route change 40.1.1.0/24 via 40.1.1.2 dev <idpf-netdev>
-```
-
-#### 10. Test the ping scenarios
-
- - Ping between VMs on the same host
- - Underlay ping
- - Overlay ping: ping between VMs on different hosts
-
-## Limitations
-
-Current Linux networking support for the networking recipe has the following limitations:
-- All VLAN interfaces created on top of an IDPF netdev should always be in the lowercase format "vlan" + vlan_id
-Ex: vlan1, vlan2, vlan3 ... vlan4094.
-- Set the pipeline before adding the br-int port, the vxlan0 port, and the VLAN ports to the br-int bridge.
-- The VxLAN destination port should always be the standard port, i.e., 4789 (a limitation of the P4 parser).
-- We do not support any ofproto rules that would prevent FDB learning on OvS.
-- VLAN-tagged packets are not supported.
-- For VxLAN tunneled packets, only IPv4-in-IPv4 is supported.
diff --git a/p4src/linux_networking/es2k/topology_linux_networking.PNG b/p4src/linux_networking/es2k/topology_linux_networking.PNG
deleted file mode 100644
index 4a952241..00000000
Binary files a/p4src/linux_networking/es2k/topology_linux_networking.PNG and /dev/null differ