// Module included in the following assemblies:
//
// * telco_ref_design_specs/ran/telco-ran-ref-design-spec.adoc

:_mod-docs-content-type: CONCEPT
[id="telco-core-whats-new-ref-design_{context}"]
= {product-title} {product-version} features for {rds}

The following features, which are included in {product-title} {product-version} and are leveraged by the {rds} reference design specification (RDS), are new or updated.

.New features for {rds} in {product-title} {product-version}
[cols="1,3", options="header"]
|====
|Feature |Description

//CNF-7349 Rootless DPDK pods
|Support for running rootless Data Plane Development Kit (DPDK) workloads with kernel access by using the TAP CNI plugin
a|DPDK applications that inject traffic into the kernel can run in non-privileged pods with the help of the TAP CNI plugin.
An illustrative `NetworkAttachmentDefinition` sketch follows this table.

* link:https://docs.openshift.com/container-platform/4.14/networking/hardware_networks/using-dpdk-and-rdma.html#nw-running-dpdk-rootless-tap_using-dpdk-and-rdma[Using the TAP CNI to run a rootless DPDK workload with kernel access]

//CNF-5977 Better pinning of the networking stack
|Dynamic use of non-reserved CPUs for OVS
a|With this release, the Open vSwitch (OVS) networking stack can dynamically use non-reserved CPUs.
This dynamic use occurs by default in performance-tuned clusters with the CPU manager policy set to `static`, and it maximizes compute resources for OVS and minimizes network latency for workloads during periods of high demand.
OVS cannot use isolated CPUs assigned to containers in `Guaranteed` QoS pods. This separation avoids disrupting critical application workloads.
A minimal `PerformanceProfile` sketch for such a performance-tuned cluster follows this table.

//CNF-7760
|Enabling more control over the C-states for each pod
a|The `PerformanceProfile` custom resource supports `perPodPowerManagement`, which provides more control over the C-states for pods.
Instead of disabling C-states completely, you can now specify a maximum latency in microseconds for C-states in the `cpu-c-states.crio.io` pod annotation.
This helps to optimize power savings for high-priority applications by enabling some of the shallower C-states rather than disabling them all.
A `PerformanceProfile` and pod annotation sketch follows this table.

* link:https://docs.openshift.com/container-platform/4.14/scalability_and_performance/cnf-low-latency-tuning.html#node-tuning-operator-pod-power-saving-config_cnf-master[Optional: Power saving configurations]

//CNF-7741 Permit to disable NUMA Aware scheduling hints based on SR-IOV VFs
|Exclude SR-IOV network topology for NUMA-aware scheduling
a|You can exclude advertising Non-Uniform Memory Access (NUMA) nodes for the SR-IOV network to the Topology Manager.
By not advertising NUMA nodes for the SR-IOV network, you allow more flexible SR-IOV network deployments during NUMA-aware pod scheduling.

For example, in some scenarios you might want more flexibility in how a pod is deployed. By not providing a NUMA node hint to the Topology Manager for the pod's SR-IOV network resource, the Topology Manager can deploy the SR-IOV network resource and the pod CPU and memory resources to different NUMA nodes. In previous {product-title} releases, the Topology Manager attempted to place all resources on the same NUMA node.
A `SriovNetworkNodePolicy` sketch that excludes the NUMA topology follows this table.

* link:https://docs.openshift.com/container-platform/4.14/networking/hardware_networks/configuring-sriov-device.html#nw-sriov-exclude-topology-manager_configuring-sriov-device[Exclude the SR-IOV network topology for NUMA-aware scheduling]

//CNF-8035 MetalLB VRF Egress interface selection with VRFs (Tech Preview)
|Egress service resource to manage egress traffic for pods behind a load balancer (Technology Preview)
a|With this update, you can use an `EgressService` custom resource (CR) to manage egress traffic for pods behind a load balancer service.
An illustrative `EgressService` sketch follows this table.

You can use the `EgressService` CR to manage egress traffic in the following ways:

* Assign the load balancer service's IP address as the source IP address of egress traffic for pods behind the load balancer service.

* Configure the egress traffic for pods behind a load balancer to a different network than the default node network.

* link:https://docs.openshift.com/container-platform/4.14/networking/ovn_kubernetes_network_provider/configuring-egress-traffic-for-vrf-loadbalancer-services.html#configuring-egress-traffic-loadbalancer-services[Configuring an egress service]

|====
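
The following is a minimal sketch of a `NetworkAttachmentDefinition` that uses the TAP CNI plugin, illustrating the rootless DPDK entry in the preceding table. The name, namespace, and the exact CNI configuration fields, such as `selinuxcontext`, are illustrative assumptions; see the linked documentation for the supported configuration.

[source,yaml]
----
# Illustrative NetworkAttachmentDefinition for a TAP interface.
# The name, namespace, and CNI config fields are assumptions; adjust to your environment.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: tap-one                 # hypothetical name
  namespace: example-dpdk       # hypothetical namespace
spec:
  config: '{
    "cniVersion": "0.4.0",
    "name": "tap-one",
    "type": "tap",
    "multiQueue": true,
    "selinuxcontext": "system_u:system_r:container_t:s0"
  }'
----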
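
As a sketch of the performance-tuned precondition mentioned for the dynamic OVS use of non-reserved CPUs, the following `PerformanceProfile` reserves and isolates CPUs, which causes the Node Tuning Operator to configure the `static` CPU manager policy. The profile name, CPU ranges, and node selector are illustrative assumptions.

[source,yaml]
----
# Illustrative performance-tuned configuration.
# CPU ranges and the node selector are assumptions; size them for your hardware.
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: example-performanceprofile   # hypothetical name
spec:
  cpu:
    reserved: "0-3"     # CPUs reserved for housekeeping processes
    isolated: "4-31"    # CPUs available to latency-sensitive workloads
  nodeSelector:
    node-role.kubernetes.io/worker-cnf: ""
----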
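
The following sketch pairs a `PerformanceProfile` workload hint with a pod annotation for C-state control. The `workloadHints` field names and the annotation value format (`max_latency:10us`) are assumptions based on the linked power saving documentation; verify them against that procedure.

[source,yaml]
----
# Illustrative PerformanceProfile excerpt enabling per-pod power management.
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: example-performanceprofile   # hypothetical name
spec:
  workloadHints:
    realTime: true
    highPowerConsumption: false
    perPodPowerManagement: true
---
# Illustrative pod excerpt: cap the allowed C-state exit latency for this pod.
# The annotation value format is an assumption; see the linked documentation.
# Excerpt only; such pods typically also need Guaranteed QoS and the
# performance runtime class for the annotation to take effect.
apiVersion: v1
kind: Pod
metadata:
  name: example-latency-sensitive-pod   # hypothetical name
  annotations:
    cpu-c-states.crio.io: "max_latency:10us"
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest   # hypothetical image
----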
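
The following sketch shows an SR-IOV policy that excludes its NUMA topology from Topology Manager hints. The `excludeTopology` field follows the linked procedure; the policy name, NIC selector, node selector, and VF count are illustrative assumptions.

[source,yaml]
----
# Illustrative SriovNetworkNodePolicy that does not advertise its NUMA node.
# Resource name, NIC selector, and numVfs are assumptions for the example.
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: sriov-numa-0-network          # hypothetical name
  namespace: openshift-sriov-network-operator
spec:
  resourceName: sriovnuma0
  nodeSelector:
    kubernetes.io/hostname: worker-0  # hypothetical node
  numVfs: 8
  deviceType: netdevice
  nicSelector:
    pfNames: ["ens5f1"]               # hypothetical physical function
  excludeTopology: true               # omit NUMA hints for this resource
----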
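
The following sketch shows an `EgressService` CR. The API group, the `sourceIPBy` and `network` fields, and the names are assumptions drawn from the linked egress service documentation; the CR name and namespace are expected to match the load balancer `Service` that it applies to.

[source,yaml]
----
# Illustrative EgressService for a LoadBalancer Service named "example-service".
# API group, field names, and the routing table value are assumptions; verify against the linked docs.
apiVersion: k8s.ovn.org/v1
kind: EgressService
metadata:
  name: example-service        # expected to match the LoadBalancer Service name
  namespace: example-namespace # hypothetical namespace
spec:
  sourceIPBy: "LoadBalancerIP" # use the load balancer IP as the egress source IP
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/worker: ""
  network: "2"                 # hypothetical routing table for a non-default network
----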