
...

V2V Scenarios

Summary of V2V Scenarios

Scenarios

Possible Core Allocations
Assumptions: NUMA-0 = cores 0-21, NUMA-1 = cores 22-43; vSwitch core #: 02

Scenario   PMD cores (pmd-cpu-mask)   TGen Ports Info
1          4, 5   (0x30)              2 Virtual Ports, 10G
2          22, 23 (0xC00000)          2 Virtual Ports, 10G
3          4, 22  (0x400010)          2 Virtual Ports, 10G
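
The PMD allocations above correspond to the standard OVS-DPDK other_config keys. A minimal sketch of how they would be applied (the exact OVS invocation used for these tests is not shown on this page, so treat the commands as illustrative):

# Scenario-1: PMD threads on cores 4 and 5 (NUMA-0)
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x30
# Scenario-2: cores 22 and 23 (NUMA-1)
#   ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0xC00000
# Scenario-3: core 4 (NUMA-0) and core 22 (NUMA-1)
#   ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x400010
# The non-PMD vSwitch threads are placed via other_config:dpdk-lcore-mask
# (mask value depending on which core "vSwitch core #: 02" refers to).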



P2P Scenarios

Summary of P2P Scenarios:

...

V2V Scenarios: OVS-PMD and Virtual-Interface Mappings

Virtual Interfaces

Bridge trex_br
    Port trex_br
        Interface trex_br
            type: internal
    Port "dpdkvhostuser3"
        Interface "dpdkvhostuser3"
            type: dpdkvhostuser
    Port "dpdkvhostuser2"
        Interface "dpdkvhostuser2"
            type: dpdkvhostuser
Bridge "int_br0"
    Port "dpdkvhostuser0"
        Interface "dpdkvhostuser0"
            type: dpdkvhostuser
    Port "dpdkvhostuser1"
        Interface "dpdkvhostuser1"
            type: dpdkvhostuser
    Port "int_br0"
        Interface "int_br0"
            type: internal
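
For reference, this bridge/port layout can be reproduced with standard ovs-vsctl commands; the sketch below assumes a DPDK-enabled OVS with dpdk-init already set and the default vhost-user socket directory (details not shown on this page):

# Internal bridge carrying the VNF-facing vhost-user ports
ovs-vsctl add-br int_br0 -- set bridge int_br0 datapath_type=netdev
ovs-vsctl add-port int_br0 dpdkvhostuser0 -- set Interface dpdkvhostuser0 type=dpdkvhostuser
ovs-vsctl add-port int_br0 dpdkvhostuser1 -- set Interface dpdkvhostuser1 type=dpdkvhostuser
# Bridge carrying the traffic-generator-facing vhost-user ports
ovs-vsctl add-br trex_br -- set bridge trex_br datapath_type=netdev
ovs-vsctl add-port trex_br dpdkvhostuser2 -- set Interface dpdkvhostuser2 type=dpdkvhostuser
ovs-vsctl add-port trex_br dpdkvhostuser3 -- set Interface dpdkvhostuser3 type=dpdkvhostuser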

Scenario-1

pmd thread numa_id 0 core_id 4:
isolated : false
port: dpdkvhostuser1 queue-id: 0
port: dpdkvhostuser2 queue-id: 0
pmd thread numa_id 0 core_id 5:
isolated : false
port: dpdkvhostuser0 queue-id: 0
port: dpdkvhostuser3 queue-id: 0

Scenario-2

pmd thread numa_id 1 core_id 22:
isolated : false
port: dpdkvhostuser0 queue-id: 0
port: dpdkvhostuser3 queue-id: 0
pmd thread numa_id 1 core_id 23:
isolated : false
port: dpdkvhostuser1 queue-id: 0
port: dpdkvhostuser2 queue-id: 0

Scenario-3

pmd thread numa_id 0 core_id 4:
isolated : false
port: dpdkvhostuser0 queue-id: 0
port: dpdkvhostuser1 queue-id: 0
port: dpdkvhostuser2 queue-id: 0
port: dpdkvhostuser3 queue-id: 0
pmd thread numa_id 1 core_id 22:
isolated : false
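
The listings above follow the output format of OVS's PMD/rx-queue introspection commands; a minimal way to capture and compare the mapping for each scenario (standard ovs-appctl commands, not specific to this setup):

# Which PMD thread polls each rx queue (as listed per scenario above)
ovs-appctl dpif-netdev/pmd-rxq-show
# Per-PMD packet/cycle statistics, useful to confirm the load split between cores
ovs-appctl dpif-netdev/pmd-stats-clear
ovs-appctl dpif-netdev/pmd-stats-show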


PVP Scenarios: OVS-PMD and Interface (Physical and Virtual) Mappings

Scenarios 1/2/3

pmd thread numa_id 0 core_id 4:
isolated : false
port: dpdkvhostuser1 queue-id: 0
pmd thread numa_id 0 core_id 5:
isolated : false
port: dpdk1 queue-id: 0
pmd thread numa_id 0 core_id 6:
isolated : false
port: dpdk0 queue-id: 0
pmd thread numa_id 0 core_id 7:
isolated : false
port: dpdkvhostuser0 queue-id: 0

Scenario-4

pmd thread numa_id 0 core_id 4:
isolated : false
port: dpdkvhostuser1 queue-id: 0
pmd thread numa_id 0 core_id 5:
isolated : false
port: dpdk0 queue-id: 0
port: dpdkvhostuser0 queue-id: 0
pmd thread numa_id 1 core_id 22:
isolated : false

pmd thread numa_id 1 core_id 23:
isolated : false
port: dpdk1 queue-id: 0

Scenario-5

pmd thread numa_id 0 core_id 4:
isolated : false
port: dpdk0 queue-id: 0
pmd thread numa_id 0 core_id 5:
isolated : false

pmd thread numa_id 1 core_id 22:
isolated : false
port: dpdkvhostuser1 queue-id: 0
pmd thread numa_id 1 core_id 23:
isolated : false
port: dpdk1 queue-id: 0
port: dpdkvhostuser0 queue-id: 0

Scenario-6

pmd thread numa_id 0 core_id 4:
isolated : false
port: dpdkvhostuser1 queue-id: 0
pmd thread numa_id 0 core_id 5:
isolated : false
port: dpdk0 queue-id: 0
port: dpdkvhostuser0 queue-id: 0
pmd thread numa_id 1 core_id 22:
isolated : false

pmd thread numa_id 1 core_id 23:
isolated : false
port: dpdk1 queue-id: 0

Scenarios 7/8/9

pmd thread numa_id 1 core_id 22:
isolated : false
port: dpdkvhostuser1 queue-id: 0
pmd thread numa_id 1 core_id 23:
isolated : false
port: dpdk0 queue-id: 0
pmd thread numa_id 1 core_id 24:
isolated : false
port: dpdkvhostuser0 queue-id: 0
pmd thread numa_id 1 core_id 25:
isolated : false
port: dpdk1 queue-id: 0
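
The physical ports dpdk0/dpdk1 used in the PVP and P2P scenarios are regular OVS-DPDK ports; a minimal sketch of how they could be attached (bridge name and PCI addresses are placeholders, not taken from this page; options:dpdk-devargs requires OVS 2.7 or later, while older releases derive the device from the dpdkN port name):

ovs-vsctl add-port int_br0 dpdk0 -- set Interface dpdk0 type=dpdk options:dpdk-devargs=0000:05:00.0
ovs-vsctl add-port int_br0 dpdk1 -- set Interface dpdk1 type=dpdk options:dpdk-devargs=0000:05:00.1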


P2P Scenarios: OVS-PMD and Physical-Interface Mappings

Scenario-1

pmd thread numa_id 0 core_id 4:
isolated : false
port: dpdk1 queue-id: 0
pmd thread numa_id 0 core_id 5:
isolated : false
port: dpdk0 queue-id: 0

Scenario-2

pmd thread numa_id 1 core_id 22:
isolated : false
port: dpdk0 queue-id: 0
pmd thread numa_id 1 core_id 23:
isolated : false
port: dpdk1 queue-id: 0

Scenario-3

pmd thread numa_id 0 core_id 4:
isolated : false
port: dpdk0 queue-id: 0
port: dpdk1 queue-id: 0
pmd thread numa_id 1 core_id 22:
isolated : false

Scenario-4

pmd thread numa_id 0 core_id 4:
isolated : false
port: dpdk1 queue-id: 0
pmd thread numa_id 0 core_id 5:
isolated : false
port: dpdk0 queue-id: 0

Scenario-5

pmd thread numa_id 1 core_id 22:
isolated : false
port: dpdk0 queue-id: 0
pmd thread numa_id 1 core_id 23:
isolated : false
port: dpdk1 queue-id: 0

Scenario-6

pmd thread numa_id 0 core_id 4:
isolated : false
port: dpdk0 queue-id: 0
pmd thread numa_id 1 core_id 22:
isolated : false
port: dpdk1 queue-id: 0

Scenario-7

pmd thread numa_id 0 core_id 4:
isolated : false
port: dpdk1 queue-id: 0
pmd thread numa_id 0 core_id 5:
isolated : false
port: dpdk0 queue-id: 0

Scenario-8

pmd thread numa_id 1 core_id 22:
isolated : false
port: dpdk0 queue-id: 0
pmd thread numa_id 1 core_id 23:
isolated : false
port: dpdk1 queue-id: 0

Scenario-9

pmd thread numa_id 0 core_id 4:
isolated : false

pmd thread numa_id 1 core_id 22:
isolated : false
port: dpdk0 queue-id: 0
port: dpdk1 queue-id: 0



Possible Variations

  1. Increase the number of CPUs for the VNF to 4.
  2. Phy2Phy case (no VNF).
  3. Try a different forwarding VNF.
  4. Use a different virtual switch (VPP).
  5. RxQ affinity (see the sketch after this list).
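
For variation 5, rx-queue pinning is configured per interface through other_config:pmd-rxq-affinity; a minimal sketch (interface name and core id are illustrative only):

# Pin rx queue 0 of dpdk0 to the PMD running on core 4;
# unpinned queues remain under OVS's automatic assignment.
ovs-vsctl set Interface dpdk0 other_config:pmd-rxq-affinity="0:4"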


Summary of Key Results and Points of Learning

 

  1. Performance degradation due to cross-NUMA-node placement of the NIC, vSwitch, and VNF varies from 50-60% for smaller packet sizes (64, 128, 256 bytes) down to 0-20% for larger packet sizes (> 256 bytes).
  2. The worst performance was observed with the PVP setup in scenarios where all PMD cores and NICs are on the same NUMA node but the VNF cores are spread across NUMA nodes. Hence, VNF cores are best allocated within a single NUMA node. If the VIM prevents VNF instantiation across multiple NUMA nodes, this issue is effectively avoided.
  3. Variations in CPU assignment under P2P setups have no effect on performance for packet sizes above 128 bytes. However, V2V setups do show performance differences even at the larger packet sizes of 512 and 1024 bytes.
  4. Continuous traffic tests and RFC 2544 throughput tests using Binary Search with Loss Verification provide more consistent results across multiple runs. Hence, these methods should be preferred over legacy RFC 2544 methods using plain binary-search or linear-search algorithms.
  5. Having a single PMD core on the NICs' local NUMA node serve multiple interfaces is worse than the cross-NUMA-node performance degradation described above. Hence, it is better to avoid such configurations. For example, if both physical NICs are attached to NUMA node 0 (core ids 0-21), then Configuration-a below leads to poorer performance than Configuration-b:

 

Configuration-a

pmd thread numa_id 0 core_id 4:
isolated : false
port: dpdk0 queue-id: 0
port: dpdk1 queue-id: 0
pmd thread numa_id 1 core_id 22:
isolated : false

Configuration-b

pmd thread numa_id 1 core_id 22:
isolated : false
port: dpdk0 queue-id: 0
pmd thread numa_id 1 core_id 23:
isolated : false
port: dpdk1 queue-id: 0

  6. Under PVP setups and scenarios, the average latencies show exactly opposite patterns for continuous traffic testing and for the RFC 2544 throughput test (with the BSwLV search algorithm): in the RFC 2544 throughput test, average latency is lower for smaller packet sizes and higher for larger packet sizes, whereas continuous traffic testing shows the opposite trend.

Note: For result 6, could this be the result of the continuous traffic testing filling all queues for the duration of the trial? The RFC 2544 Throughput methods (and those of the present document) allow the queues to empty and the DUT to stabilize between trials.


Notes on Documentation

  1. Must view the log files; the qemu threads need to match the intended scenario for the VM.
  2. Christian created the qemu command (and documentation); check this for the VM mapping.
  3. SR: CT's command covers only the host side.
  4. The qemu command-line option -smp 2 should do this (simulate two NUMA nodes); need to see how the VM sees its architecture: numactl -H (see the sketch after this list).
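
For note 4, a minimal qemu sketch (illustrative values only; Christian's actual command is not reproduced on this page). Alongside -smp, explicit -numa node options are what actually expose multiple NUMA nodes to the guest, which numactl -H inside the VM can then confirm:

qemu-system-x86_64 ... \
  -smp 4,sockets=2,cores=2,threads=1 \
  -object memory-backend-ram,id=mem0,size=2G \
  -object memory-backend-ram,id=mem1,size=2G \
  -numa node,nodeid=0,cpus=0-1,memdev=mem0 \
  -numa node,nodeid=1,cpus=2-3,memdev=mem1

# Inside the guest, check the topology the VM actually sees:
numactl -H

Note: for dpdkvhostuser ports the memory backends would have to be file-backed hugepages with share=on.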