Anuket Project


Introduction

As VSPERF methods are applied to systems in Edge Cloud or Edge Compute use cases, we will begin to see overlap with methods that have been employed for Internet access speed and performance testing. Many methods have been used for Internet performance measurement, and some general approaches were successful when access speeds were below 50 Mbps. However, no widely accepted standard for metrics and methods has emerged, and the challenge for Internet methods now is to make measurements when access speeds exceed 900 Mbps, while latency is also a critical benchmark for Edge technologies (5G and its applications). VSPERF methods routinely work at very high speed and low latency. It is worthwhile for us to compare VSPERF methods with those that use TCP or stateful traffic over multiple simultaneous connections (to account for TCP flow control), because this comparison will inevitably occur in deployed systems, and will be made by others without our benchmarking background. It is also valuable to calibrate our current methods against known capacity limits, which we can implement using tc qdisc on non-DPDK interfaces.
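A known capacity limit of this kind can be imposed with the kernel's Token Bucket Filter qdisc. The sketch below is illustrative only: the interface name, rate, and buffer parameters are placeholders, not the exact values used on the pod, and the commands require root privileges on a non-DPDK (kernel-driver) interface.

```shell
# Calibration sketch (assumed interface name and parameters):
# cap egress on eth1 to a known 500 Mbit/s so tool results can be
# checked against a known ceiling.
tc qdisc add dev eth1 root tbf rate 500mbit burst 64kb latency 50ms

# Verify the qdisc is installed and inspect its parameters.
tc qdisc show dev eth1

# Remove the limit to return to native line rate.
tc qdisc del dev eth1 root
```

The `burst` and `latency` values bound the token bucket depth and queue delay; they may need tuning per link speed so the shaper itself does not become the bottleneck.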

Development and Configuration

In order to use different, non-DPDK test tools with Sender and Receiver on a single Node, we determined a Network Namespace configuration that isolates the Client and Server processes and assigns a single external NIC port to each for their use. This is necessary to prevent the host from recognizing that both interface ports are local and routing traffic through the kernel instead of over the external ports. The Figure below illustrates the configuration used between Nodes 4 and 5 on Intel Pod 12.
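The isolation can be sketched with `ip netns` as below. The namespace name, interface names, and addresses are placeholders, not the exact values from Pod 12; the commands require root. Because the server's port lives in its own namespace with its own network stack, the kernel cannot short-circuit the traffic internally, and packets must traverse the external cable between the two ports.

```shell
# Sketch (assumed names/addresses): move one NIC port into a
# dedicated namespace for the server side.
ip netns add vsperf-server
ip link set eth2 netns vsperf-server
ip netns exec vsperf-server ip addr add 192.168.1.2/24 dev eth2
ip netns exec vsperf-server ip link set eth2 up

# Client side uses the other port in the default namespace.
ip addr add 192.168.1.1/24 dev eth3
ip link set eth3 up

# Run the server inside the namespace; client runs normally.
ip netns exec vsperf-server iperf -s -u
iperf -c 192.168.1.2 -u -b 1000M
```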

Initial results for tests with this configuration (using the iPerf 2 tool) indicate that UDP testing gives a more reliable assessment of the calibrated bit rate. After correcting for ETH, IP, and UDP header overhead, the values reported below are protocol payload rates for a single UDP stream or for 3 TCP connections.
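The header-overhead correction can be made explicit. A small sketch, assuming iPerf 2's default 1470-byte UDP datagram and standard ETH/IPv4/UDP header sizes plus Layer-1 framing (preamble and inter-frame gap); the function name is ours, not part of any tool.

```python
# Estimate the UDP payload rate implied by a given Ethernet line rate.
# Assumes iPerf 2's default 1470-byte datagram; header sizes are the
# standard values, plus L1 framing overhead per frame.

PREAMBLE_SFD = 8   # bytes on the wire before each frame
ETH_HEADER = 14    # dst MAC + src MAC + EtherType
FCS = 4            # frame check sequence
IFG = 12           # minimum inter-frame gap
IP_HEADER = 20     # IPv4, no options
UDP_HEADER = 8

def udp_payload_rate(line_rate_mbps, payload=1470):
    """Payload throughput (Mbps) for back-to-back frames at line rate."""
    wire_bytes = (payload + UDP_HEADER + IP_HEADER + ETH_HEADER
                  + FCS + PREAMBLE_SFD + IFG)
    return line_rate_mbps * payload / wire_bytes

# On a 10GE link, the payload ceiling is about 9570 Mbps.
print(round(udp_payload_rate(10000), 1))
```

Measured UDP payload rates close to this ceiling (or to the corresponding fraction of a tc qdisc limit) indicate the tool, not the path, is being assessed correctly.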

The value plotted for 3 TCP connections at 10000 Mbps uses no tc qdisc rate limit, just the native 10GE links. A similar value was obtained with 5 TCP connections.

