Attached file: NetNamespace_N5_N4.pdf

Here is the working net-namespace configuration, followed by the commands that delete the namespaces when testing is done:

# Node 5 root
ip netns list
# Working NetNS config:
ip netns add iperf-server
ip netns add iperf-client
ip link set ens801f1 netns iperf-server
ip link set ens801f0 netns iperf-client
ip netns exec iperf-server ip link set dev lo up
ip netns exec iperf-client ip link set dev lo up
ip netns exec iperf-server ip addr add dev ens801f1 10.10.122.25/24
ip netns exec iperf-client ip addr add dev ens801f0 10.10.124.25/24
ip netns exec iperf-client ip link set dev ens801f0 up
ip netns exec iperf-server ip link set dev ens801f1 up
ip netns exec iperf-server route add default gw 10.10.122.25
ip netns exec iperf-client route add default gw 10.10.124.25
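
Before tearing the namespaces down, a quick sanity check confirms the configuration took effect. This is a minimal sketch; it assumes the path between ens801f0 and ens801f1 routes between the 10.10.122.0/24 and 10.10.124.0/24 subnets:

# Sanity check (assumes the two ports are connected through the test path)
ip netns exec iperf-server ip addr show dev ens801f1
ip netns exec iperf-client ping -c 3 10.10.122.25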

# Delete

ip netns delete iperf-server
ip netns delete iperf-client


Initial results from tests with this configuration (using the iPerf 2 tool) indicate that UDP testing is a more reliable assessment of the calibrated bit rate. After correcting for Ethernet, IP, and UDP header overhead, the values reported below are protocol-payload rates for a single UDP stream or for 3 TCP connections.
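The header-overhead correction itself is straightforward arithmetic. A worked example, assuming a 1500-byte IP MTU and standard Ethernet framing (14-byte header, 4-byte FCS, 8-byte preamble, 12-byte interframe gap):

# On-wire bytes per frame: 1500 + 14 + 4 + 8 + 12 = 1538
# UDP payload per frame:   1500 - 20 (IP) - 8 (UDP) = 1472
echo "scale=4; 1472/1538" | bc    # ~0.957: payload fraction of line rate
echo "10000*1472/1538" | bc       # ~9570 Mbps max UDP payload rate on 10GE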

[Plot: measured payload rates for a single UDP stream and 3 TCP connections]

The value plotted for 3 TCP connections at 10000 Mbps was measured with no tc qdisc configured, using the bare 10GE links. A similar value was obtained with 5 TCP connections.
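
For reference, the runs behind these plots can be reproduced along these lines. This is a sketch only: it assumes the iPerf 2 binary is installed as iperf, and the durations and report intervals are illustrative:

# Server, in its namespace (UDP mode; restart without -u for the TCP runs)
ip netns exec iperf-server iperf -s -u -i 1
# Client: a single UDP stream at the target rate
ip netns exec iperf-client iperf -c 10.10.122.25 -u -b 972000000 -t 30 -i 1
# Client: 3 parallel TCP connections
ip netns exec iperf-client iperf -c 10.10.122.25 -P 3 -t 30 -i 1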

Recent testing (January 24 and 25) has produced two important findings:

  1. iPerf 2 has an unexpected dependency on the units used in the UDP -b (bandwidth #[KM]) option on the Client.
    1. The -b 972000000 and -b 972M options produce different sending rates; -b 972000000 appears to send the target bandwidth correctly (both invocations are sketched after this list).
  2. Neither iPerf 2 nor iPerf 3 can measure one-way or round-trip delay, and delay is both a critical measurement metric and a demanding requirement of most Edge Applications.
    1. The measurement tool "netprobe" can measure both loss and delay, and can supply measurements for every packet received. netprobe's architecture is similar to iPerf's, in that individual test streams are launched from the Client to the Server (running as a daemon, if desired). netprobe is installed on Node 5 at /home/opnfv/netprobe/netprobe2
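
To illustrate finding 1, the two client invocations below request nominally the same bandwidth but produce different sending rates (the server address and duration are assumptions carried over from the configuration above):

# These should be equivalent, but only -b 972000000 matches the target rate
ip netns exec iperf-client iperf -c 10.10.122.25 -u -b 972000000 -t 10 -i 1
ip netns exec iperf-client iperf -c 10.10.122.25 -u -b 972M -t 10 -i 1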

Here are some terminal windows showing netprobe in operation and its results:

[Screenshot: netprobe test streams and their results]

We have also conducted further tests investigating the effect of round-trip (RT) delay on measurement (an impairment which should only affect TCP, of course). This series was run with a 1 Gbps rate limit, implemented as a token bucket filter, on the path to the iPerf 2 Server.
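
A hedged sketch of how such impairments are typically configured with tc (the interface, namespace, burst, and latency values here are assumptions; only the 1 Gbps tbf rate comes from the text above):

# Add RT delay with netem, then chain a 1 Gbps token bucket filter under it
ip netns exec iperf-client tc qdisc add dev ens801f0 root handle 1: netem delay 10ms
ip netns exec iperf-client tc qdisc add dev ens801f0 parent 1:1 handle 10: tbf rate 1gbit burst 128kb latency 4ms
# Inspect and remove when done
ip netns exec iperf-client tc qdisc show dev ens801f0
ip netns exec iperf-client tc qdisc del dev ens801f0 root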

[Plot: iPerf 2 throughput vs. RT delay with the 1 Gbps token bucket filter]

Tests with concurrent iPerf 2 and Netprobe test streams have revealed the expected outcomes (a sketch of such a run follows this list):

  1. Parallel TCP stream measurements are very sensitive to the background traffic: peak capacity is reached only after 10 seconds, and only then do the measurements account for the CBR Netprobe rate.
  2. UDP stream measurements indicate a rate reduced by the equivalent of the CBR Netprobe stream.
  3. The one-way delay measurements possible with Netprobe confirm the delay introduced by the token bucket filter (4 ms max) and illustrate the delay variability present when 3 TCP streams compete for capacity.
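
Such a concurrent run can be reproduced along these lines. This is a sketch only: netprobe's exact command line is not shown here, so an iPerf 2 CBR UDP stream stands in for the Netprobe CBR traffic, and the rates and durations are assumptions:

# Background CBR stream (stand-in for the Netprobe stream), then 3 competing TCP streams
ip netns exec iperf-client iperf -c 10.10.122.25 -u -b 200M -t 60 &
ip netns exec iperf-client iperf -c 10.10.122.25 -P 3 -t 30 -i 1
wait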