This guide describes how to configure the DPDK VHost feature on Ubuntu 16.04 and the steps required to validate VHost statistics. DPDK 16.11 or newer is required to test VHost statistics.
The host configuration consists of DPDK and QEMU setup.
Note: Detailed instructions for building DPDK from sources are provided in the DPDK guide [1].
wget http://fast.dpdk.org/rel/dpdk-16.11.tar.xz
tar -xf dpdk-16.11.tar.xz
cd dpdk-16.11
make config T=x86_64-native-linuxapp-gcc
make -j4
Enable hugepages support in the Linux kernel. To do this, modify the GRUB kernel command line (/etc/default/grub) to contain the following options:
GRUB_CMDLINE_LINUX="... hugepagesz=2M hugepages=4096"
Update the grub config and restart the system:
grub-mkconfig -o /boot/grub/grub.cfg
reboot
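After the reboot, it is worth confirming that the kernel actually reserved the hugepages before going further; a quick check, assuming the standard Linux /proc/meminfo layout:

```shell
# Show the hugepage counters exposed by the kernel; with the GRUB options
# above, HugePages_Total should report 4096 and Hugepagesize 2048 kB.
grep -i '^HugePages\|^Hugepagesize' /proc/meminfo
```

If HugePages_Total is 0, re-check the GRUB command line and regenerate the config before proceeding.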
Load required modules to bind DPDK ports:
modprobe uio
find . -name igb_uio.ko
insmod ./build/kmod/igb_uio.ko
Bind DPDK ports:
./tools/dpdk-devbind.py --status
./tools/dpdk-devbind.py --bind=igb_uio 0000:02:00.0
./tools/dpdk-devbind.py --bind=igb_uio 0000:02:00.1
Run testpmd application with Vhost support:
./build/app/testpmd -c f -n 4 --socket-mem 1024 \
  --vdev 'eth_vhost0,iface=/tmp/sock0,queues=1,client=1' -- -i
# Start packet forwarding
testpmd> start
Create a QEMU image for the guest VM:
qemu-img create -f qcow /var/lib/libvirt/images/ubuntu.qcow 5G
Install Ubuntu 16.04 on the guest machine:
wget http://releases.ubuntu.com/16.04/ubuntu-16.04.1-server-amd64.iso
qemu-system-x86_64 -machine accel=kvm -name Ubuntu -cdrom ubuntu-16.04.1-server-amd64.iso \
  -boot d -hda /var/lib/libvirt/images/ubuntu.qcow -m 1024 -vnc :1
Establish a VNC session (e.g., with a VNC viewer) to the host and follow the regular procedure to install Ubuntu 16.04. Once the installation is finished, stop the VM and boot it from the QEMU image.
Run Ubuntu 16.04 guest with vhost support:
qemu-system-x86_64 -machine accel=kvm -cpu host -m 1024 -smp 2,sockets=2,cores=1,threads=1 \
  -drive file=/var/lib/libvirt/images/ubuntu.qcow,format=qcow,if=none,id=drive-ide0-0-0 \
  -device ide-hd,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1 -vnc 127.0.0.1:1 \
  -object memory-backend-file,id=mem,size=1073741824,mem-path=/dev/hugepages,share=on \
  -mem-prealloc -numa node,memdev=mem \
  -chardev socket,id=charnet1,path=/tmp/sock0,server \
  -netdev type=vhost-user,id=hostnet1,chardev=charnet1 \
  -device virtio-net-pci,netdev=hostnet1,id=net1
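Note that the size of the memory-backend-file object must match the guest memory given to -m; here 1024 MiB is expressed in bytes. A quick sanity check of that arithmetic:

```shell
# 1024 MiB expressed in bytes, as used by the size= parameter above
echo $((1024 * 1024 * 1024))
# → 1073741824
```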
Establish a VNC session (e.g., with a VNC viewer) to the host to connect to the VM.
Note: A detailed description of each of the QEMU options can be found in the QEMU manual (man qemu-doc).
Note: To run the VM under virsh, see VHost VM integration with libvirt for more details.
Once the VM is up and running, the VHost test can be performed. Before taking further steps, please make sure that the Virtio network device is present on the guest machine. This can be done by checking the driver name of each of the Linux interfaces on the guest system (the driver name should be virtio_net):
for port in `ip link show | sed -n 's/^[0-9]\{1,\}:[ ]*\([a-zA-Z0-9_]*\):.*/\1/p'`; do
  ethtool -i $port 2>/dev/null | grep virtio_net
done
# the output of the command should look like this:
# driver: virtio_net
The procedure for building DPDK is the same as described in the Build DPDK from sources section.
Enable hugepages support in the Linux kernel. To do this, modify the GRUB kernel command line (/etc/default/grub) to contain the following options:
GRUB_CMDLINE_LINUX="... hugepagesz=2M hugepages=256"
Update the grub config and restart the system:
sudo grub-mkconfig -o /boot/grub/grub.cfg
sudo reboot
Load required modules to bind DPDK ports:
sudo modprobe uio
find . -name igb_uio.ko
sudo insmod ./build/kmod/igb_uio.ko
Bind DPDK ports (Virtio network device):
./tools/dpdk-devbind.py --status
...
0000:00:05.0 'Virtio network device' if=ens5 drv=virtio-pci unused=igb_uio
...
sudo ./tools/dpdk-devbind.py --bind=igb_uio 0000:00:05.0
Note: if you see an error like this:
-bash: ./tools/dpdk-devbind.py: /usr/bin/python: bad interpreter: No such file or directory
please fix the dpdk-devbind.py script to use python3.
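One way to do that is to rewrite the interpreter line in place with sed. The snippet below demonstrates the edit on a throwaway copy under /tmp (a hypothetical file, used here so the real script is untouched); running the same sed command against tools/dpdk-devbind.py applies the fix:

```shell
# Demonstrate the shebang fix on a scratch copy of the script
printf '#!/usr/bin/python\nprint("hello")\n' > /tmp/dpdk-devbind-demo.py
# Point the interpreter line at python3 instead of the missing python
sed -i '1s|^#!.*|#!/usr/bin/env python3|' /tmp/dpdk-devbind-demo.py
head -1 /tmp/dpdk-devbind-demo.py
# → #!/usr/bin/env python3
```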
Run testpmd application:
sudo ./build/app/testpmd -c 3 -- -i
# Configure testpmd to send 32 (default) pkts of length 70
testpmd> set fwd rxonly
testpmd> set txpkts 70
testpmd> start tx_first
Check the statistics of the VHost port on the host. Usually, this is port 0 in testpmd. The number of packets received on the port should be 32 (the same number as sent from the VM).
testpmd> show port xstats 0
###### NIC extended statistics for port 0
rx_good_packets: 32
tx_good_packets: 0
rx_good_bytes: 2240
tx_good_bytes: 0
rx_errors: 0
tx_errors: 0
rx_mbuf_allocation_errors: 0
rx_q0packets: 32
rx_q0bytes: 2240
rx_q0errors: 0
tx_q0packets: 0
tx_q0bytes: 0
rx_good_packets: 32
rx_total_bytes: 2240
rx_missed_pkts: 0
rx_broadcast_packets: 0
rx_multicast_packets: 0
rx_unicast_packets: 32
rx_undersize_packets: 0
rx_size_64_packets: 0
rx_size_65_to_127_packets: 32
rx_size_128_to_255_packets: 0
rx_size_256_to_511_packets: 0
rx_size_512_to_1023_packets: 0
rx_size_1024_to_1522_packets: 0
rx_size_1523_to_max_packets: 0
rx_errors_with_bad_CRC: 0
rx_fragmented_errors: 0
rx_jabber_errors: 0
rx_unknown_protos_packets: 0
tx_good_packets: 0
tx_total_bytes: 0
tx_missed_pkts: 0
tx_broadcast_packets: 0
tx_multicast_packets: 0
tx_unicast_packets: 0
tx_undersize_packets: 0
tx_size_64_packets: 0
tx_size_65_to_127_packets: 0
tx_size_128_to_255_packets: 0
tx_size_256_to_511_packets: 0
tx_size_512_to_1023_packets: 0
tx_size_1024_to_1522_packets: 0
tx_size_1523_to_max_packets: 0
tx_errors_with_bad_CRC: 0
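When this check needs to be automated, the relevant counter can be pulled out of captured testpmd output with standard text tools. A minimal sketch, assuming the xstats output above has been saved to a file (/tmp/xstats.txt is a hypothetical capture; only a few sample lines are reproduced here):

```shell
# Save a fragment of the xstats output shown above to a scratch file
cat > /tmp/xstats.txt <<'EOF'
rx_good_packets: 32
tx_good_packets: 0
rx_good_bytes: 2240
EOF
# Extract rx_good_packets and compare it with the 32 packets sent from the VM
rx=$(awk -F': ' '/^rx_good_packets/ {print $2; exit}' /tmp/xstats.txt)
[ "$rx" -eq 32 ] && echo "PASS: received $rx packets"
# → PASS: received 32 packets
```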
To run QEMU VHost VM using libvirt the following XML domain description is required:
<domain type='kvm'>
  <name>VHost</name>
  <memory unit='KiB'>1048576</memory>
  <memoryBacking>
    <hugepages>
      <page size='2048' unit='KiB'/>
    </hugepages>
  </memoryBacking>
  <vcpu placement='static'>2</vcpu>
  <os>
    <type arch='x86_64' machine='pc-i440fx-2.5'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <cpu mode='host-passthrough'>
    <numa>
      <cell id='0' cpus='0-1' memory='1048576' unit='KiB' memAccess='shared'/>
    </numa>
  </cpu>
  <clock offset='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/bin/kvm-spice</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow'/>
      <source file='/var/lib/libvirt/images/ubuntu.qcow'/>
      <target dev='hda' bus='ide'/>
    </disk>
    <interface type='network'>
      <source network='default'/>
      <model type='rtl8139'/>
    </interface>
    <interface type='vhostuser'>
      <source type='unix' path='/tmp/sock0' mode='server'/>
      <model type='virtio'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='vnc' port='-1' autoport='yes'/>
    <video>
      <model type='cirrus' vram='16384' heads='1'/>
    </video>
  </devices>
</domain>
To run the VM using virsh, perform the following commands on a host:
# store the VM description
vim /etc/libvirt/qemu/VHost.xml
# define and run the VHost VM
virsh define /etc/libvirt/qemu/VHost.xml
virsh start VHost