Reputation: 21
I use testpmd in a QEMU guest to test the VF performance of a ConnectX-6 NIC (100 Gbps), with pktgen running on another host to send traffic to it. However, the pps measured by testpmd stays at a low, roughly fixed value (~0.9 Mpps) no matter what frame size I send (64B to 1518B). This is far below the expected line rate (~8 Mpps for 1518B frames up to ~148 Mpps for 64B frames).
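For reference, the expected range quoted above follows directly from Ethernet wire overhead: each frame occupies 20 extra bytes on the wire (7B preamble + 1B start-of-frame delimiter + 12B inter-frame gap). A quick sketch of the line-rate math, assuming a 100 Gbps link as in the question:

```python
# Theoretical maximum packet rate on a 100 Gbps Ethernet link.
LINE_RATE_BPS = 100e9  # 100 Gbps link speed (from the question)
WIRE_OVERHEAD = 20     # preamble + SFD + inter-frame gap, in bytes

def max_pps(frame_bytes: int) -> float:
    """Line-rate packets per second for a given Ethernet frame size."""
    return LINE_RATE_BPS / ((frame_bytes + WIRE_OVERHEAD) * 8)

print(f"64B frames:   {max_pps(64) / 1e6:.1f} Mpps")    # ~148.8 Mpps
print(f"1518B frames: {max_pps(1518) / 1e6:.1f} Mpps")  # ~8.1 Mpps
```

So a result that is pinned near 0.9 Mpps at every frame size is two orders of magnitude off at 64B and roughly 9x off even at 1518B.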
These are my QEMU and testpmd commands. I also tried changing the number of cores and queues used by testpmd, but that didn't help either.
sudo $qemu_path/qemu-system-x86_64 -cpu host \
-hda ${img}/redhat-9-3.img \
-m 4G -enable-kvm -smp 8 -boot c \
-object memory-backend-file,id=mem1,size=4096M,mem-path=/dev/hugepages,share=on \
-numa node,memdev=mem1 \
-netdev user,id=ur0 \
-device virtio-net-pci,netdev=ur0,addr=11 \
-chardev socket,id=char0,path=/tmp/vdpa-socket0 \
-netdev type=vhost-user,id=mynet0,chardev=char0 \
-device virtio-net-pci,netdev=mynet0 \
-monitor telnet::52000,server,nowait \
-vnc 0.0.0.0:11 &
sudo ./dpdk-testpmd -l 0-4 -n 4 -a 00:03.0 -- --nb-cores=1 --burst=64 -i --rxq=1 --txq=1 --rxd=1024 --txd=1024
Here is some data from the testpmd runs:
64B:
Throughput (since last show)
Rx-pps: 922446 Rx-bps: 442774552
Rx-pps: 922446 Rx-bps: 442774552
128B:
Throughput (since last show)
Rx-pps: 900002 Rx-bps: 892802528
Rx-pps: 900011 Rx-bps: 892802528
256B:
Throughput (since last show)
Rx-pps: 853477 Rx-bps: 1720610528
Rx-pps: 853477 Rx-bps: 1720610528
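The counters above are at least internally consistent: dividing Rx-bps by Rx-pps recovers the byte count testpmd sees per packet. A small check, using the 64B figures from the output above:

```python
# Sanity-check the testpmd counters: Rx-bps / (Rx-pps * 8) gives the
# per-packet byte count as testpmd accounts it.
rx_pps = 922_446     # from the 64B run above
rx_bps = 442_774_552

bytes_per_pkt = rx_bps / (rx_pps * 8)
print(f"~{bytes_per_pkt:.0f} bytes/packet")  # ~60 B, i.e. 64B frame minus 4B FCS
```

That the pps barely moves between 64B and 256B suggests the bottleneck is per-packet processing cost (likely in the vhost-user datapath) rather than bandwidth.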
Additionally, when using iperf3 I can reach comparatively high speeds:
[ ID] Interval Transfer Bitrate
[ 5] 0.00-20.04 sec 16.2 GBytes 6.94 Gbits/sec receiver
[ 8] 0.00-20.04 sec 16.5 GBytes 7.08 Gbits/sec receiver
[ 10] 0.00-20.04 sec 14.4 GBytes 6.16 Gbits/sec receiver
[ 12] 0.00-20.04 sec 14.9 GBytes 6.37 Gbits/sec receiver
[SUM] 0.00-20.04 sec 61.9 GBytes 26.5 Gbits/sec receiver
-----------------------------------------------------------
Upvotes: 1
Views: 41