Reputation: 31
My application receives data from an NFS client at an NFS server (a user-space NFS server, NFS-Ganesha); once the packets are received at the server, the application processes them and sends them back out.
I am new to DPDK and am analyzing its features to understand how to adapt it to my application, with the goal of improving performance by avoiding some of the data copies between kernel and user space.
KNI looked useful, and after starting the KNI sample application I see the output below. I can also see the new interfaces vEth0_0 and vEth1_0, but I cannot even ping these interfaces after assigning IP addresses to them.
$ ./examples/kni/build/kni -n 4 -c 0xf0 -- -P -p 0x3 --config="(0,4,6,8),(1,5,7,9)"
Checking link status
.done
Port 0 Link Up - speed 10000 Mbps - full-duplex
Port 1 Link Up - speed 10000 Mbps - full-duplex
APP: Lcore 5 is reading from port 1
APP: Lcore 6 is writing to port 0
APP: Lcore 7 is writing to port 1
APP: Lcore 4 is reading from port 0
APP: Configure network interface of 0 up
PMD: ixgbe_set_rx_function(): Vector rx enabled, please make sure RX burst size no less than 32.
APP: Configure network interface of 1 up
PMD: ixgbe_set_rx_function(): Vector rx enabled, please make sure RX burst size no less than 32.
So my questions are: what is the expected output of the KNI sample application in DPDK, and how can I use it for my application? (Can I operate on the vEth0_0 interface directly, so that I avoid the extra kernel/user-space copies?)
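For reference, this is roughly how I configured and tested the interfaces (the addresses are placeholders for the ones I actually used, and the commands were run as root):

# Assign addresses to the KNI interfaces and bring them up
ip addr add 192.168.10.1/24 dev vEth0_0
ip link set vEth0_0 up
ip addr add 192.168.11.1/24 dev vEth1_0
ip link set vEth1_0 up

# From a peer host on the 192.168.10.0/24 network -- no replies come back
ping 192.168.10.1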
Update: the above issue is resolved on the host machine by setting the correct GRUB options: iommu=pt intel_iommu=on.
Question 2: How can I use KNI inside a VM? KNI bring-up inside the VM has issues:
KNI: /dev/kni opened
KNI: Creating kni...
KNI: tx_phys: 0x000000017a6af140, tx_q addr: 0xffff88017a6af140
KNI: rx_phys: 0x000000017a6b1180, rx_q addr: 0xffff88017a6b1180
KNI: alloc_phys: 0x000000017a6b31c0, alloc_q addr: 0xffff88017a6b31c0
KNI: free_phys: 0x000000017a6b5200, free_q addr: 0xffff88017a6b5200
KNI: req_phys: 0x000000017a6b7240, req_q addr: 0xffff88017a6b7240
KNI: resp_phys: 0x000000017a6b9280, resp_q addr: 0xffff88017a6b9280
KNI: mbuf_phys: 0x0000000187c00000, mbuf_kva: 0xffff880187c00000
KNI: mbuf_va: 0x00002aaa32800000
KNI: mbuf_size: 2048
KNI: pci_bus: 00:03:00
KNI: Error: Device not supported by ethtool
Upvotes: 1
Views: 4054
Reputation: 31
The issue with the KNI application is that it does not report errors directly when there are DMA mapping problems; "dmesg" on the host system showed that DMA mapping for the network device had failed.
I then found that the kernel options "iommu=pt intel_iommu=on" are required for the igb_uio driver. After adding them in /etc/default/grub and running "update-grub", I am able to bring up the KNI interface with the igb_uio driver, and the network stack works fine.
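As an illustration, on a Debian/Ubuntu-style system (an assumption; the exact GRUB_CMDLINE_LINUX contents and dmesg wording differ per setup) the steps look roughly like this:

# Look for IOMMU / DMA-mapping faults logged against the NIC
dmesg | grep -i dmar

# /etc/default/grub: append the IOMMU options to the existing kernel command line, e.g.
#   GRUB_CMDLINE_LINUX="... iommu=pt intel_iommu=on"

update-grub
reboot

# After the reboot, confirm the options are on the kernel command line
cat /proc/cmdline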
KNI inside the guest VM is still not working; I am still looking into that.
Upvotes: 2