hudac

Reputation: 2798

ConnectX-5 DPDK performance degradation when enabling jumbo, scatter and multi segs

I’m using a ConnectX-5 NIC.
I have a DPDK application in which I want to support jumbo packets.
To do that I add the RX offload capabilities DEV_RX_OFFLOAD_JUMBO_FRAME and DEV_RX_OFFLOAD_SCATTER,
and the TX offload capability DEV_TX_OFFLOAD_MULTI_SEGS.
I also increase max_rx_pkt_len so the port will accept jumbo packets (9k).
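
For context, this is roughly the mempool setup that goes with it, as a minimal sketch against the DPDK 19.11 API (the pool name and sizes are illustrative, not my exact values):

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define JUMBO_PKT_LEN 9614   /* the 9k limit used for max_rx_pkt_len */

/* Illustrative mbuf pool whose data room can hold a whole jumbo frame in one
 * segment; with the default ~2KB data room the PMD chains mbufs instead,
 * which is what DEV_RX_OFFLOAD_SCATTER / DEV_TX_OFFLOAD_MULTI_SEGS allow. */
static struct rte_mempool *
create_jumbo_pool(uint16_t port_id)
{
        return rte_pktmbuf_pool_create("jumbo_pool",    /* name: illustrative */
                        8192,                           /* mbuf count: illustrative */
                        256,                            /* per-core cache */
                        0,                              /* private data size */
                        JUMBO_PKT_LEN + RTE_PKTMBUF_HEADROOM,
                        rte_eth_dev_socket_id(port_id));
}
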
I’ve noticed that adding these offload capabilities + increasing the max_rx_pkt_len harms performance.
For example, without those offload flags I can redirect 80 Gbps of 512-byte packets without any drops.
With those flags enabled, that drops to ~55 Gbps without losses.
I’m using DPDK 19.11.6.
Currently in this test I do not send jumbo packets; I just want to understand how enabling these capabilities affects performance for average-sized packets.

Is this expected? Should using those offload flags degrade performance?
Thanks

-- Edit --

Port configuration:

const struct rte_eth_conf port_conf = {
        .rxmode = {
                .split_hdr_size = 0,
                .offloads = DEV_RX_OFFLOAD_JUMBO_FRAME | DEV_RX_OFFLOAD_SCATTER,
                .max_rx_pkt_len = 9614,
        },
        .txmode = {
                .mq_mode = ETH_MQ_TX_NONE,
                .offloads = DEV_TX_OFFLOAD_MULTI_SEGS
        },
        .intr_conf.lsc = 0
};
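
This is roughly how the configuration is applied, again as a sketch against the 19.11 API (queue counts, descriptor counts and the error handling are placeholders, not my real setup):

#include <rte_ethdev.h>

/* Illustrative: keep only the offloads the port reports, then configure
 * one RX and one TX queue and start the port. */
static int
configure_port(uint16_t port_id, struct rte_mempool *pool)
{
        struct rte_eth_dev_info dev_info;
        struct rte_eth_conf conf = port_conf;   /* the struct shown above */
        int ret;

        rte_eth_dev_info_get(port_id, &dev_info);
        conf.rxmode.offloads &= dev_info.rx_offload_capa;
        conf.txmode.offloads &= dev_info.tx_offload_capa;

        ret = rte_eth_dev_configure(port_id, 1 /* rx queues */, 1 /* tx queues */, &conf);
        if (ret != 0)
                return ret;

        ret = rte_eth_rx_queue_setup(port_id, 0, 1024,
                        rte_eth_dev_socket_id(port_id), NULL, pool);
        if (ret != 0)
                return ret;

        ret = rte_eth_tx_queue_setup(port_id, 0, 1024,
                        rte_eth_dev_socket_id(port_id), NULL);
        if (ret != 0)
                return ret;

        return rte_eth_dev_start(port_id);
}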

-- Further information --

# lspci | grep Mellanox
37:00.0 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5]
37:00.1 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5]

# lspci -vv -s 37:00.0 | grep "Part number" -A 3
            [PN] Part number: P23842-001
            [EC] Engineering changes: A6
            [SN] Serial number: IL201202BS
            [V0] Vendor specific: PCIe EDR x16 25W

Upvotes: 1

Views: 1481

Answers (1)

Vipin Varghese

Reputation: 4798

With DPDK 21.11.0 LTS using MLX5, I am able to send and receive 8000B packets at line rate using a single RX queue and 1 CPU core. For smaller packet sizes like 64B, 128B, 256B and 512B (with jumbo frame support enabled) I also reach line rate with the right configuration. Hence I highly recommend moving to DPDK 21.11 or the latest LTS drop, as it contains fixes and updates for the MLX5 NIC that DPDK 19.11.6 may be missing (there has been rework of the MPRQ and vector code in mlx5).

[Figure: packet throughput with 1 RX and 1 TX queue]

Note: With an MLX5-supported NIC, I am only able to RX and TX jumbo frames using the following devargs: mprq_en=1,rxqs_min_mprq=1,mprq_log_stride_num=9,txq_inline_mpw=128
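
For example, with the PCI address from the question, these devargs are passed on the EAL allow list; the binary path, core list and port mask here are only placeholders:

./build/examples/dpdk-l2fwd -l 1 -n 4 \
    -a 0000:37:00.0,mprq_en=1,rxqs_min_mprq=1,mprq_log_stride_num=9,txq_inline_mpw=128 \
    -- -p 0x1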

Steps to follow:

  1. Download DPDK using wget http://fast.dpdk.org/rel/dpdk-21.11.tar.xz

  2. Build DPDK using tar xvf dpdk-21.11.tar.xz; meson build; ninja -C build install; ldconfig

  3. Modify DPDK example code such as l2fwd to support jumbo frames (refer to the code below)

  4. Enable an MTU of at least 9000B on the NIC (in the case of an MLX NIC, since the PMD does not take exclusive ownership of the device and works alongside the kernel driver, also change the MTU on the kernel netdev with ifconfig or the ip command; see the example after this list)

  5. Use Ixia, Spirent, packETH, Ostinato, Xena or dpdk-pktgen to send jumbo frames
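
Example of the MTU change in step 4 (the kernel interface name is only a placeholder for whatever netdev belongs to the ConnectX-5 port):

ip link set dev enp55s0f0 mtu 9000
ip link show dev enp55s0f0 | grep mtu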

l2fwd modification to support JUMBO in DPDK 21.11.0:

static struct rte_eth_conf port_conf = {
        .rxmode = {
                .max_lro_pkt_size = 9000,
                .split_hdr_size = 0,
        },
        .txmode = {
                .mq_mode = ETH_MQ_TX_NONE,
                .offloads = (DEV_TX_OFFLOAD_IPV4_CKSUM |
                             DEV_TX_OFFLOAD_MULTI_SEGS),
        },
};

Upvotes: 1
