Reputation: 11
I'm using a Mellanox ConnectX-5 100G NIC to run my application with IPv4 jumbo frames (9000 bytes). I'm able to send jumbo frames, but I'm not able to receive them on the RX side. I'm using DPDK 21.11.3.
This is my ethtool -i command output for the NIC:
driver: mlx5_core
version: 5.4-3.4.0
firmware-version: 16.33.1048 (NXE0000000005)
expansion-rom-version:
bus-info: 0000:65:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: yes
This is my port configuration with the RX and TX offloads:
const struct rte_eth_conf default_port_conf = {
    .rxmode = {
        .split_hdr_size = 0,
        .max_lro_pkt_size = 9614, // this is for DPDK 21.11.3
    },
    .txmode = {
        .mq_mode = ETH_MQ_TX_NONE,
        .offloads = (DEV_TX_OFFLOAD_IPV4_CKSUM |
                     DEV_TX_OFFLOAD_UDP_CKSUM |
                     DEV_TX_OFFLOAD_TCP_CKSUM |
                     DEV_TX_OFFLOAD_MULTI_SEGS),
    },
};
This is the command I used to run my application:
./application -c 0xfffefffe -n 8 -a 0000:65:00.0,mprq_en=1,rxqs_min_mprq=1,mprq_log_stride_num=9,txq_inline_mpw=128 -a 0000:65:00.1,mprq_en=1,rxqs_min_mprq=1,mprq_log_stride_num=9,txq_inline_mpw=128 -a 0000:66:00.0,mprq_en=1,rxqs_min_mprq=1,mprq_log_stride_num=9,txq_inline_mpw=128 -a 0000:66:00.1,mprq_en=1,rxqs_min_mprq=1,mprq_log_stride_num=9,txq_inline_mpw=128 -d librte_mempool_ring.so -d librte_net_mlx5.so -- -m [2:3].0,[4:5].1,[6:7].2,[8:9].3,
When I run this command, I see this log in the console:
mlx5_net: Port 0 MPRQ is requested but cannot be enabled
(requested: pkt_sz = 1646, desc_num = 256, rxq_num = 1, stride_sz = 1, stride_num = 512
supported: min_rxqs_num = 1, min_buf_wqe_sz = 16384 min_stride_sz = 64, max_stride_sz = 8192).
Rx segment is not enable.
I followed this reference link, but I'm facing the same issue (not able to receive jumbo frames on the RX side):
ConnectX-5 DPDK performance degradation when enabling jumbo, scatter and multi segs
After changing mprq_log_stride_num=9 to mprq_log_stride_num=7, I no longer see the log message below, meaning the driver now accepts the MPRQ request:
mlx5_net: Port 0 MPRQ is requested but cannot be enabled
(requested: pkt_sz = 1646, desc_num = 256, rxq_num = 1, stride_sz = 1, stride_num = 512
supported: min_rxqs_num = 1, min_buf_wqe_sz = 16384 min_stride_sz = 64, max_stride_sz = 8192).
Rx segment is not enable.
These are the test cases I verified by running the application.
Test case 1:
./application -c 0xfffefffe -n 8 -a 0000:65:00.0,mprq_en=1,rxqs_min_mprq=1,mprq_log_stride_num=7,txq_inline_mpw=128 -a 0000:65:00.1,mprq_en=1,rxqs_min_mprq=1,mprq_log_stride_num=7,txq_inline_mpw=128 -a 0000:66:00.0,mprq_en=1,rxqs_min_mprq=1,mprq_log_stride_num=7,txq_inline_mpw=128 -a 0000:66:00.1,mprq_en=1,rxqs_min_mprq=1,mprq_log_stride_num=7,txq_inline_mpw=128 -d librte_mempool_ring.so -d librte_net_mlx5.so -- -m [2:3].0,[4:5].1,[6:7].2,[8:9].3,
I also verified adding mprq_log_stride_size=11 (for all devices listed; in my case, 4 ports):
Test case 2:
./application -c 0xfffefffe -n 8 -a 0000:65:00.0,mprq_en=1,rxqs_min_mprq=1,mprq_log_stride_num=7,txq_inline_mpw=128,mprq_log_stride_size=11 -a 0000:65:00.1,mprq_en=1,rxqs_min_mprq=1,mprq_log_stride_num=7,txq_inline_mpw=128,mprq_log_stride_size=11 -a 0000:66:00.0,mprq_en=1,rxqs_min_mprq=1,mprq_log_stride_num=7,txq_inline_mpw=128,mprq_log_stride_size=11 -a 0000:66:00.1,mprq_en=1,rxqs_min_mprq=1,mprq_log_stride_num=7,txq_inline_mpw=128,mprq_log_stride_size=11 -d librte_mempool_ring.so -d librte_net_mlx5.so -- -m [2:3].0,[4:5].1,[6:7].2,[8:9].3,
It was a port configuration issue on my side. After adding .offloads = (RTE_ETH_RX_OFFLOAD_SCATTER) to the rxmode offloads, I'm able to receive jumbo frames on the RX side. Here are the command used to run the application and the port configuration:
./application -c 0xfffefffe -n 8 -a 0000:65:00.0,mprq_en=1,rxqs_min_mprq=1 -a 0000:65:00.1,mprq_en=1,rxqs_min_mprq=1 -a 0000:66:00.0,mprq_en=1,rxqs_min_mprq=1 -a 0000:66:00.1,mprq_en=1,rxqs_min_mprq=1 -d librte_mempool_ring.so -d librte_net_mlx5.so -- -m [2:3].0,[4:5].1,[6:7].2,[8:9].3,
const struct rte_eth_conf default_port_conf = {
    .rxmode = {
        .split_hdr_size = 0,
        .max_lro_pkt_size = 9614, // this is for DPDK 21.11.3
        .offloads = (RTE_ETH_RX_OFFLOAD_SCATTER),
    },
    .txmode = {
        .mq_mode = ETH_MQ_TX_NONE,
        .offloads = (DEV_TX_OFFLOAD_IPV4_CKSUM |
                     DEV_TX_OFFLOAD_UDP_CKSUM |
                     DEV_TX_OFFLOAD_TCP_CKSUM |
                     DEV_TX_OFFLOAD_MULTI_SEGS),
    },
};
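For reference, here is a minimal sketch (not my exact init code; the function name and port_id are placeholders) showing how the SCATTER offload can be checked against the device capabilities before it is requested:

#include <errno.h>
#include <rte_ethdev.h>

/* Return 0 if the port reports the Rx SCATTER offload, an error code otherwise. */
static int check_rx_scatter(uint16_t port_id)
{
    struct rte_eth_dev_info dev_info;
    int ret = rte_eth_dev_info_get(port_id, &dev_info);

    if (ret != 0)
        return ret;                 /* could not query the port */
    if ((dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_SCATTER) == 0)
        return -ENOTSUP;            /* PMD does not report Rx SCATTER */
    return 0;
}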
Thanks a lot for the quick response; I was able to fix the issue.
Upvotes: 0
Views: 424
Reputation: 366
The application should explicitly set the MTU by means of an rte_eth_dev_set_mtu invocation. That should be done right after configuring the adapter with rte_eth_dev_configure.
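A minimal sketch of that ordering, assuming a 9000-byte MTU and the OP's default_port_conf (the function name and queue counts are illustrative, not taken from the OP's application):

#include <rte_ethdev.h>

static int configure_and_set_mtu(uint16_t port_id, const struct rte_eth_conf *conf)
{
    int ret = rte_eth_dev_configure(port_id, 1 /* Rx queues */, 1 /* Tx queues */, conf);

    if (ret != 0)
        return ret;
    /* Set the jumbo MTU right after configure and before
     * rte_eth_rx_queue_setup(), so the PMD can size Rx resources for it. */
    return rte_eth_dev_set_mtu(port_id, 9000);
}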
For the vendor in question, rx_burst_vec is enabled by default, which, according to the documentation, prevents the use of large MTU. In order to enable large MTU support, one should pass the device argument rx_vec_en=0 (for example, -a 0000:65:00.0,rx_vec_en=0).
UPDATE 1
One needs to pass rx_vec_en=0 for every device in the application command line, not just for 0000:65:00.0. The argument should be passed via the same -a options that are already present in the OP, not via additional ones.
UPDATE 2
Since the OP requests MPRQ, support for large MTU should be available without the need to specify rx_vec_en=0. However, according to the log, the driver does not enable it.
As follows from the comment in the driver, the number of Rx queue descriptors has to be greater than the number of strides. The number of strides in the OP is 512, whilst the number of descriptors is 256. In order to fix that, one has to either change mprq_log_stride_num=9 to mprq_log_stride_num=7 (for all devices listed) or modify the application to request 1024 descriptors per Rx queue.
Additionally, in order to make sure the stride size is correct, one may try to explicitly pass the argument mprq_log_stride_size=11 (for all devices listed).
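A sketch of the second option, requesting 1024 Rx descriptors per queue so that the descriptor count exceeds the 512 strides implied by mprq_log_stride_num=9 (queue index, socket and mempool are placeholders):

#include <rte_ethdev.h>
#include <rte_lcore.h>

static int setup_rx_queue_jumbo(uint16_t port_id, struct rte_mempool *mbuf_pool)
{
    /* 1024 descriptors per Rx queue: greater than the 512 strides
     * requested via mprq_log_stride_num=9. */
    const uint16_t nb_rx_desc = 1024;

    return rte_eth_rx_queue_setup(port_id, 0 /* queue index */, nb_rx_desc,
                                  rte_socket_id(), NULL /* default rxconf */,
                                  mbuf_pool);
}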
UPDATE 3
Since the target MTU value is used for driver-internal calculations during Rx queue setup, one has to make sure that rte_eth_dev_set_mtu is invoked beforehand (that is, right after rte_eth_dev_configure).
UPDATE 4
Theoretically, in order for .max_lro_pkt_size = 9614 to work, the application also has to indicate .offloads = RTE_ETH_RX_OFFLOAD_TCP_LRO. That being said, TCP LRO seems to be rather unrelated to the task of receiving IPv4 jumbo frames. Instead, one may consider increasing the mbuf size of the mempool used for the Rx queue so that it is large enough compared with the MTU.
Alternatively, one should be able to use the "scatter" feature: .offloads = RTE_ETH_RX_OFFLOAD_SCATTER. In this case, the mbuf size should be a don't-care.
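A sketch of a mempool whose mbuf data room covers a 9000-byte MTU frame without relying on scatter (the pool name and element counts are illustrative):

#include <rte_lcore.h>
#include <rte_mbuf.h>

static struct rte_mempool *create_jumbo_pool(void)
{
    /* Data room sized for the 9000-byte MTU plus Ethernet overhead
     * and the standard mbuf headroom, so each frame fits in one mbuf. */
    return rte_pktmbuf_pool_create("mbuf_pool_jumbo",
                                   8192,   /* number of mbufs */
                                   256,    /* per-lcore cache size */
                                   0,      /* private area size */
                                   9000 + 128 + RTE_PKTMBUF_HEADROOM,
                                   rte_socket_id());
}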
Upvotes: 1