How to balance the load across all NIC queues in a DPDK app

I have a problem with the testpmd app from DPDK. I am running it on:

Ubuntu 20.04, with 2 VMXNET3 NICs bound to DPDK

I start the program with this command:

sudo ./dpdk-testpmd -l 0-7 -n 4 -- -i --portmask=0x1 --nb-cores=7 --txq=8 --rxq=8 --portlist=1,0

Then, in the testpmd shell, I start forwarding with the start command.

Then I check the statistics:

testpmd> show port stats all

  ######################## NIC statistics for port 0  ########################
  RX-packets: 123137     RX-missed: 1493       RX-bytes:  22110469
  RX-errors: 0
  RX-nombuf:  0
  TX-packets: 171        TX-errors: 0          TX-bytes:  30480

  Throughput (since last show)
  Rx-pps:          124          Rx-bps:       180136
  Tx-pps:            0          Tx-bps:          248
  ############################################################################

  ######################## NIC statistics for port 1  ########################
  RX-packets: 171        RX-missed: 0          RX-bytes:  30480
  RX-errors: 0
  RX-nombuf:  0
  TX-packets: 122461     TX-errors: 0          TX-bytes:  21994950

  Throughput (since last show)
  Rx-pps:            0          Rx-bps:          248
  Tx-pps:          124          Tx-bps:       179336
  ############################################################################

Then I check the extended (per-queue) statistics:

###### NIC extended statistics for port 0
rx_good_packets: 123137
tx_good_packets: 171
rx_good_bytes: 22110469
tx_good_bytes: 30480
rx_missed_errors: 13180
rx_errors: 0
tx_errors: 0
rx_mbuf_allocation_errors: 0
rx_q0_packets: 123137
rx_q0_bytes: 22110469
rx_q0_errors: 0
rx_q1_packets: 0
rx_q1_bytes: 0
rx_q1_errors: 0
rx_q2_packets: 0
rx_q2_bytes: 0
rx_q2_errors: 0
rx_q3_packets: 0
rx_q3_bytes: 0
rx_q3_errors: 0
rx_q4_packets: 0
rx_q4_bytes: 0
rx_q4_errors: 0
rx_q5_packets: 0
rx_q5_bytes: 0
rx_q5_errors: 0
rx_q6_packets: 0
rx_q6_bytes: 0
rx_q6_errors: 0
rx_q7_packets: 0
rx_q7_bytes: 0
rx_q7_errors: 0
tx_q0_packets: 171
tx_q0_bytes: 30480
tx_q1_packets: 0
tx_q1_bytes: 0
tx_q2_packets: 0
tx_q2_bytes: 0
tx_q3_packets: 0
tx_q3_bytes: 0
tx_q4_packets: 0
tx_q4_bytes: 0
tx_q5_packets: 0
tx_q5_bytes: 0
tx_q6_packets: 0
tx_q6_bytes: 0
tx_q7_packets: 0
tx_q7_bytes: 0
rx_q0_drop_total: 0
rx_q0_drop_err: 0
rx_q0_drop_fcs: 0
rx_q0_rx_buf_alloc_failure: 0
rx_q1_drop_total: 0
rx_q1_drop_err: 0
rx_q1_drop_fcs: 0
rx_q1_rx_buf_alloc_failure: 0
rx_q2_drop_total: 0
rx_q2_drop_err: 0
rx_q2_drop_fcs: 0
rx_q2_rx_buf_alloc_failure: 0
rx_q3_drop_total: 0
rx_q3_drop_err: 0
rx_q3_drop_fcs: 0
rx_q3_rx_buf_alloc_failure: 0
rx_q4_drop_total: 0
rx_q4_drop_err: 0
rx_q4_drop_fcs: 0
rx_q4_rx_buf_alloc_failure: 0
rx_q5_drop_total: 0
rx_q5_drop_err: 0
rx_q5_drop_fcs: 0
rx_q5_rx_buf_alloc_failure: 0
rx_q6_drop_total: 0
rx_q6_drop_err: 0
rx_q6_drop_fcs: 0
rx_q6_rx_buf_alloc_failure: 0
rx_q7_drop_total: 0
rx_q7_drop_err: 0
rx_q7_drop_fcs: 0
rx_q7_rx_buf_alloc_failure: 0
tx_q0_drop_total: 0
tx_q0_drop_too_many_segs: 0
tx_q0_drop_tso: 0
tx_q0_tx_ring_full: 0
tx_q1_drop_total: 0
tx_q1_drop_too_many_segs: 0
tx_q1_drop_tso: 0
tx_q1_tx_ring_full: 0
tx_q2_drop_total: 0
tx_q2_drop_too_many_segs: 0
tx_q2_drop_tso: 0
tx_q2_tx_ring_full: 0
tx_q3_drop_total: 0
tx_q3_drop_too_many_segs: 0
tx_q3_drop_tso: 0
tx_q3_tx_ring_full: 0
tx_q4_drop_total: 0
tx_q4_drop_too_many_segs: 0
tx_q4_drop_tso: 0
tx_q4_tx_ring_full: 0
tx_q5_drop_total: 0
tx_q5_drop_too_many_segs: 0
tx_q5_drop_tso: 0
tx_q5_tx_ring_full: 0
tx_q6_drop_total: 0
tx_q6_drop_too_many_segs: 0
tx_q6_drop_tso: 0
tx_q6_tx_ring_full: 0
tx_q7_drop_total: 0
tx_q7_drop_too_many_segs: 0
tx_q7_drop_tso: 0
tx_q7_tx_ring_full: 0
###### NIC extended statistics for port 1
rx_good_packets: 192
tx_good_packets: 122461
rx_good_bytes: 34348
tx_good_bytes: 21994950
rx_missed_errors: 0
rx_errors: 0
tx_errors: 0
rx_mbuf_allocation_errors: 0
rx_q0_packets: 192
rx_q0_bytes: 34348
rx_q0_errors: 0
rx_q1_packets: 0
rx_q1_bytes: 0
rx_q1_errors: 0
rx_q2_packets: 0
rx_q2_bytes: 0
rx_q2_errors: 0
rx_q3_packets: 0
rx_q3_bytes: 0
rx_q3_errors: 0
rx_q4_packets: 0
rx_q4_bytes: 0
rx_q4_errors: 0
rx_q5_packets: 0
rx_q5_bytes: 0
rx_q5_errors: 0
rx_q6_packets: 0
rx_q6_bytes: 0
rx_q6_errors: 0
rx_q7_packets: 0
rx_q7_bytes: 0
rx_q7_errors: 0
tx_q0_packets: 122461
tx_q0_bytes: 21994950
tx_q1_packets: 0
tx_q1_bytes: 0
tx_q2_packets: 0
tx_q2_bytes: 0
tx_q3_packets: 0
tx_q3_bytes: 0
tx_q4_packets: 0
tx_q4_bytes: 0
tx_q5_packets: 0
tx_q5_bytes: 0
tx_q6_packets: 0
tx_q6_bytes: 0
tx_q7_packets: 0
tx_q7_bytes: 0
rx_q0_drop_total: 0
rx_q0_drop_err: 0
rx_q0_drop_fcs: 0
rx_q0_rx_buf_alloc_failure: 0
rx_q1_drop_total: 0
rx_q1_drop_err: 0
rx_q1_drop_fcs: 0
rx_q1_rx_buf_alloc_failure: 0
rx_q2_drop_total: 0
rx_q2_drop_err: 0
rx_q2_drop_fcs: 0
rx_q2_rx_buf_alloc_failure: 0
rx_q3_drop_total: 0
rx_q3_drop_err: 0
rx_q3_drop_fcs: 0
rx_q3_rx_buf_alloc_failure: 0
rx_q4_drop_total: 0
rx_q4_drop_err: 0
rx_q4_drop_fcs: 0
rx_q4_rx_buf_alloc_failure: 0
rx_q5_drop_total: 0
rx_q5_drop_err: 0
rx_q5_drop_fcs: 0
rx_q5_rx_buf_alloc_failure: 0
rx_q6_drop_total: 0
rx_q6_drop_err: 0
rx_q6_drop_fcs: 0
rx_q6_rx_buf_alloc_failure: 0
rx_q7_drop_total: 0
rx_q7_drop_err: 0
rx_q7_drop_fcs: 0
rx_q7_rx_buf_alloc_failure: 0
tx_q0_drop_total: 0
tx_q0_drop_too_many_segs: 0
tx_q0_drop_tso: 0
tx_q0_tx_ring_full: 0
tx_q1_drop_total: 0
tx_q1_drop_too_many_segs: 0
tx_q1_drop_tso: 0
tx_q1_tx_ring_full: 0
tx_q2_drop_total: 0
tx_q2_drop_too_many_segs: 0
tx_q2_drop_tso: 0
tx_q2_tx_ring_full: 0
tx_q3_drop_total: 0
tx_q3_drop_too_many_segs: 0
tx_q3_drop_tso: 0
tx_q3_tx_ring_full: 0
tx_q4_drop_total: 0
tx_q4_drop_too_many_segs: 0
tx_q4_drop_tso: 0
tx_q4_tx_ring_full: 0
tx_q5_drop_total: 0
tx_q5_drop_too_many_segs: 0
tx_q5_drop_tso: 0
tx_q5_tx_ring_full: 0
tx_q6_drop_total: 0
tx_q6_drop_too_many_segs: 0
tx_q6_drop_tso: 0
tx_q6_tx_ring_full: 0
tx_q7_drop_total: 0
tx_q7_drop_too_many_segs: 0
tx_q7_drop_tso: 0
tx_q7_tx_ring_full: 0

As you can see, the program only uses queue 0. How can I balance the load across all queues?

Please give me a step-by-step guide or a command to solve this problem.


Answers (1)

Nafiul Alam Fuji

I think the correct portmask would be 0x3, which is binary 11, meaning both ports are used. You don't need to worry about rx queue distribution in testpmd for now, since you will eventually move to your own project (e.g. based on the "skeleton" example), where you configure the packet distribution among rx queues yourself.
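
For example, this is a sketch of your original launch line with only the portmask changed (adjust the core and queue counts to your own setup):

sudo ./dpdk-testpmd -l 0-7 -n 4 -- -i --portmask=0x3 --nb-cores=7 --txq=8 --rxq=8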

I can tell you the field responsible for rx queue distribution: it is the mq_mode value inside struct rte_eth_rxmode. It looks something like this in a structured port configuration:

#include <rte_ethdev.h>

static const struct rte_eth_conf port_conf_default = {
    .rxmode = {
        //.mq_mode = RTE_ETH_MQ_RX_NONE,  /* no distribution: everything lands on queue 0 */
        .mq_mode = RTE_ETH_MQ_RX_RSS,     /* enable RSS so rx packets spread across queues */
        .max_lro_pkt_size = RTE_ETHER_MAX_LEN, /* only relevant when the LRO offload is enabled */
        //.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
    },
    .rx_adv_conf = {
        .rss_conf = {
            .rss_key = NULL,              /* NULL lets the PMD use its default RSS key */
            .rss_hf = RTE_ETH_RSS_NONFRAG_IPV4_UDP, /* hash on non-fragmented IPv4 UDP headers */
        },
    },
    .txmode = {
        .mq_mode = RTE_ETH_MQ_TX_NONE,
        //.offloads = RTE_ETH_TX_OFFLOAD_IPV4_CKSUM,
    },
};

Here RTE_ETH_MQ_RX_RSS means the NIC will distribute incoming packets among the rx queues, while RTE_ETH_MQ_RX_NONE means no distribution. I am not sure which configuration testpmd uses by default, but you can look it up, modify it, and rebuild to get the behaviour you expect; you can also ignore rx_adv_conf for now.
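
As a minimal sketch of where this struct gets used (the helper name and queue counts are my assumptions, mirroring the --rxq=8 --txq=8 options from your testpmd run), you pass it to rte_eth_dev_configure():

#include <rte_ethdev.h>

/* Assumed helper: configure one port with the RSS-enabled config above.
 * port_conf_default is the struct from the previous snippet; 8 rx and
 * 8 tx queues match the question's testpmd options. */
static int configure_port(uint16_t port_id)
{
    const uint16_t nb_rxq = 8, nb_txq = 8;

    return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf_default);
}

After configuring, each rx/tx queue still has to be set up with rte_eth_rx_queue_setup()/rte_eth_tx_queue_setup() before calling rte_eth_dev_start(), exactly as the skeleton example does.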

