Reputation: 11
I have an application running with 2 cores on each port. If I set the lid type to RX_TYPE on both ports and run the application, it works fine, transmitting and receiving UDP packets. But if I set the lid type to TX_TYPE on the ports, I am able to transmit packets but not receive them. Why?
I am using DPDK 19.11.10
Basically I am trying to measure the latency between the two ports, in both directions. I am running my application as below (pktgen-style core mapping: [2:3].0 assigns lcores 2 and 3 to port 0, and [4:5].1 assigns lcores 4 and 5 to port 1):
./send_udp -c 0xff -n 4 -- -m [2:3].0,[4:5].1,
Here is my sample code:
void send_udp_pkt(int portid)
{
    struct rte_mbuf *pkts[DEFAULT_PKT_BURST] = { }; /* DEFAULT_PKT_BURST is 32 */
    uint16_t nb_rx;

    /* queueid and cnt come from this lcore's config (setup not shown) */
    while (1) {
        /* build a burst of cnt UDP packets and transmit it */
        rte_eth_tx_burst(portid, queueid, pkts, cnt);

        /* receive up to DEFAULT_PKT_BURST UDP packets and process them */
        nb_rx = rte_eth_rx_burst(portid, queueid, pkts, DEFAULT_PKT_BURST);
        /* process the nb_rx received packets here and free the mbufs */
    }
}
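For context, the latency number itself is computed by stamping the TSC into the UDP payload on transmit and subtracting it on receive. The sketch below is a simplification of that idea, not my exact code; PAYLOAD_OFF and the helper names are placeholders, and it assumes both ports are in the same host so the TSC is a common clock:

#include <rte_cycles.h>
#include <rte_ether.h>
#include <rte_ip.h>
#include <rte_udp.h>
#include <rte_mbuf.h>

/* Offset of the first payload byte: Ethernet + IPv4 + UDP headers.
 * Assumes plain untagged IPv4/UDP frames. */
#define PAYLOAD_OFF (sizeof(struct rte_ether_hdr) + \
                     sizeof(struct rte_ipv4_hdr) + \
                     sizeof(struct rte_udp_hdr))

/* Write the current TSC into the payload just before rte_eth_tx_burst(). */
static inline void stamp_tx_time(struct rte_mbuf *m)
{
    uint64_t *ts = rte_pktmbuf_mtod_offset(m, uint64_t *, PAYLOAD_OFF);
    *ts = rte_rdtsc();
}

/* Convert the cycle delta into microseconds after rte_eth_rx_burst(). */
static inline double rx_latency_us(struct rte_mbuf *m)
{
    uint64_t *ts = rte_pktmbuf_mtod_offset(m, uint64_t *, PAYLOAD_OFF);
    return (double)(rte_rdtsc() - *ts) * 1e6 / rte_get_tsc_hz();
}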
int rxtx_loop(__attribute__ ((unused)) void *arg)
{
    /* portid and lid_type are resolved per lcore (see the sketch below) */
    if (lid_type == TX_TYPE) {
        if (portid == 0)
            send_udp_pkt(portid);
        else if (portid == 1)
            send_udp_pkt(portid);
    }
    return 0;
}
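The portid, queueid, and lid_type used above come out of a per-lcore table that config_ports() fills from the -m mapping. Roughly like this (the struct and helper are placeholders for illustration, not my exact code):

#include <rte_lcore.h>

struct lcore_conf {
    uint16_t portid;   /* port this lcore drives */
    uint16_t queueid;  /* queue on that port */
    int      lid_type; /* RX_TYPE or TX_TYPE, parsed from the -m option */
};
static struct lcore_conf lconf[RTE_MAX_LCORE];

/* Called at the top of rxtx_loop() to resolve this lcore's assignment. */
static void get_lcore_conf(uint16_t *portid, uint16_t *queueid, int *lid_type)
{
    unsigned int lcore = rte_lcore_id();
    *portid   = lconf[lcore].portid;
    *queueid  = lconf[lcore].queueid;
    *lid_type = lconf[lcore].lid_type;
}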
int main(int argc, char **argv)
{
    unsigned int i;
    int ret;

    rte_eal_init(argc, argv);
    config_ports();

    for (i = 0; i < RTE_MAX_LCORE; i++) {
        if ((i == rte_get_main_lcore()) || !rte_lcore_is_enabled(i))
            continue;
        switch (get_type(i)) { /* get_type() returns the lcore's lid type, same as in pktgen */
        case RX_TYPE:
        case TX_TYPE:
            ret = rte_eal_remote_launch(rxtx_loop, NULL, i);
            break;
        }
    }

    rte_eal_mp_wait_lcore();
    return 0;
}
The above code snippet works properly if I use lid_type == RX_TYPE in the rxtx_loop function.
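For reference, config_ports() follows the standard DPDK init sequence on each port. A trimmed-down sketch (error handling omitted; mbuf_pool, the descriptor counts, and the single RX/TX queue here are simplifications of my real setup):

#include <rte_ethdev.h>
#include <rte_mempool.h>

extern struct rte_mempool *mbuf_pool; /* created with rte_pktmbuf_pool_create() */

static void config_ports(void)
{
    struct rte_eth_conf port_conf = { 0 };
    uint16_t portid;

    RTE_ETH_FOREACH_DEV(portid) {
        /* one RX and one TX queue per port in this simplified sketch */
        rte_eth_dev_configure(portid, 1, 1, &port_conf);
        rte_eth_rx_queue_setup(portid, 0, 1024,
                               rte_eth_dev_socket_id(portid), NULL, mbuf_pool);
        rte_eth_tx_queue_setup(portid, 0, 1024,
                               rte_eth_dev_socket_id(portid), NULL);
        rte_eth_dev_start(portid);
        rte_eth_promiscuous_enable(portid);
    }
}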
What might be the issue? Can anyone help me with this?
Upvotes: 1
Views: 137