falsePockets

Reputation: 4293

Unable to connect testpmd to OVS+DPDK

Summary

I'm trying to use testpmd as a sink of traffic from a physical NIC, through OVS with DPDK.

When I run testpmd, it fails. The error message is very brief, so I have no idea what's wrong.

How can I get testpmd to connect to a virtual port in OVS with DPDK?

Steps

I'm mostly following these Mellanox instructions.

# step 5 - "Specify initial Open vSwitch (OVS) database to use"
export PATH=$PATH:/usr/local/share/openvswitch/scripts
export DB_SOCK=/usr/local/var/run/openvswitch/db.sock
ovsdb-tool create /usr/local/etc/openvswitch/conf.db /usr/local/share/openvswitch/vswitch.ovsschema
ovsdb-server --remote=punix:$DB_SOCK --remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach

# step 6 - "Configure OVS to support DPDK ports"
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true

# step 7 - "Start OVS-DPDK service"
ovs-ctl --no-ovsdb-server --db-sock="$DB_SOCK" start # starts ovs-vswitchd (ovsdb-server is already running)

# step 8 - "Configure the source code analyzer (PMD) to work with 2G hugespages and NUMA node0"
# (the doc's "source code analyzer" actually means Poll Mode Driver)
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="2048,2048" # 2048 MB per NUMA node


# step 9 - "Set core mask to enable several PMDs"
ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0xFF0 # cores 4-11, 4 per NUMA node

# core masks are bitmasks: bit N enables core N, LSB is core 0
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x8 # core 3

# step 10 - there is no step 10 in the doc linked above

# step 11 - Create an OVS bridge
BRIDGE="br0"
ovs-vsctl add-br $BRIDGE -- set bridge br0 datapath_type=netdev
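As a quick sanity check on the mask arithmetic above (this is my own snippet, not from the Mellanox doc), a core list can be converted to the expected hex mask in plain shell:

```shell
# Build a core mask from the intended PMD cores (4-11)
mask=0
for c in 4 5 6 7 8 9 10 11; do
  mask=$(( mask | (1 << c) ))
done
printf 'pmd-cpu-mask=0x%X\n' "$mask"   # prints: pmd-cpu-mask=0xFF0
```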

Then, for the OVS ports and flows, I'm trying to follow these steps:

# add physical NICs to bridge, must be named dpdk(\d+)
sudo ovs-vsctl add-port $BRIDGE dpdk0 \
   -- set Interface dpdk0 type=dpdk \
   options:dpdk-devargs=0000:5e:00.0 ofport_request=1
sudo ovs-vsctl add-port $BRIDGE dpdk1 \
   -- set Interface dpdk1 type=dpdk \
   options:dpdk-devargs=0000:5e:00.1 ofport_request=2

# add a virtual port to connect to testpmd/VM
# Not sure if I want dpdkvhostuser or dpdkvhostuserclient
sudo ovs-vsctl add-port $BRIDGE dpdkvhostuser0 \
   -- \
   set Interface dpdkvhostuser0 \
   type=dpdkvhostuser \
   options:n_rxq=2,pmd-rxq-affinity="0:4,1:6"  \
   ofport_request=3

sudo ovs-vsctl add-port $BRIDGE dpdkvhostuser1 \
   -- \
   set Interface dpdkvhostuser1 \
   type=dpdkvhostuser \
   options:n_rxq=2,pmd-rxq-affinity="0:8,1:10"  \
   ofport_request=4

# add flows to join interfaces (based on ofport_request numbers)
sudo ovs-ofctl add-flow $BRIDGE in_port=1,action=output:3
sudo ovs-ofctl add-flow $BRIDGE in_port=3,action=output:1
sudo ovs-ofctl add-flow $BRIDGE in_port=2,action=output:4
sudo ovs-ofctl add-flow $BRIDGE in_port=4,action=output:2
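One thing worth double-checking (and visible in the `ovs-vsctl show` output further down): in `ovs-vsctl`, each assignment needs its own `options:` key, and `pmd-rxq-affinity` is an `other_config` key, not an option. Written as a single `options:n_rxq=2,pmd-rxq-affinity=...` token, everything after `n_rxq=` becomes the value of `n_rxq`. A corrected sketch (same names as above; note that for vhost-user ports the queue count is normally negotiated by the guest rather than taken from `n_rxq`, but the syntax issue stands either way):

```shell
# one key per assignment; pmd-rxq-affinity is an other_config key
sudo ovs-vsctl set Interface dpdkvhostuser0 \
    options:n_rxq=2 \
    other_config:pmd-rxq-affinity="0:4,1:6"
sudo ovs-vsctl set Interface dpdkvhostuser1 \
    options:n_rxq=2 \
    other_config:pmd-rxq-affinity="0:8,1:10"
```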

Then I run testpmd

sudo -E $DPDK_DIR/x86_64-native-linuxapp-gcc/app/testpmd  \
   --vdev virtio_user0,path=/usr/local/var/run/openvswitch/dpdkvhostuser0 \
   --vdev virtio_user1,path=/usr/local/var/run/openvswitch/dpdkvhostuser1 \
   -c 0x00fff000  \
   -n 1 \
   --socket-mem=2048,2048  \
   --file-prefix=testpmd \
   --log-level=9 \
   --no-pci \
   -- \
   --port-numa-config=0,0,1,0 \
   --ring-numa-config=0,1,0,1,1,0 \
   --numa  \
   --socket-num=0 \
   --txd=512 \
   --rxd=512 \
   --mbcache=512 \
   --rxq=1 \
   --txq=1 \
   --nb-cores=4 \
   -i \
   --rss-udp \
   --auto-start
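To confirm which cores `-c 0x00fff000` actually selects (my own decoding snippet, not part of the original command), the mask can be expanded back into a core list:

```shell
# Decode a DPDK coremask into the list of enabled cores
mask=$(( 0x00fff000 ))
cores=""
i=0
while [ "$mask" -ne 0 ]; do
  [ $(( mask & 1 )) -eq 1 ] && cores="${cores:+$cores }$i"
  mask=$(( mask >> 1 ))
  i=$(( i + 1 ))
done
echo "cores: $cores"   # prints: cores: 12 13 14 15 16 17 18 19 20 21 22 23
```

So testpmd runs on cores 12-23, which do not overlap the OVS PMD cores (4-11) or the lcore (3).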

The output is:

...
EAL: lcore 18 is ready (tid=456c700;cpuset=[18])
EAL: lcore 21 is ready (tid=2d69700;cpuset=[21])
Interactive-mode selected
Auto-start selected
Warning: NUMA should be configured manually by using --port-numa-config and --ring-numa-config parameters along with --numa.
USER1: create a new mbuf pool <mbuf_pool_socket_0>: n=327680, size=2176, socket=0
USER1: create a new mbuf pool <mbuf_pool_socket_1>: n=327680, size=2176, socket=1
Configuring Port 0 (socket 0)
Fail to configure port 0
EAL: Error - exiting with code: 1
  Cause: Start ports failed

The bottom of /usr/local/var/log/openvswitch/ovs-vswitchd.log is

2018-11-30T02:45:49.115Z|00026|netdev_dpdk|INFO|vHost Device '/usr/local/var/run/openvswitch/dpdkvhostuser0' has been added on numa node 0
2018-11-30T02:45:49.115Z|00027|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
2018-11-30T02:45:49.115Z|00028|dpdk|INFO|VHOST_CONFIG: set queue enable: 1 to qp idx: 0
2018-11-30T02:45:49.115Z|00029|netdev_dpdk|INFO|State of queue 0 ( tx_qid 0 ) of vhost device '/usr/local/var/run/openvswitch/dpdkvhostuser0'changed to 'enabled'
2018-11-30T02:45:49.115Z|00030|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
2018-11-30T02:45:49.115Z|00031|dpdk|INFO|VHOST_CONFIG: set queue enable: 1 to qp idx: 1
2018-11-30T02:45:49.278Z|00032|dpdk|ERR|VHOST_CONFIG: recvmsg failed
2018-11-30T02:45:49.279Z|00033|dpdk|INFO|VHOST_CONFIG: vhost peer closed
2018-11-30T02:45:49.280Z|00034|netdev_dpdk|INFO|vHost Device '/usr/local/var/run/openvswitch/dpdkvhostuser0' has been removed

What is the cause of the failure?

Should I use a dpdkvhostuserclient instead of dpdkvhostuser?

What else I've tried

I've tried changing my testpmd command arguments. (Docs here)

Other Info

When I run sudo ovs-vsctl show I see

d3e721eb-6aeb-44c0-9fa8-5fcf023008c5
    Bridge "br0"
        Port "dpdkvhostuser1"
            Interface "dpdkvhostuser1"
                type: dpdkvhostuser
                options: {n_rxq="2,pmd-rxq-affinity=0:8,1:10"}
        Port "dpdk1"
            Interface "dpdk1"
                type: dpdk
                options: {dpdk-devargs="0000:5e:00.1"}
        Port "dpdk0"
            Interface "dpdk0"
                type: dpdk
                options: {dpdk-devargs="0000:5e:00.0"}
        Port "br0"
            Interface "br0"
                type: internal
        Port "dpdkvhostuser0"
            Interface "dpdkvhostuser0"
                type: dpdkvhostuser
                options: {n_rxq="2,pmd-rxq-affinity=0:4,1:6"}

Edit: Added contents of /usr/local/var/log/openvswitch/ovs-vswitchd.log

Upvotes: 4

Views: 1107

Answers (1)

Andriy Berestovskyy

Reputation: 8534

  1. Indeed, we need dpdkvhostuser here, not dpdkvhostuserclient: with dpdkvhostuser, OVS acts as the vhost-user server and testpmd's virtio_user connects to it as the client.

  2. The number of queues is mismatched: OVS is configured with options:n_rxq=2, but testpmd uses --rxq=1/--txq=1.
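A minimal way to make the queue counts agree (illustrative flags; `queues=2` is a `virtio_user` devarg requesting two queue pairs) would be:

```shell
sudo -E $DPDK_DIR/x86_64-native-linuxapp-gcc/app/testpmd \
   --vdev virtio_user0,path=/usr/local/var/run/openvswitch/dpdkvhostuser0,queues=2 \
   --vdev virtio_user1,path=/usr/local/var/run/openvswitch/dpdkvhostuser1,queues=2 \
   ... \
   -- --rxq=2 --txq=2 ...
```

Alternatively, drop the extra queue configuration on the OVS side so both ends use a single queue pair.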

Upvotes: 3
