Esaïe Njongssi

Reputation: 103

Issues with VPP+DPDK configuration: unable to ping a VPP+DPDK interface from another VPP+DPDK interface or from a kernel (ifconfig) interface

I am new to VPP+DPDK and its applications, and I would appreciate any pointers that could help me resolve the issues below. Here are the situations I am encountering:

Case 1: I am trying to ping between two interfaces on the same VirtualBox VM, one managed by VPP+DPDK and the other by the Linux kernel (configured with ifconfig). Here is the configuration I have made:

Network devices using DPDK-compatible driver:

0000:00:08.0 '82540EM Gigabit Ethernet Controller 100e' drv=vfio-pci unused=e1000,uio_pci_generic
0000:00:09.0 '82540EM Gigabit Ethernet Controller 100e' drv=vfio-pci unused=e1000,uio_pci_generic

Network devices using kernel driver:

0000:00:03.0 '82540EM Gigabit Ethernet Controller 100e' if=enp0s3 drv=e1000 unused=vfio-pci,uio_pci_generic *Active*
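
For reference, this is roughly how the two test NICs were bound to vfio-pci (using the standard dpdk-devbind.py script; the exact path to the script depends on the DPDK installation):

# load the vfio-pci module and hand both test NICs to it
sudo modprobe vfio-pci
sudo dpdk-devbind.py --bind=vfio-pci 0000:00:08.0 0000:00:09.0
# the status output quoted above comes from this command
dpdk-devbind.py --status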

Startup Configuration:

osboxes@osboxes:~$ cat startup1.conf
unix {
    nodaemon
    log /var/log/vpp/vpp.log
    cli-listen /run/vpp/cli-vpp1.sock
}
api-segment { prefix vpp1 }
dpdk {
    dev 0000:00:08.0
    dev 0000:00:09.0
    socket-mem 1024
}
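
For completeness, this is how I verify hugepages and start VPP with this file; the startup file path is simply where I keep it, and I assume vppctl -s is the way to attach to the non-default CLI socket declared above:

# check that hugepages are allocated (socket-mem 1024 needs them)
grep -i huge /proc/meminfo
# start VPP with this startup file
sudo vpp -c /home/osboxes/startup1.conf
# attach to the CLI through the custom socket
sudo vppctl -s /run/vpp/cli-vpp1.sock show version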

IP Configuration:

osboxes@osboxes:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:47:b5:56 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.129/24 brd 192.168.1.255 scope global dynamic enp0s3
       valid_lft 86040sec preferred_lft 86040sec
    inet6 fd02:707a:a9b:c000:a00:27ff:fe47:b556/64 scope global dynamic mngtmpaddr noprefixroute
       valid_lft 6958sec preferred_lft 3358sec
    inet6 fe80::a00:27ff:fe47:b556/64 scope link
       valid_lft forever preferred_lft forever
osboxes@osboxes:~$

VPP Configuration:

vpp# set interface state GigabitEthernet0/8/0 up
vpp# set interface ip address GigabitEthernet0/8/0 172.28.128.5/24

root@osboxes:/home/osboxes# vppctl sh int address GigabitEthernet0/8/0
GigabitEthernet0/8/0 (up):
  L3 172.28.128.5/24

root@osboxes:/home/osboxes# vppctl sh hardware GigabitEthernet0/8/0
              Name                Idx   Link  Hardware
GigabitEthernet0/8/0               1     up   GigabitEthernet0/8/0
  Link speed: 1 Gbps
  RX Queues:
    queue thread         mode
    0     main (0)       polling
  TX Queues:
    TX Hash: [name: hash-eth-l34 priority: 50 description: Hash ethernet L34 headers]
    queue shared thread(s)
    0     no     0
  Ethernet address 08:00:27:68:9f:1e
  Intel 82540EM (e1000)
    carrier up full duplex max-frame-size 9022
    flags: admin-up maybe-multiseg tx-offload intel-phdr-cksum rx-ip4-cksum int-supported
    rx: queues 1 (max 2), desc 1024 (min 32 max 4096 align 8)
    tx: queues 1 (max 2), desc 1024 (min 32 max 4096 align 8)
    pci: device 8086:100e subsystem 8086:001e address 0000:00:08.00 numa 0
    max rx packet len: 16128
    promiscuous: unicast off all-multicast on
    vlan offload: strip off filter off qinq off
    rx offload avail:  vlan-strip ipv4-cksum udp-cksum tcp-cksum vlan-filter
                       scatter keep-crc
    rx offload active: ipv4-cksum scatter
    tx offload avail:  vlan-insert ipv4-cksum udp-cksum tcp-cksum multi-segs
    tx offload active: ipv4-cksum udp-cksum tcp-cksum multi-segs
    rss avail:         none
    rss active:        none
    tx burst function: (not available)
    rx burst function: (not available)
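
One thing I can also check after each ping attempt is whether the interface counters move at all, which should tell me whether any frames are reaching the DPDK port (clear interfaces and show interface are standard vppctl commands, as far as I know):

# reset the counters, run the ping from the other side, then look again
vppctl clear interfaces
vppctl show interface GigabitEthernet0/8/0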

Ping Results:

root@osboxes:/home/osboxes# ping 172.28.128.5
PING 172.28.128.5 (172.28.128.5) 56(84) bytes of data.
^C
--- 172.28.128.5 ping statistics ---
4 packets transmitted, 0 received, 100% packet loss, time 3501ms

root@osboxes:/home/osboxes# ping 172.28.128.5 -I enp0s3
PING 172.28.128.5 (172.28.128.5) from 192.168.1.129 enp0s3: 56(84) bytes of data.
^C
--- 172.28.128.5 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2439ms

root@osboxes:/home/osboxes# vppctl sh error
   Count                  Node                              Reason               Severity
root@osboxes:/home/osboxes#
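
Since show error reports nothing, my next step is to arm a packet trace and repeat the ping, to see whether the echo requests reach VPP at all (these are the standard trace commands, if I understand them correctly):

# capture up to 10 packets arriving through the DPDK input node
vppctl clear trace
vppctl trace add dpdk-input 10
# (repeat the ping here, then inspect what was captured)
vppctl show trace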

Ping from VPP to ifconfig Interface:

root@osboxes:/home/osboxes# vppctl ping 192.168.1.129
Failed: no egress interface
Failed: no egress interface
Failed: no egress interface
Failed: no egress interface
^C
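
My reading of "no egress interface" is that VPP has no route covering 192.168.1.129, since GigabitEthernet0/8/0 only carries 172.28.128.5/24, but I am not sure that is the whole story. This is how I plan to confirm it (I believe show ip fib accepts a destination address and prints the matching entry):

# which FIB entry (if any) matches the kernel interface's address?
vppctl show ip fib 192.168.1.129
# full routing table known to VPP
vppctl show ip fib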

Case 2: I am trying to ping between two VPP-DPDK interfaces on two different VirtualBox VMs. The configuration for the second VM is similar to the first one:

Network devices using DPDK-compatible driver:

0000:00:08.0 '82540EM Gigabit Ethernet Controller 100e' drv=vfio-pci unused=e1000,uio_pci_generic
0000:00:09.0 '82540EM Gigabit Ethernet Controller 100e' drv=vfio-pci unused=e1000,uio_pci_generic

Network devices using kernel driver:

0000:00:03.0 '82540EM Gigabit Ethernet Controller 100e' if=enp0s3 drv=e1000 unused=vfio-pci,uio_pci_generic *Active*

VPP Configuration:

root@osboxes:/home/osboxes# vppctl set interface state GigabitEthernet0/8/0 up
root@osboxes:/home/osboxes# vppctl set interface ip address GigabitEthernet0/8/0 172.28.128.10/24
root@osboxes:/home/osboxes# vppctl sh int GigabitEthernet0/8/0
              Name               Idx    State  MTU (L3/IP4/IP6/MPLS)     Counter          Count
GigabitEthernet0/8/0              1      up          9000/0/0/0
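
The test I ultimately want to pass in this case is a VPP-to-VPP ping between the two VMs over the shared 172.28.128.0/24 subnet, i.e. roughly:

# on VM 1 (VPP interface 172.28.128.5)
vppctl ping 172.28.128.10
# on VM 2 (VPP interface 172.28.128.10)
vppctl ping 172.28.128.5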

Ping Results:

root@osboxes:/home/osboxes# ping 172.28.128.10
PING 172.28.128.10 (172.28.128.10) 56(84) bytes of data.
^C
--- 172.28.128.10 ping statistics ---
4 packets transmitted, 0 received, 100% packet loss, time 3501ms

root@osboxes:/home/osboxes# ping 172.28.128.10 -I enp0s3
PING 172.28.128.10 (172.28.128.10) from 192.168.1.127 enp0s3: 56(84) bytes of data.
^C
--- 172.28.128.10 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2439ms
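
Since the two VMs have to share a layer-2 segment for this to work, I can also dump the VirtualBox NIC configuration of both machines from the host to check that the corresponding adapters sit on the same internal network (vm1 and vm2 are placeholders for my actual VM names):

# show how each VM's network adapters are attached (NAT, internal network, etc.)
VBoxManage showvminfo vm1 | grep -i nic
VBoxManage showvminfo vm2 | grep -i nic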

My hypotheses are:

First, maybe I created the VMs incorrectly. Second, perhaps I should have used a TAP interface instead and looked for a suitable configuration for it (a sketch of what I mean follows below). So far, however, I have not found a configuration that solves my problem.
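
For the TAP hypothesis, this is the kind of configuration I had in mind, based on the tapv2 "create tap" CLI (the id, the host interface name, and the 10.10.1.0/24 addresses are just placeholders I made up for illustration):

# create a tap interface that appears in the kernel as vpp-tap0 with its own address
vppctl create tap id 0 host-if-name vpp-tap0 host-ip4-addr 10.10.1.2/24
# bring up the VPP side (tap0) and give it an address on the same subnet
vppctl set interface state tap0 up
vppctl set interface ip address tap0 10.10.1.1/24
# the kernel should then be able to reach VPP over the tap
ping 10.10.1.1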

Upvotes: 1

Views: 158

Answers (0)
