DPDK TestPMD application reports 0 RX packets


I am testing the DPDK TestPMD application on an Alveo U200. I am executing the commands below:

dpdk-20.11]$ sudo ./usertools/dpdk-devbind.py -b vfio-pci 08:00.0 08:00.1

dpdk-20.11]$ sudo ./build/app/dpdk-testpmd -l 1-3 -n 4 -a 0000:08:00.0 -a 0000:08:00.1 -- --burst=256 -i --nb-cores=1  --forward-mode=io --rxd=2048 --txd=2048 --mbcache=512 --mbuf-size=4096 
EAL: Detected 12 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No available hugepages reported in hugepages-2048kB
EAL: Debug dataplane logs available - lower performance
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL:   using IOMMU type 1 (Type 1)
EAL: Probe PCI driver: net_qdma (10ee:903f) device: 0000:08:00.0 (socket 0)
Device Type: Soft IP
IP Type: EQDMA Soft IP
Vivado Release: vivado 2020.2
PMD: qdma_get_hw_version(): QDMA RTL VERSION : RTL Base

PMD: qdma_get_hw_version(): QDMA DEVICE TYPE : Soft IP

PMD: qdma_get_hw_version(): QDMA VIVADO RELEASE ID : vivado 2020.2

PMD: qdma_identify_bars(): QDMA config bar idx :0

PMD: qdma_identify_bars(): QDMA AXI Master Lite bar idx :2

PMD: qdma_identify_bars(): QDMA AXI Bridge Master bar idx :-1

PMD: qdma_eth_dev_init(): QDMA device driver probe:
PMD: qdma_device_attributes_get(): qmax = 512, mm 1, st 1.

PMD: qdma_eth_dev_init(): PCI max bus number : 0x8
PMD: qdma_eth_dev_init(): PF function ID: 0
PMD: QDMA PMD VERSION: 2020.2.1
qdma_dev_entry_create: Created the dev entry successfully
EAL: Probe PCI driver: net_qdma (10ee:913f) device: 0000:08:00.1 (socket 0)
Device Type: Soft IP
IP Type: EQDMA Soft IP
Vivado Release: vivado 2020.2
PMD: qdma_get_hw_version(): QDMA RTL VERSION : RTL Base

PMD: qdma_get_hw_version(): QDMA DEVICE TYPE : Soft IP

PMD: qdma_get_hw_version(): QDMA VIVADO RELEASE ID : vivado 2020.2

PMD: qdma_identify_bars(): QDMA config bar idx :0

PMD: qdma_identify_bars(): QDMA AXI Master Lite bar idx :2

PMD: qdma_identify_bars(): QDMA AXI Bridge Master bar idx :-1

PMD: qdma_eth_dev_init(): QDMA device driver probe:
PMD: qdma_device_attributes_get(): qmax = 512, mm 1, st 1.

PMD: qdma_eth_dev_init(): PCI max bus number : 0x8
PMD: qdma_eth_dev_init(): PF function ID: 1
qdma_dev_entry_create: Created the dev entry successfully
EAL: No legacy callbacks, legacy socket not created
Interactive-mode selected
Set io packet forwarding mode
testpmd: create a new mbuf pool <mb_pool_0>: n=180224, size=4096, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
PMD: qdma_dev_configure(): Configure the qdma engines

PMD: qdma_dev_configure(): Bus: 0x0, PF-0(DEVFN) queue_base: 0

PMD: qdma_dev_tx_queue_setup(): Configuring Tx queue id:0 with 2048 desc

PMD: qdma_dev_tx_queue_setup(): Tx ring phys addr: 0x1515C6000, Tx Ring virt addr: 0x1515C6000
PMD: qdma_dev_rx_queue_setup(): Configuring Rx queue id:0

PMD: qdma_dev_start(): qdma-dev-start: Starting

PMD: qdma_dev_link_update(): Link update done

Port 0: 15:16:17:18:19:1A
Configuring Port 1 (socket 0)
PMD: qdma_dev_configure(): Configure the qdma engines

PMD: qdma_dev_configure(): Bus: 0x0, PF-1(DEVFN) queue_base: 1

PMD: qdma_dev_tx_queue_setup(): Configuring Tx queue id:0 with 2048 desc

PMD: qdma_dev_tx_queue_setup(): Tx ring phys addr: 0x1515A7000, Tx Ring virt addr: 0x1515A7000
PMD: qdma_dev_rx_queue_setup(): Configuring Rx queue id:0

PMD: qdma_dev_start(): qdma-dev-start: Starting

PMD: qdma_dev_link_update(): Link update done

Port 1: 15:16:17:18:19:1A
Checking link statuses...
PMD: qdma_dev_link_update(): Link update done

PMD: qdma_dev_link_update(): Link update done

Done
Error during enabling promiscuous mode for port 0: Operation not supported - ignore
Error during enabling promiscuous mode for port 1: Operation not supported - ignore
testpmd> start tx_first
io packet forwarding - ports=2 - cores=1 - streams=2 - NUMA support enabled, MP allocation mode: native
Logical Core 2 (socket 0) forwards packets on 2 streams:
  RX P=0/Q=0 (socket 0) -> TX P=1/Q=0 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00

  io packet forwarding packets/burst=256
  nb forwarding cores=1 - nb forwarding ports=2
  port 0: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x0
    RX queue: 0
      RX desc=2048 - RX free threshold=0
      RX threshold registers: pthresh=0 hthresh=0  wthresh=2
      RX Offloads=0x0
    TX queue: 0
      TX desc=2048 - TX free threshold=0
      TX threshold registers: pthresh=0 hthresh=0  wthresh=0
      TX offloads=0x0 - TX RS bit threshold=0
  port 1: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x0
    RX queue: 0
      RX desc=2048 - RX free threshold=0
      RX threshold registers: pthresh=0 hthresh=0  wthresh=2
      RX Offloads=0x0
    TX queue: 0
      TX desc=2048 - TX free threshold=0
      TX threshold registers: pthresh=0 hthresh=0  wthresh=0
      TX offloads=0x0 - TX RS bit threshold=0
testpmd> stop
Telling cores to stop...
Waiting for lcores to finish...

  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 256            TX-dropped: 0             TX-total: 256
  ----------------------------------------------------------------------------

  ---------------------- Forward statistics for port 1  ----------------------
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 256            TX-dropped: 0             TX-total: 256
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 512            TX-dropped: 0             TX-total: 512
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Done.

When I run the dpdk-devbind.py command, the interface disappears from "ip link", but it is still listed under dpdk-devbind.py --status.
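For reference, a quick way to re-check the binding (same tree layout as the commands above; the disappearance from "ip link" is expected once a device is bound to vfio-pci):

dpdk-20.11]$ sudo ./usertools/dpdk-devbind.py --status

Both 0000:08:00.0 and 0000:08:00.1 should show up under "Network devices using DPDK-compatible driver" with drv=vfio-pci.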

Please help me debug why the RX packet count shows 0 while TX packets are sent successfully.

I have tried a loopback setup while running TestPMD.

Any suggestions would be helpful. Thanks in advance.


1 Answer

Nafiul Alam Fuji:

Try using "--nb-cores=2" at least, so that each port/stream gets its own forwarding core instead of one core handling both. I think your RX queue wasn't configured correctly: the log shows detailed TX queue initialization (descriptor count, ring addresses) but much less detail for the RX queue initialization. An adjusted invocation is sketched below.
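As a sketch, this is the same command line from the question with only the core count changed (-l 1-3 already provides one main lcore plus two forwarding lcores, so nothing else needs to change):

dpdk-20.11]$ sudo ./build/app/dpdk-testpmd -l 1-3 -n 4 -a 0000:08:00.0 -a 0000:08:00.1 -- --burst=256 -i --nb-cores=2 --forward-mode=io --rxd=2048 --txd=2048 --mbcache=512 --mbuf-size=4096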

You are using io mode with tx_first, where each port first generates a burst of packets and sends it to the other port. These packets then keep traversing between the ports through the RX and TX queues of both ports. Once your RX queue is configured properly, your problem should be solved. You can watch the counters live with the commands sketched below.
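One way to watch whether anything ever arrives on the RX side while forwarding is running (standard testpmd console commands, shown only as a suggested check):

testpmd> start tx_first
testpmd> show port stats all
testpmd> show fwd stats all

If RX-packets stays at 0 on both ports while TX-packets shows only the initial burst (256 per port, as in the output above), the packets are leaving the host but never coming back through the RX queues, which matches the RX queue configuration issue described above.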