
"rx_descriptor_status" function not implemented in DPDK ENA pmd #166

Open
subhrch opened this issue Feb 7, 2021 · 9 comments

subhrch commented Feb 7, 2021

Hi,

I found that the DPDK ENA PMD, unlike other PMDs in DPDK, does not implement the "rx_descriptor_status" function pointer. We have a DPDK application that relies heavily on this function before calling any rx_burst function. We don't busy-loop continuously on rx_burst; instead, RX packet processing only happens when "rx_descriptor_status" returns DD, and that gating is a crucial part of the application design. We wanted to run our application on AWS EC2 with the ENA driver for better performance, but the lack of this function is preventing us from doing so. Can you please let me know whether this functionality will be available in the near future? Is it feasible to implement it and backport it to older DPDK releases?
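
Roughly, the gating pattern maps onto DPDK's generic rte_eth_rx_descriptor_status() API like this (a minimal sketch, not our actual code; the burst size and the descriptor offset of 0 are illustrative placeholders):

```c
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32  /* placeholder burst size */

/* Poll one RX queue, but only pay for an rx_burst call once the PMD
 * reports that the next descriptor has been written back by hardware. */
static void poll_rxq(uint16_t port_id, uint16_t queue_id)
{
    struct rte_mbuf *pkts[BURST_SIZE];

    /* Status of the descriptor at offset 0, i.e. the next one the
     * application would consume. */
    int status = rte_eth_rx_descriptor_status(port_id, queue_id, 0);

    if (status == RTE_ETH_RX_DESC_DONE) {
        uint16_t n = rte_eth_rx_burst(port_id, queue_id, pkts, BURST_SIZE);
        for (uint16_t i = 0; i < n; i++) {
            /* ... process pkts[i] ... */
            rte_pktmbuf_free(pkts[i]);
        }
    }
    /* On the ENA PMD today this branch is never taken: the call
     * returns -ENOTSUP because the callback is not implemented. */
}
```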

If this is not the right forum for this question, please point me to where I can get help with this problem in the ENA DPDK PMD.

Thanks,
Subhransu

I-gor-C (Contributor) commented Feb 7, 2021

Hi Subhransu,

Thanks for reaching out. I'll check the feasibility of your request and get back to you.

I-gor-C (Contributor) commented Feb 8, 2021

Appears to be feasible. I'll update on the progress.

subhrch (Author) commented Feb 12, 2021

That is great, thanks @I-gor-C. I look forward to trying it.

subhrch (Author) commented Feb 18, 2021

Hi @I-gor-C, how are you?
I just wanted to check whether there is a tentative date when I can expect the changes to try out. Also, what is the plan for making the final change available in DPDK LTS releases? We are currently planning to migrate to DPDK 20.11 LTS.

I-gor-C (Contributor) commented Feb 18, 2021

Hi @subhrch
I don't currently have a timeline, but we will try to make it available for the next DPDK release; I'll post an update if anything changes. I don't see a problem backporting it to 20.11 once the feature is ready.

I-gor-C (Contributor) commented Apr 30, 2021

Hi @subhrch
This feature won't make it to the v21.05 release.
We might be able to provide you with a prerelease version if it's required urgently.
Please contact me directly at [email protected] if you're interested.

gshanemiller commented Mar 15, 2022

The OP is referring to rte_eth_rx_descriptor_status(); in this comment I am referring to its TX equivalent, rte_eth_tx_descriptor_status().

Please note that as of this writing, rte_eth_tx_descriptor_status() returns -ENOTSUP on the DPDK head commit (917229c24e871bbc3225a0227eb3f0faaa7aaa69, Fri Mar 4 16:24:24 2022 +0000), which comes after v22.03-rc2. I'm using the Amazon drivers as of abbfc2eb4a836d075ff2867f9cdd81cd132955ce (Mon Jan 24 16:03:29 2022 +0200).
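
The generic wrapper returns -ENOTSUP whenever a PMD leaves the callback unimplemented, so support can be probed once at startup (an illustrative snippet, not code from my application):

```c
#include <errno.h>
#include <stdbool.h>
#include <rte_ethdev.h>

/* Illustrative probe: returns false on PMDs (like ENA at the commits
 * above) that do not implement the tx_descriptor_status callback. */
static bool txq_status_supported(uint16_t port_id, uint16_t queue_id)
{
    return rte_eth_tx_descriptor_status(port_id, queue_id, 0) != -ENOTSUP;
}
```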

Unless the driver team can recommend an alternative, it's imperative that both the RX and TX APIs be implemented. For the TX side, consider the following inevitable use case:

  1. A TXQ lcore has rte_eth_tx_burst()'d all the packets it wants to send.
  2. The TXQ lcore, being done, exits its thread, returning 0.
  3. Back on the main lcore, the process blocks until rte_eal_mp_wait_lcore() completes.
  4. The process collects stats, most importantly the number of packets sent. Naively, we'd expect the number of out packets reported by rte_eth_stats_get() or rte_eth_xstats_get() to equal the number of successfully enqueued packets reported by rte_eth_tx_burst(). In general, this will not happen.

Without the ability to confirm via rte_eth_tx_descriptor_status() that all the descriptors in the TXQ rings are DONE before step (2) returns, the process cannot exit cleanly knowing that everything it intended to send was in fact sent. Put another way, there will in general still be work in-progress on the hardware, as sketched below.
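
Here is a sketch of the missing drain step, assuming rte_eth_tx_descriptor_status() were implemented (the ring size argument is a placeholder, and a production version would bound the loop with a timeout):

```c
#include <rte_ethdev.h>

/* Before a TXQ lcore returns, spin until no descriptor in its ring is
 * still in flight. RTE_ETH_TX_DESC_FULL means the slot is filled and
 * waiting on hardware; DONE/UNAVAIL mean hardware is finished with it. */
static void drain_txq(uint16_t port_id, uint16_t queue_id, uint16_t nb_desc)
{
    for (uint16_t off = 0; off < nb_desc; off++) {
        int status;
        do {
            status = rte_eth_tx_descriptor_status(port_id, queue_id, off);
        } while (status == RTE_ETH_TX_DESC_FULL);

        if (status < 0)  /* e.g. -ENOTSUP on the ENA PMD today */
            return;
    }
}
```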

As you can see in the following output, taken from a run without this check for in-flight hardware work, the number of packets reported as sent is slightly less than the number enqueued:

lcoreId: 00, txqIndex: 00, packetsQueued: 15000   <---- sent 15000 pkts on txq0
lcoreId: 01, txqIndex: 01, packetsQueued: 15000   <---- sent 15000 pkts on txq1

in  packets : 0
out packets : 29424                                        <---- reported by rte_eth_stats_get. Should be 30000
in  bytes   : 0
out bytes   : 1588896
missed pkts : 0
in err pkts : 0
out err pkts: 0
rx allc errs: 0
rx_good_packets                 : 0
tx_good_packets                 : 29990                <--- reported by rte_eth_xstats_get a few us later. Should be 30000
rx_good_bytes                   : 0
tx_good_bytes                   : 1619460
rx_missed_errors                : 0
rx_errors                       : 0
tx_errors                       : 0
rx_mbuf_allocation_errors       : 0
tx_q0_packets                   : 15000                    <--- matches what was queued
tx_q0_bytes                     : 810000
tx_q1_packets                   : 15000                    <---- matches what was queued
tx_q1_bytes                     : 810000
wd_expired                      : 0
dev_start                       : 1
dev_stop                        : 0
tx_drops                        : 0
bw_in_allowance_exceeded        : 0
bw_out_allowance_exceeded       : 0
pps_allowance_exceeded          : 0
conntrack_allowance_exceeded    : 0
linklocal_allowance_exceeded    : 0
tx_q0_cnt                       : 15000
tx_q1_cnt                       : 15000
tx_q0_bytes                     : 810000
tx_q1_bytes                     : 810000
tx_q0_prepare_ctx_err           : 0
tx_q1_prepare_ctx_err           : 0
tx_q0_tx_poll                   : 49693
tx_q1_tx_poll                   : 53174
tx_q0_doorbells                 : 2288
tx_q1_doorbells                 : 2291
tx_q0_bad_req_id                : 0
tx_q1_bad_req_id                : 0
tx_q0_available_desc            : 7
tx_q1_available_desc            : 7
tx_q0_missed_tx                 : 0
tx_q1_missed_tx                 : 0
ena_tx_queue_release(): Tx queue 0:0 released
ena_tx_queue_release(): Tx queue 0:1 released

shaibran (Contributor) commented

@rodgarrison Hey, we are targeting TX/RX descriptor probing for our next PMD release, 2.7.0.
It is still in progress, but I expect we can provide a pre-release patch for testing within the next two weeks.

shaibran (Contributor) commented Jun 6, 2022

Update: the feature is postponed to the next PMD release.
