
Multi-tier network latency test plan #16760

Open · wants to merge 6 commits into master

Conversation

@sm-xu (Contributor) commented Feb 3, 2025

Description of PR

Summary:
Fixes # (issue)

Type of change

  • Bug fix
  • Testbed and Framework (new/improvement)
  • New Test case
    • Skipped for non-supported platforms
  • Test case improvement

Back port request

  • 202012
  • 202205
  • 202305
  • 202311
  • 202405
  • 202411

Approach

What is the motivation for this PR?

How did you do it?

How did you verify/test it?

Any platform specific information?

Supported testbed topology if it's a new test case?

Documentation

@sm-xu sm-xu requested review from wangxin and yxieca as code owners February 3, 2025 01:17
@mssonicbld (Collaborator): /azp run

Azure Pipelines: successfully started running 1 pipeline(s).

@mssonicbld (Collaborator): /azp run

Azure Pipelines: could not run because the pipeline triggers exclude this branch/path.

@mssonicbld (Collaborator): /azp run

Azure Pipelines: could not run because the pipeline triggers exclude this branch/path.

@sm-xu changed the title from "2-tier network latency test plan" to "Multi-tier network latency test plan" on Feb 6, 2025

> ## Background
>
> In Broadcom architecture, the Memory Management Unit (MMU) handles packet buffering and traffic management within the device. Each MMU consists of two Ingress Traffic Managers (ITMs). The first and last quarters of the ports reside in one ITM, while the second and third quarters reside in the other ITM. We will test the latency of traffic managed within a single ITM as well as traffic that traverses between the two ITMs.

**Contributor:** Avoid naming the vendor here; the test is vendor agnostic.
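To make the quarters rule concrete, here is a minimal Python sketch (a hypothetical helper, not part of the test plan) that maps a 0-based port index to its ITM:

```python
def itm_for_port(port_index: int, total_ports: int) -> int:
    """Return 0 if the port sits in the first or last quarter of the
    port range, else 1, per the quarters rule described above.
    Assumes 0-based indexing and total_ports divisible by 4."""
    quarter = total_ports // 4
    if port_index < quarter or port_index >= total_ports - quarter:
        return 0
    return 1

# Example on a hypothetical 32-port device:
assert itm_for_port(0, 32) == 0    # first quarter  -> ITM 0
assert itm_for_port(12, 32) == 1   # second quarter -> ITM 1
assert itm_for_port(23, 32) == 1   # third quarter  -> ITM 1
assert itm_for_port(31, 32) == 0   # last quarter   -> ITM 0
```

Port pairs with equal `itm_for_port` values would exercise the single-ITM case; pairs with different values would exercise the cross-ITM case.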

> Cut-through packet forwarding mode is unsupported in SONiC. Store-and-forward is supported, but it can only be set through SAI; config_db support is not in place. Therefore the latency test imposes no packet forwarding mode requirement.
**Contributor:** This is not true. We can just say that, currently, we are testing with store-and-forward mode only.


> 5. Vary traffic rates from 50% to 100% of the line rate. Observe how latency changes in relation to packet loss. Note: latency measurements may be skewed by packet loss, since lost packets are counted as having infinite latency; this must be addressed to ensure accurate results.
>
> 6. Categorize latency results into multiple bins based on time intervals. Count the number of packets within the following latency ranges: under 1500 ns, 1500-1600 ns, 1600-1700 ns, ..., 3900-4000 ns, and above 4000 ns. Analyze the distribution to better understand latency characteristics.
**Contributor:** This is test metrics.

**Contributor:** Make a test metrics section and describe all the metrics requirements here.
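As an illustration of the binning in step 6, a minimal Python sketch (a hypothetical helper; only the bin edges come from the plan):

```python
def bin_latencies(latencies_ns):
    """Histogram latency samples into the step-6 ranges: under 1500 ns,
    100 ns-wide bins from 1500 ns to 4000 ns, and above 4000 ns.
    Returns a dict mapping a bin label to a packet count."""
    edges = [1500 + 100 * i for i in range(26)]  # 1500, 1600, ..., 4000
    labels = (["<1500ns"]
              + [f"{lo}-{hi}ns" for lo, hi in zip(edges, edges[1:])]
              + [">4000ns"])
    counts = dict.fromkeys(labels, 0)
    for ns in latencies_ns:
        if ns < 1500:
            counts["<1500ns"] += 1
        elif ns >= 4000:
            counts[">4000ns"] += 1
        else:
            lo = 1500 + ((int(ns) - 1500) // 100) * 100
            counts[f"{lo}-{lo + 100}ns"] += 1
    return counts

# Example: three samples land in three different bins.
counts = bin_latencies([1400, 1550, 4100])
assert counts["<1500ns"] == 1
assert counts["1500-1600ns"] == 1
assert counts[">4000ns"] == 1
```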


> 4. Repeat the test with packet sizes ranging from 86 bytes to 8096 bytes to analyze how packet size impacts latency.
>
> 5. Vary traffic rates from 50% to 100% of the line rate. Observe how latency changes in relation to packet loss. Note: latency measurements may be skewed by packet loss, since lost packets are counted as having infinite latency; this must be addressed to ensure accurate results.
**Contributor:** We need to be specific about how to vary the traffic rates.
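One way to pin down the sweep in steps 4-5 is a fixed parameter grid. A minimal Python sketch, assuming a hypothetical traffic-generator callback (`send_burst` and its return values are illustrative, not a real sonic-mgmt API):

```python
# Packet sizes use the plan's endpoints (86 and 8096 bytes); the
# intermediate sizes and the 10% rate step are assumptions for illustration.
PACKET_SIZES = [86, 128, 256, 512, 1024, 2048, 4096, 8096]  # bytes
RATES_PERCENT = range(50, 101, 10)  # 50%, 60%, ..., 100% of line rate

def run_sweep(send_burst):
    """Run one measurement per (packet size, rate) pair and return
    {(size, rate): (avg_latency_ns, lost_packets)}."""
    results = {}
    for size in PACKET_SIZES:
        for rate in RATES_PERCENT:
            avg_latency_ns, lost_packets = send_burst(size, rate)
            # Report loss separately instead of folding lost packets into
            # the latency average as "infinite latency" samples.
            results[(size, rate)] = (avg_latency_ns, lost_packets)
    return results
```

Reporting loss alongside latency at each grid point addresses the note in step 5 that lost packets would otherwise skew the latency figures.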

@mssonicbld (Collaborator): /azp run

Azure Pipelines: could not run because the pipeline triggers exclude this branch/path.
