-
Lost packets are identified by the internal sequence number in each packet. E.g. if packet 12 was the last one received and the next packet to arrive is 15, then two packets (13 and 14) are added to the lost count. If packet 13 is received later (out of order), the lost count is reduced by one. It would help if you could add the following information:
-
Hello,
I'm running iperf on two directly connected nodes, one acting as the server and the other as the client.
To calculate the metrics on a per-packet basis, I use tcpdump to capture all packets generated in the test. To reduce the capture size, I only captured the header info (e.g., sequence number, timestamp, packet size).
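For reference, a header-only capture like this can be done with tcpdump's snaplen option. A sketch, where the interface name is an assumption and 5201 is iperf3's default port:

```shell
# Capture only the first 128 bytes of each UDP packet on the iperf3 port:
# enough for the Ethernet/IP/UDP headers plus iperf3's sequence number and
# timestamp, keeping the capture file small.
tcpdump -i eth0 -s 128 -w capture.pcap 'udp port 5201'
```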
Server:
iperf3 -s -B 10.0.0.1
Client:
iperf3 -c 10.0.0.1 -B 10.0.0.2 -b 256M -u -l 256 -t 30  # a UDP test
Network card: Intel I210 Gigabit NIC
System: Ubuntu 20.04 LTS, kernel version 5.4.0-90
In the server's live report, iperf3 reports quite some packet loss (15088/3749909 in total, 0.4%), but tcpdump actually manages to capture all packets without a single loss. I verified that the packet counts in iperf and tcpdump are the same. I tested iperf v3.11 and v3.9; both show the same behaviour.
So I want to ask: why does this behaviour happen? How is the packet-loss check implemented? When comparing packets, does iperf also verify that the packet contents match?
Best,
Chon Kit