Enable GTPu Path Monitoring #684
Conversation
Force-pushed from b515724 to c175fb0.
Force-pushed from c175fb0 to 1edc84e.
Force-pushed from 1edc84e to 9b15c70.
This pull request has been stale for 30 days and will be closed in 5 days. Comment to keep it open.
@badhrinathpa any additional feedback?
This pull request has been stale for 30 days and will be closed in 5 days. Comment to keep it open.
@@ -342,6 +346,44 @@ func (b *bess) SummaryLatencyJitter(uc *upfCollector, ch chan<- prometheus.Metric) {
	measureIface("Core", uc.upf.coreIface)
}

func (b *bess) SummaryGtpuLatency(uc *upfCollector, ch chan<- prometheus.Metric) {
Would the call to this function be successful if path monitoring is disabled?
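A minimal, self-contained sketch of the guard this question is getting at (the field, type, and helper names below are assumptions for illustration, not the PR's actual code): if the flag mirroring enable_gtpu_path_monitoring is off, the summary becomes a no-op instead of querying a module that was never inserted into the pipeline.

```go
package main

import "fmt"

// bess is a stand-in for the real collector type; the flag name is hypothetical.
type bess struct {
	enableGtpuPathMonitoring bool // mirrors "enable_gtpu_path_monitoring" in conf/upf.json
}

// SummaryGtpuLatency sketches the early-return guard: do nothing when disabled.
func (b *bess) SummaryGtpuLatency() {
	if !b.enableGtpuPathMonitoring {
		return // path monitoring disabled: the GTPu module is not in the pipeline, nothing to read
	}
	fmt.Println("query per-gNB GTPu latency and export the Prometheus summaries")
}

func main() {
	(&bess{enableGtpuPathMonitoring: false}).SummaryGtpuLatency() // no-op
	(&bess{enableGtpuPathMonitoring: true}).SummaryGtpuLatency()  // reports metrics
}
```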
Force-pushed from 0e40636 to fa19ce8.
@@ -116,7 +116,8 @@ registry: [upf-epc-bess](https://hub.docker.com/r/omecproject/upf-epc-bess),
* Downlink Data Notification (DDN) - notification only (no buffering)
* Basic QoS support, with per-slice and per-session rate limiting
* Per-flow latency and throughput metrics
* DSCP marking of GTPu packets by copying the DSCP value from the inner IP packet
* DSCP marking of GTPU packets by copying the DSCP value from the inner IP packet
* GTPu path monitoring
GTPu or GTPU? Let's keep it consistent.
Done, changed it to GTPu. FYI, there are a few more places where GTPU is used, but I am planning to update those along with the changes from SPGW to UPF in the comments of the different files.
The GTPu Path Monitoring works as follows:
The conf/upf.json file has an option to enable/disable this capability (enable_gtpu_path_monitoring), which is set to false by default. To enable GTPu Path Monitoring, the user sets "enable_gtpu_path_monitoring": true. When the FAR rules are inserted in the UPF (e.g., docker exec bess-pfcpiface pfcpiface -config conf/upf.json -simulate create), the PFCP agent sends the gNB information (IP address) to the GTPu Path Monitoring module in BESS. One GTPu packet per second is then sent towards each gNB, and the statistics are computed per gNB (IP address).
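As a rough illustration of the flow just described (the struct field, file handling, and gNB addresses below are assumptions for the sketch, not the PR's code), this is how the config flag and the one-probe-per-gNB-per-second cadence could fit together:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"time"
)

// conf mirrors only the option discussed here; the real conf/upf.json has many more fields.
type conf struct {
	EnableGtpuPathMonitoring bool `json:"enable_gtpu_path_monitoring"` // false by default
}

func main() {
	raw, err := os.ReadFile("conf/upf.json")
	if err != nil {
		panic(err)
	}
	var c conf
	if err := json.Unmarshal(raw, &c); err != nil {
		panic(err)
	}
	if !c.EnableGtpuPathMonitoring {
		return // capability disabled: no probes are sent
	}

	// In the real flow the gNB IPs arrive via PFCP when the FARs are installed;
	// they are hard-coded here purely for illustration.
	gnbs := []string{"198.51.100.1", "198.51.100.2"}

	ticker := time.NewTicker(time.Second) // one GTPu probe per gNB per second
	defer ticker.Stop()
	for range ticker.C {
		for _, gnb := range gnbs {
			fmt.Printf("send GTPu Echo Request to %s and record the round-trip time\n", gnb)
		}
	}
}
```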
Part of the new/modified pipeline is shown below. Additionally, the PR includes a Grafana dashboard so users can visualize the GTPu latency (min, mean, and max). In the Grafana screen capture below, 5 (simulated) "gNBs" were connected to the UPF, which is why the figure shows results for 5 different IP addresses. To simulate the gNBs, the UPF was directly connected to another machine whose only job is to receive GTPu Echo Requests from the UPF and to create a GTPu Echo Response for each received packet. The setup is shown below.
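For reference, a standalone sketch of that helper machine, under the assumption that a plain UDP listener on the GTP-U port is enough for the test (this is not the code that was actually used): it answers every GTPv1-U Echo Request (message type 1) with an Echo Response (message type 2) carrying a Recovery IE.

```go
package main

import (
	"log"
	"net"
)

func main() {
	conn, err := net.ListenUDP("udp", &net.UDPAddr{Port: 2152}) // GTP-U port
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	buf := make([]byte, 1500)
	for {
		n, peer, err := conn.ReadFromUDP(buf)
		if err != nil {
			log.Fatal(err)
		}
		// The mandatory GTPv1-U header is 8 bytes; byte 1 is the message type.
		if n < 8 || buf[1] != 0x01 {
			continue // not an Echo Request
		}
		resp := make([]byte, n, n+2)
		copy(resp, buf[:n])             // keep the TEID and sequence number from the request
		resp[1] = 0x02                  // Echo Response
		resp = append(resp, 0x0E, 0x00) // Recovery IE (type 14), restart counter 0
		length := uint16(len(resp) - 8) // length field excludes the 8-byte mandatory header
		resp[2], resp[3] = byte(length>>8), byte(length)
		if _, err := conn.WriteToUDP(resp, peer); err != nil {
			log.Println("send failed:", err)
		}
	}
}
```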
Note: This PR depends on BESS PR #19; that PR needs to be merged first, and then this one.