Detect unsynchronized TSCs across cores and fallback to the OS time source? #111
Comments
👋🏻 It would likely be fine to do this as part of the calibration phase as you suggest. An easier approach may simply be to check what clock source the kernel itself has chosen: I trust the kernel to do a better job of that than trying to do it in userland. This would only cover Linux x86-64 platforms, but that strikes me as acceptable. If you're running a hacked-together Ryzen laptop w/ macOS... I'd say you're on your own for support. 😅
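A minimal sketch of what such a kernel-trusting check might look like, assuming the standard Linux sysfs clocksource interface (`/sys/devices/system/clocksource/clocksource0/current_clocksource`); the helper name is made up for illustration and this is not how quanta actually wires it in:

```rust
use std::fs;

/// Hypothetical helper: returns true if the Linux kernel has selected the TSC
/// as its current clock source. If the sysfs file is missing (non-Linux,
/// unusual kernels), conservatively report `false` so the caller can fall
/// back to the OS time source.
fn kernel_trusts_tsc() -> bool {
    fs::read_to_string("/sys/devices/system/clocksource/clocksource0/current_clocksource")
        .map(|s| s.trim() == "tsc")
        .unwrap_or(false)
}

fn main() {
    if kernel_trusts_tsc() {
        println!("kernel clocksource is TSC; rdtsc-based timing looks safe");
    } else {
        println!("kernel does not use TSC; fall back to the OS time source");
    }
}
```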
Thank you for your prompt reply. OK, I will try to implement this in the calibration phase and open a PR.
The user reported the problem on Linux x86_64, so it should be fine. I wonder what would happen on Windows, but I will never know, since I do not have such a Ryzen laptop to test on. So I will just leave it at that for now.
Just an update, since I realized I gave some bad advice before: we wouldn't want to do this in calibration, but in the calls we make to see if the TSC is available. By the time we're in calibration, we've already decided to use the TSC, so the check has to come earlier than that.
As we discovered in #61, some machines have TSCs that are not synchronized across cores. On such a machine, `quanta::Instant` will not return monotonically increasing time when queried from different cores. A user of my library, who uses a ThinkPad with a Ryzen mobile CPU, seems to have this problem: moka-rs/moka#472 (comment)
I added a workaround by doing a saturating subtraction of `quanta::Instant` instead of a checked subtraction, so it will not return `None` when a time warp happens. But this is not a solution, as it only hides the problem. I did some searching and found this: https://bugzilla.kernel.org/show_bug.cgi?id=202525#c2
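To illustrate the workaround, here is a minimal sketch using `std::time::Instant`, whose `checked_duration_since`/`saturating_duration_since` methods mirror the subtraction behaviour described above; the exact call sites in moka are not shown in this thread, so treat this purely as an illustration of the idea:

```rust
use std::time::{Duration, Instant};

fn main() {
    let earlier = Instant::now();
    let later = Instant::now();

    // Checked subtraction: returns `None` if `earlier` is actually ahead of
    // `later`, which is what happens when unsynchronized TSCs cause a warp.
    let checked: Option<Duration> = later.checked_duration_since(earlier);

    // Saturating subtraction: clamps to zero instead of failing, which hides
    // the warp rather than fixing it.
    let saturating: Duration = later.saturating_duration_since(earlier);

    println!("checked = {:?}, saturating = {:?}", checked, saturating);
}
```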
So I am wondering if `quanta` can detect this situation and fall back to the OS time source. Maybe detecting it in the calibration phase? I am not sure how to do this, but the only way I can think of is to read the TSCs on different cores and see if the differences between the values are within some threshold. Maybe it is something like this?

- Pin a thread to each core, e.g. with the `core_affinity` crate.
- Use `std::sync::Barrier` to synchronize the threads.
- Read the TSC on each thread and compare the values.
But I have questions, like: what would be a good threshold? (Given the -1600 ms offset in the example above, maybe a few milliseconds would be large enough? Maybe this is application dependent?)
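A rough sketch of that detection idea, assuming the `core_affinity` crate (`get_core_ids`/`set_for_current`) and the raw `_rdtsc` intrinsic; the threshold is expressed in raw TSC ticks and the value used below is an arbitrary placeholder, not a recommendation or quanta's actual logic:

```rust
// Hypothetical detection sketch, not quanta's implementation.
#[cfg(target_arch = "x86_64")]
fn tscs_look_synchronized(max_delta_ticks: u64) -> bool {
    use std::sync::{Arc, Barrier};
    use std::thread;

    let cores = match core_affinity::get_core_ids() {
        Some(cores) if cores.len() > 1 => cores,
        _ => return true, // single core or unknown topology: nothing to compare
    };

    let barrier = Arc::new(Barrier::new(cores.len()));
    let mut handles = Vec::new();

    for core in cores {
        let barrier = Arc::clone(&barrier);
        handles.push(thread::spawn(move || {
            // Pin this thread to one core so each reading comes from a distinct TSC.
            core_affinity::set_for_current(core);
            // Wait until every pinned thread is ready, then read the TSC at
            // (roughly) the same moment on every core.
            barrier.wait();
            unsafe { core::arch::x86_64::_rdtsc() }
        }));
    }

    let readings: Vec<u64> = handles.into_iter().map(|h| h.join().unwrap()).collect();
    let min = *readings.iter().min().unwrap();
    let max = *readings.iter().max().unwrap();

    // If the spread between cores exceeds the threshold, treat the TSCs as
    // unsynchronized and let the caller fall back to the OS time source.
    max - min <= max_delta_ticks
}

#[cfg(target_arch = "x86_64")]
fn main() {
    // Placeholder threshold: a few million ticks is on the order of a
    // millisecond on a ~3 GHz machine; the "right" value is an open question.
    println!("TSCs synchronized: {}", tscs_look_synchronized(3_000_000));
}

#[cfg(not(target_arch = "x86_64"))]
fn main() {}
```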
A completely different solution for my library would be to use the OS time source (e.g. `std::time::Instant`) where time accuracy is important, and use the TSC (`quanta::Instant`) in other places.
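If that route were taken, it could be as small as an internal clock selector; this is a hypothetical sketch (the `Clock` type and its methods are made up for illustration, not moka's API), assuming `quanta::Instant` mirrors `std::time::Instant`'s `now()`/`elapsed()` in the version being used:

```rust
use std::time::Duration;

/// Hypothetical selector: precise OS time where accuracy matters,
/// cheap TSC-based time elsewhere.
enum Clock {
    Os(std::time::Instant),
    Tsc(quanta::Instant),
}

impl Clock {
    fn now_os() -> Self {
        Clock::Os(std::time::Instant::now())
    }

    fn now_tsc() -> Self {
        Clock::Tsc(quanta::Instant::now())
    }

    fn elapsed(&self) -> Duration {
        match self {
            Clock::Os(t) => t.elapsed(),
            Clock::Tsc(t) => t.elapsed(),
        }
    }
}

fn main() {
    let accurate = Clock::now_os(); // e.g. expiration timestamps
    let cheap = Clock::now_tsc();   // e.g. hot-path bookkeeping
    println!("{:?} {:?}", accurate.elapsed(), cheap.elapsed());
}
```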