This repository has been archived by the owner on Jun 28, 2022. It is now read-only.
It might be worth investigating using OS-provided locking mechanisms for each `DirectEdge` and `IndirectEdge` structure. In user space, this would probably look like a futex. In kernel space, this would be more challenging. I think the following is worth investigating:

- Incrementing `preempt_count`, if the symbol is available. As far as I understand it, this asks the kernel to keep the current task on the current CPU after an interrupt (presumably because it's using CPU-private data), and to potentially return to the interrupted task instead of scheduling in a new task. I think this is what happens within `spin_lock_bh`.
- Checking whether interrupts are disabled. If so, assume we're inside a `spin_lock_irqsave` or something similar, and just fall back on our internal spinlocks.
- Can we use a `struct mutex` in some cases? That would be quite nice, I think.
Things to watch out for:

- Dealing with RCU read-side critical sections. Disabling interrupts within these is "fine", but we could hit other issues.
- APIC timer interrupts.
Potential opportunities:

- Introduce the concept of coarse-grained locks. For a coarse-grained lock, do a `TryAcquire`; if that fails, register a wakeup notification with the lock and put the current CPU to sleep. The CPU holding the lock then needs to send an IPI to the next sleeping processor to wake it up (and this is the trickiest part, but here we could just look at how condition variables are implemented). Finally, we'd need a special interrupt handler that recognizes interrupts within a short sequence of code (containing the `HLT`) and just "does the right thing".
One benefit of coarse-grained locks here is the removal of any possibility of duplicate targets in `IndirectEdge` structures due to races. Another benefit would be that we could look into upgrading `IndirectEdge` structures into hash tables.