
Add DiscardWithThreshold support #5

Open
umputun opened this issue Apr 9, 2023 · 6 comments
Labels
help wanted Extra attention is needed

Comments

@umputun
Member

umputun commented Apr 9, 2023

This proposal is related to #4, but with a slight difference. Instead of discarding calls as soon as the group size is reached, the discard process will begin only after a certain threshold is reached. In other words, if the group size is N and the threshold is M (M must be >= N), discarding will only occur when the number of "in-flight" calls exceeds M. If N == M or M == 0, this should work the same way as the standard Discard option.

This is an intriguing challenge, and not as straightforward as it seems. Please feel free to volunteer if you're interested in exploring concurrency a bit further.

@umputun umputun added the help wanted Extra attention is needed label Apr 9, 2023
@umputun umputun changed the title Add DiscardWithThresold support Add DiscardWithThreshold support Apr 9, 2023
@gurza

gurza commented Apr 10, 2023

Hi @umputun, I can work on this

@thelex0

thelex0 commented Apr 11, 2023

#7 is obviously not final, but it's working. Any comments?

@umputun
Member Author

umputun commented Apr 11, 2023

I have some questions regarding this implementation:

  • what is notBlockCaller, and why is it even needed? Both groups are non-blocking unless the Preemptive() option is set (preLock)
  • your implementation works like the regular Discard with Preemptive only, but to me that makes no sense here. Preemptive means no "hanging/idle" goroutines allowed, and I don't see how this can be mixed with your new mode. I think this mode makes sense for non-preemptive only
  • you call (*g.waiters).Unlock() not in a defer inside the goroutine, but even before fn is called. I'm not sure why

@umputun
Member Author

umputun commented Apr 11, 2023

I've been considering the current implementation with multiple semaphores and lock/unlock pairs and wondered if it might be too complex. As a thought experiment, I propose using channels and traditional goroutines to simplify the process.

Instead of direct locks, we could have a single channel and a pool of goroutines created on demand. The number of goroutines would equal the group's size, and the capacity of a buffered channel would determine whether discarding is needed. Setting the capacity to 0 would effectively make the process preemptive without discarding. Discarding or not would be as simple as selecting on the channel with or without a default case. I'm still not sure how the current "non-preemptive" mode would be done in this system; something to think about.

Please note that I haven't tried this approach yet, but I believe it might be worth exploring to streamline our implementation.

@thelex0

thelex0 commented Apr 12, 2023

> I have some questions regarding this implementation:
>
>   • what is notBlockCaller, and why is it even needed? Both groups are non-blocking unless the Preemptive() option is set (preLock)
>   • your implementation works like the regular Discard with Preemptive only, but to me that makes no sense here. Preemptive means no "hanging/idle" goroutines allowed, and I don't see how this can be mixed with your new mode. I think this mode makes sense for non-preemptive only
>   • you call (*g.waiters).Unlock() not in a defer inside the goroutine, but even before fn is called. I'm not sure why

Now I see that the new mode doesn't seem to make sense, but the initial motivation was this: if preLock is true and there are more than N executing functions, a Go call should block. But if a WaitQueue(M) option were set, should the Go call for function N + 1 block, or should it wait in a goroutine?
For instance, blocking on the N + 1 call (N + 1 <= N + M, i.e. not above the threshold) while also waiting in a queue doesn't make sense: Go is not actually goroutine-safe (only a single caller is allowed, since WaitGroup hits a data race otherwise), so the WaitQueue option would be useless; the call to Go fn will block anyway.

nyckyta pushed a commit to nyckyta/syncs that referenced this issue Aug 25, 2024
…g channels

The change is inspired by the comment go-pkgz#5 (comment).
The solution relies on channels as the main synchronization primitive, instead of semaphore.
@nyckyta

nyckyta commented Aug 25, 2024

> I've been considering the current implementation with multiple semaphores and lock/unlock pairs and wondered if it might be too complex. As a thought experiment, I propose using channels and traditional goroutines to simplify the process.
>
> Instead of direct locks, we could have a single channel and a pool of goroutines created on demand. The number of goroutines would equal the group's size, and the capacity of a buffered channel would determine whether discarding is needed. Setting the capacity to 0 would effectively make the process preemptive without discarding. Discarding or not would be as simple as selecting on the channel with or without a default case. I'm still not sure how the current "non-preemptive" mode would be done in this system; something to think about.
>
> Please note that I haven't tried this approach yet, but I believe it might be worth exploring to streamline our implementation.

@umputun, I found this task interesting enough to spend some time on the issue. Please take a look at the draft: it reworks sized_group via a channels/workers approach for the pre-lock option in general, and adds the desired "discard after threshold" functionality in particular.

nyckyta pushed a commit to nyckyta/syncs that referenced this issue Sep 1, 2024
The change is inspired by the comment go-pkgz#5 (comment).
The solution relies on channels as the main synchronization primitive, instead of semaphore.