fix: Use LRUCache for extendSessionOnUse
#8683
base: alpha
Conversation
Thanks for opening this pull request!
@Moumouls could you add your thoughts?
Codecov Report
Attention: Patch coverage is
Additional details and impacted files

@@ Coverage Diff @@
##            alpha    #8683   +/-   ##
=======================================
  Coverage   93.50%   93.50%
=======================================
  Files         186      186
  Lines       14809    14811       +2
=======================================
+ Hits        13847    13849       +2
  Misses        962      962

☔ View full report in Codecov by Sentry.
I will reformat the title to use the proper commit message syntax.
Is this only a perf PR, or does it actually fix a bug, or both? Asking for the changelog entry.
LGTM
Bug and perf, I think @mtrezza
Then what would be a better PR title to inform developers what the bug is?
@dblythy Could you rewrite the PR title? If the PR includes a bug fix and a perf improvement, then the PR is a
@Moumouls Do you think you could take a look at this Auth thing as well? We just need to resolve the conflict to merge this, but I'm not sure how to do that. If you could make a suggestion, I'd commit the change.
How does this work in a group of server instances behind a load balancer, if the cache seems to be local?
The cache is just there to stop the method from being called constantly when the same session token is used. If there are multiple servers, the session will just continue to be extended on each one; it could be optimised by using a distributed cache.
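To illustrate the idea described above, here is a minimal sketch of a per-token throttle cache. This is not the actual parse-server code (the PR uses the `lru-cache` package); the class and method names here are hypothetical, and the structure is simplified to a plain `Map` with timestamps.

```javascript
// Illustrative sketch: a tiny TTL-based cache used as a throttle, so that
// repeated requests carrying the same session token only trigger one
// "extend session" call per window. Hypothetical names, not parse-server code.
class ThrottleCache {
  constructor(ttlMs) {
    this.ttlMs = ttlMs;
    this.entries = new Map(); // token -> timestamp of the last extension
  }

  // Returns true if this token should trigger a session extension now,
  // i.e. it has not been seen within the last ttlMs milliseconds.
  shouldExtend(token) {
    const now = Date.now();
    const last = this.entries.get(token);
    if (last !== undefined && now - last < this.ttlMs) {
      return false; // still inside the throttle window
    }
    this.entries.set(token, now);
    return true;
  }
}

const throttle = new ThrottleCache(500);
console.log(throttle.shouldExtend('r:abc')); // first use: true
console.log(throttle.shouldExtend('r:abc')); // same token within 500ms: false
```

Because the `Map` is local to the process, each server instance behind a load balancer keeps its own throttle state, which matches the behaviour described in the comment above.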
Ok, I understand that a shared cache in this case is a separate feature that doesn't exist currently, correct?
Is the session expiration date also in the cache? Would it be possible that a session has expired since getting into the cache and is still in the cache?
Correct
How do we get to the 500ms? Why not 100ms or 10s? |
It's an arbitrary number that is currently used; it just represents the window in which using the same token won't extend the session again (it will at the start of the 500ms window, but not again until 500ms later). Would you suggest higher?
Just thinking out loud: could there be a situation where a long burst of repeated requests within that window hits the LRUCache so many times that it essentially prevents the token from auto-extending and makes it expire? Thinking an LRUCache timeout of 5 seconds and a token validity of, say, 60 seconds?
No, I don't think so; the mechanism is a throttle. The only changes to the current implementation in this PR are:
Not sure I follow, @dblythy you answered with 500ms to @mman's example of 5s cache TTL. What if the cache TTL was 60 seconds, and the token TTL was 10 seconds? I'm trying to figure out where the boundaries are for the cache TTL, based on a token TTL that the developer can set freely in Parse Server options, lowest value being 1s I believe. |
The LRU cache here uses a fixed, non-configurable time of 500ms. I think the 5s was just presented as a potential scenario, which wouldn't make a difference anyway because, again, the mechanism is a throttle (it will trigger once at the start of every 500ms window). In any case, I'm not sure configuration of the ttl value is within scope; the current implementation is fixed at 500ms.
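The throttle-versus-debounce distinction driving this PR can be sketched as follows. This is an illustrative comparison with hypothetical helper names, not parse-server internals: a debounce resets its timer on every call, so constant traffic can postpone the action indefinitely, whereas a throttle fires immediately and then ignores further calls for the rest of the window.

```javascript
// Throttle: run fn at most once per windowMs, firing at the START of
// each window. Repeated calls inside the window are simply dropped.
function makeThrottle(fn, windowMs) {
  let lastRun = 0; // timestamp of the last accepted call
  return (...args) => {
    const now = Date.now();
    if (now - lastRun >= windowMs) {
      lastRun = now;
      fn(...args);
    }
  };
}

let extensions = 0;
const extendSession = makeThrottle(() => { extensions += 1; }, 500);

extendSession(); // runs immediately (start of the window)
extendSession(); // suppressed: still inside the 500ms window
console.log(extensions); // 1
```

Under a debounce, the same burst of calls would keep pushing the action out and it might never run; under the throttle above, the first call in each window always runs, which is why a burst of requests cannot starve the session extension.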
Yes, it was a theoretical number to see if we can come up with a scenario that would break the auto-extend algorithm. I think we all agree that we want to:
Point 3 being the most important, because this is how Parse Server worked for a long, long time, where users would be kicked out after months of actively using the app... and the only workaround was to set the session length to a nonsense number like 10 years... which is bad practice...
I would make the quick point that in order to properly implement all 3 points, we would need to transition to JWTs with refresh tokens. This current mechanism is a convenience method, but the long-term solution is JWTs.
On the other hand, transitioning to JWT would make everything more complex... the current system is easy to understand, and if we get this one right, I think it will work flawlessly...
I agree that the scope here is just the fix as currently presented. I'm trying to understand what constraints the TTL value has, and based on that determine a value that makes most sense. What would happen if the cache TTL was 60 seconds and the token TTL was 10 seconds? Just to understand whether @mman's concern is valid. |
No, the only way the concern is valid is if the session length is less than the throttle TTL (which is why I set it to 500ms).
Ok, so we have identified the constraint that the cache TTL must be less than the token TTL. Given that the cache TTL is not configurable, and if the minimum configurable token TTL is 1s, the cache TTL must be < 1s. To avoid race conditions, we'd need to take the cache lookup time into account, which is likely <1ms for a local cache. Taking other code execution times into account as well, it should be enough to reserve 10ms in total, but let's make this super safe with 100ms. So the cache TTL could be up to 900ms.
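The arithmetic above can be written out explicitly. The constants below are the figures from this discussion (a 1s minimum token TTL and a 100ms safety margin); they are not defined anywhere in parse-server under these names.

```javascript
// Sketch of the constraint derived above: the throttle (cache) TTL must
// stay below the smallest configurable session length, minus a generous
// margin for cache lookup and other code execution time.
const MIN_SESSION_TTL_MS = 1000; // lowest token TTL a developer can set (1s)
const SAFETY_MARGIN_MS = 100;    // super-safe allowance (actual lookup is <1ms)

const maxCacheTtlMs = MIN_SESSION_TTL_MS - SAFETY_MARGIN_MS;
console.log(maxCacheTtlMs); // 900

// Sanity check: the 500ms used in the PR satisfies the constraint.
console.log(500 <= maxCacheTtlMs); // true
```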
Pull Request
Issue
Currently, extendSessionOnUse functions as a debounce and does not clear the throttle store.
Closes: #8682
Approach
Tasks