
fix: Use LRUCache for extendSessionOnUse #8683

Open
wants to merge 12 commits into base: alpha

Conversation

dblythy
Member

@dblythy dblythy commented Jul 6, 2023

Pull Request

Issue

Currently, extendSessionOnUse functions as a debounce, and does not clear the throttle store.

Closes: #8682

Approach

  • Change the timeout logic from a debounce to a throttle: the first call with a session token extends the session, and subsequent calls within the window do not (see the sketch below)
  • Sessions that are served from the session cache are no longer looked up and extended, since the only way they enter the cache in the first place is by having been used recently (I believe it's safe to assume they have already been extended)
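
A minimal sketch of the intended throttle, assuming a recent version of the `lru-cache` package; `maybeExtendSession` and `extendSessionExpiry` are illustrative names, not the actual code in `src/Auth.js`:

```js
const { LRUCache } = require('lru-cache');

// Bounded cache instead of a plain object, so entries are evicted
// automatically and the store cannot grow without limit.
const throttle = new LRUCache({
  max: 10000, // upper bound on cached tokens
  ttl: 500,   // ms: window in which repeated tokens are ignored
});

function maybeExtendSession(sessionToken) {
  if (throttle.has(sessionToken)) {
    // Token seen within the last 500ms: skip the extra write.
    return;
  }
  // First use in this window: extend immediately (throttle, not debounce),
  // then remember the token until the TTL expires.
  throttle.set(sessionToken, true);
  return extendSessionExpiry(sessionToken); // hypothetical helper
}
```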

Tasks

  • Add new Parse Error codes to Parse JS SDK

@parse-github-assistant

Thanks for opening this pull request!

@dblythy dblythy changed the title Correctly throttle extendSessionOnUse fix: Correctly throttle extendSessionOnUse Jul 6, 2023
@dblythy
Member Author

dblythy commented Jul 6, 2023

@Moumouls could you add your thoughts?

@codecov

codecov bot commented Jul 6, 2023

Codecov Report

Attention: Patch coverage is 87.50000% with 2 lines in your changes missing coverage. Please review.

Project coverage is 93.50%. Comparing base (a68f71b) to head (2aeaf9a).

Files with missing lines | Patch % | Lines
src/Auth.js | 87.50% | 2 Missing ⚠️
Additional details and impacted files
@@           Coverage Diff           @@
##            alpha    #8683   +/-   ##
=======================================
  Coverage   93.50%   93.50%           
=======================================
  Files         186      186           
  Lines       14809    14811    +2     
=======================================
+ Hits        13847    13849    +2     
  Misses        962      962           

☔ View full report in Codecov by Sentry.

src/Auth.js (outdated review comment, resolved)
@dblythy dblythy requested a review from Moumouls July 6, 2023 13:12
@dblythy dblythy changed the title fix: Correctly throttle extendSessionOnUse perf: use LRUCache for extendSessionOnUse Jul 7, 2023
@parse-github-assistant

I will reformat the title to use the proper commit message syntax.

@parse-github-assistant parse-github-assistant bot changed the title perf: use LRUCache for extendSessionOnUse perf: Use LRUCache for extendSessionOnUse Jul 7, 2023
@mtrezza
Member

mtrezza commented Jul 7, 2023

Is this only a perf PR, does it actually fix a bug, or both? Asking for the changelog entry.

Member

@Moumouls Moumouls left a comment

LGTM

@Moumouls
Member

Moumouls commented Jul 7, 2023

Bug and perf, I think @mtrezza

@mtrezza
Member

mtrezza commented Jul 7, 2023

Then what would be a better PR title, to inform developers what the bug is?

@mtrezza
Member

mtrezza commented Jul 14, 2023

@dblythy Could you rewrite the PR title? If the PR includes a bug fix and a perf improvement, then the PR is a fix, since that is ranked higher, and the perf improvement could be added as an additional note.

@dblythy dblythy changed the title perf: Use LRUCache for extendSessionOnUse fix: Use LRUCache for extendSessionOnUse Jul 14, 2023
@mtrezza
Member

mtrezza commented Mar 24, 2024

@Moumouls Do you think you could take a look at this Auth thing as well? We just need to resolve the conflict to merge this, but I'm not sure how to do that. If you could make a suggestion, I'd commit the change.

@dblythy dblythy requested a review from a team January 28, 2025 09:39
Member

@mtrezza mtrezza left a comment

How does this work in a group of server instances behind a load balancer, given that the cache seems to be local?

@dblythy
Member Author

dblythy commented Jan 29, 2025

The cache is just there to prevent the method from being called constantly when the same session token is used. If there are multiple servers, the session will simply continue to be extended every time - it could be optimised by using a distributed cache.

@mtrezza
Member

mtrezza commented Jan 29, 2025

Ok, I understand that a shared cache in this case is a separate feature that doesn't exist currently, correct?

Sessions that are served from the session cache are no longer looked up and extended, since the only way they enter the cache in the first place is by having been used recently (I believe it's safe to assume they have already been extended)

Is the session expiration date also in the cache? Would it be possible that a session has expired since getting into the cache and is still in the cache?

@dblythy
Member Author

dblythy commented Jan 29, 2025

Ok, I understand that a shared cache in this case is a separate feature that doesn't exist currently, correct?

Correct

Is the session expiration date also in the cache

The TTL on the cache is 500ms. It's just to stop bursty requests from continuously extending the session.

@mtrezza
Member

mtrezza commented Jan 29, 2025

How do we get to the 500ms? Why not 100ms or 10s?

@dblythy
Member Author

dblythy commented Jan 29, 2025

It's an arbitrary number that is currently used - it just represents the window within which repeated use of the same token won't extend the session (it is extended at the start of the 500ms window, but not again until the window has passed). Would you suggest a higher value?

@mman
Contributor

mman commented Jan 29, 2025

Just thinking out loud: could a situation arise where a long burst of repeated requests within that window uses the token from the LRUCache so many times that it essentially prevents the token from auto-extending and lets it expire? Thinking of an LRUCache timeout of 5 seconds and a token validity of, say, 60 seconds.

@dblythy
Member Author

dblythy commented Jan 29, 2025

No, I don't think so - the mechanism is a throttle, so it should limit extensions to one per 500ms. Previously, when the mechanism acted as a debounce, this could occur.

The only changes to the current implementation in this PR are:

  • Use an LRU cache instead of a POJO
  • Change the debounce to a throttle (illustrated below)
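
For clarity, a generic illustration of the difference between the two mechanisms (this is not the PR code):

```js
// Debounce (old behaviour): every call resets the timer, so a steady
// stream of calls can keep postponing the wrapped function indefinitely.
function debounce(fn, wait) {
  let timer;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), wait);
  };
}

// Throttle (new behaviour): the first call runs immediately; later calls
// inside the window are dropped.
function throttle(fn, wait) {
  let blockedUntil = 0;
  return (...args) => {
    const now = Date.now();
    if (now < blockedUntil) return;
    blockedUntil = now + wait;
    fn(...args);
  };
}
```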

@mtrezza
Member

mtrezza commented Jan 29, 2025

Not sure I follow - @dblythy, you answered with 500ms to @mman's example of a 5s cache TTL. What if the cache TTL were 60 seconds and the token TTL 10 seconds? I'm trying to figure out where the boundaries for the cache TTL are, based on a token TTL that the developer can set freely in the Parse Server options, the lowest value being 1s, I believe.

@dblythy
Member Author

dblythy commented Jan 29, 2025

The LRU cache here has a fixed, non-configurable TTL of 500ms. I think the 5s was just presented as a potential scenario, which wouldn't make a difference anyway because, again, the mechanism is a throttle (it triggers once at the start of every TTL window).

In any case, I'm not sure making the TTL value configurable is within scope; the current implementation is fixed at 500ms.

@mman
Contributor

mman commented Jan 29, 2025

Yes, it was a theoretical number to see if we can come up with a scenario that breaks the auto-extend algorithm. I think we all agree that we want to:

  1. serve requests fast without necessarily checking the sessionToken (requiring a read) on every use
  2. extend the sessionToken validity (thus incurring a write) if we meet the criteria, and only once
  3. make the algorithm work reliably, so that when users use a valid sessionToken, it always gets extended and we do not end up in a situation where it expires despite being used

Point 3 is the most important, because this is how Parse Server worked for a long, long time: a user would be kicked out after months of actively using the app... and the only workaround was to set the session length to a nonsense number like 10 years... which is a bad practice...

@dblythy
Member Author

dblythy commented Jan 29, 2025

I would make the quick point that in order to properly implement all 3 points, we would need to transition to JWTs with refresh tokens. The current mechanism is a convenience, but the long-term solution is JWTs.

@mman
Contributor

mman commented Jan 29, 2025

I would make the quick point that in order to properly implement all 3 points, we would need to transition to JWTs with refresh tokens. The current mechanism is a convenience, but the long-term solution is JWTs.

On the other hand, transitioning to JWTs would make everything more complex... the current system is easy to understand, and if we get this one right, I think it will work flawlessly...

@mtrezza
Member

mtrezza commented Jan 29, 2025

I agree that the scope here is just the fix as currently presented. I'm trying to understand what constraints the TTL value has, and based on that determine a value that makes the most sense.

What would happen if the cache TTL were 60 seconds and the token TTL 10 seconds? Just to understand whether @mman's concern is valid.

@dblythy
Member Author

dblythy commented Jan 29, 2025

It would:

  • add 10 seconds to the token expiry the first time the token is used
  • not extend the token again for the next 60 seconds, meaning the token would expire

So no, the concern only applies if the session length is less than the throttle TTL (which is why I set it to 500ms).
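
A quick back-of-the-envelope check of that hypothetical (the values come from the example above, not from the PR):

```js
const cacheTtlMs = 60_000; // hypothetical throttle window
const tokenTtlMs = 10_000; // hypothetical session length

// The token is extended once at t = 0, so it expires at t = tokenTtlMs,
// but the next extension can only happen once the cache entry expires.
const tokenExpiresAt = tokenTtlMs;
const nextPossibleExtension = cacheTtlMs;
console.log(nextPossibleExtension > tokenExpiresAt); // true: the token expires before it can be extended again
```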

@mtrezza
Member

mtrezza commented Jan 29, 2025

Ok, so we have identified the constraint that the cache TTL must be less than the token TTL. Given that the cache TTL is not configurable, and that the minimum configurable token TTL is 1s, the cache TTL must be < 1s. To avoid race conditions, we'd need to take the cache lookup time into account, which is likely <1ms for a local cache. Taking other code execution times into account as well, reserving 10ms in total should be enough, but let's make this super safe with 100ms.

So the cache TTL could be up to 900ms.
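
To make that concrete, a hypothetical helper (not part of this PR) that derives the cache TTL from the shortest configurable session length, using the 100ms safety margin mentioned above:

```js
const SAFETY_MARGIN_MS = 100;

// `minSessionLengthMs` would come from the Parse Server configuration;
// the cache TTL must stay below it, with a margin for lookups and other
// code execution, and never exceed the 900ms ceiling derived above.
function throttleTtlMs(minSessionLengthMs) {
  return Math.max(0, Math.min(900, minSessionLengthMs - SAFETY_MARGIN_MS));
}

console.log(throttleTtlMs(1000)); // 900 -> matches the "up to 900ms" above
```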

  • What could be the difference between 500ms and 900ms cache TTL in practice?

  • Would it make sense to set the cache TTL as a function of the token TTL, which we know from the Parse Server config? In theory, someone could issue tokens with different TTLs, so maybe that's not feasible?

  • Another aspect: is it currently possible to configure a token TTL of 0s, and if yes, do we open any vulnerabilities by caching it for 1s?

Development

Successfully merging this pull request may close these issues.

extendSessionOnUse does not clear memory
4 participants