Right now, remote ID capture collects all IDs associated with a given study prior to any attempt at loading. This works fine, but it introduces a noticeable delay when the resource count is substantial (such as the 200K Observations we currently have in DS-Connect).

Filtering which resource types we cache based on the resourceTypes we are about to load would help, except that we may need the IDs for some additional resourceTypes in order to build out the appropriate references.

To handle this, we could recognize when we attempt to get an ID for a resourceType whose IDs we haven't pulled yet, then pull and cache those IDs on the spot. However, if we do this, we'll need to block the entire caching mechanism during the pull, or at least block any subsequent attempt to query the cache for the resourceType being populated.
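One way to sketch the lazy-population-with-blocking idea above. All names here (`LazyIdCache`, `fetch_ids`) are hypothetical, not from the actual codebase; the point is the per-resourceType lock, so populating Observation IDs blocks only Observation lookups rather than the whole cache:

```python
import threading
from typing import Callable, Dict, Optional

class LazyIdCache:
    """Cache remote IDs per resourceType, populating lazily on first access.

    fetch_ids is a hypothetical callable that pulls all remote IDs for a
    given resourceType (e.g. by paging through the server's search results)
    and returns a mapping of local key -> remote ID.
    """

    def __init__(self, fetch_ids: Callable[[str], Dict[str, str]]):
        self._fetch_ids = fetch_ids
        self._cache: Dict[str, Dict[str, str]] = {}
        self._locks: Dict[str, threading.Lock] = {}
        self._locks_guard = threading.Lock()

    def _lock_for(self, resource_type: str) -> threading.Lock:
        # One lock per resourceType, so a slow pull for one type does not
        # block lookups against types that are already cached.
        with self._locks_guard:
            return self._locks.setdefault(resource_type, threading.Lock())

    def get_id(self, resource_type: str, key: str) -> Optional[str]:
        with self._lock_for(resource_type):
            # The first caller for this resourceType pulls and caches its
            # IDs; concurrent callers block here until the pull finishes,
            # then read from the populated cache.
            if resource_type not in self._cache:
                self._cache[resource_type] = self._fetch_ids(resource_type)
        return self._cache[resource_type].get(key)

# Example usage with a stubbed fetch function:
cache = LazyIdCache(lambda rt: {"p1": f"{rt}/123"})
print(cache.get_id("Patient", "p1"))
```

Blocking only the resourceType being populated (rather than the entire cache) keeps the remaining load paths moving, at the cost of tracking one lock per type.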