First found this using uproot, but I believe the issue comes from fsspec-xrootd. I see a hang when reading several files successively from the dCache storage at LMU.

The read loop hangs after the 50th file has been processed, which I believe is the current setting for "max movers" on that dCache storage.

I'm not a dCache expert, but I believe the limit on the maximum number of "movers" causes requests to be queued once a client opens more connections than that maximum. I suspect some connections are never closed, and that this has something to do with the file handle cache introduced in #54, since calling invalidate_cache() makes the problem disappear. There are also no issues if I use f.read(...) instead of f.fs.cat_file(...).

Version used:
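For illustration, a minimal sketch of the failing access pattern, assuming hypothetical URLs on the affected endpoint (fsspec-xrootd registers the root:// protocol with fsspec; cat_file and invalidate_cache are standard fsspec filesystem methods):

```python
import fsspec

# Hypothetical file URLs on the affected dCache endpoint.
urls = [f"root://dcache.example.org//store/file_{i}.root" for i in range(100)]

for url in urls:
    with fsspec.open(url, "rb") as f:
        # Reading via the filesystem's cat_file hangs once the 50th file
        # is reached (the "max movers" limit on the dCache storage) ...
        data = f.fs.cat_file(url, start=0, end=1024)
        # ... whereas reading via the file object itself is fine:
        # data = f.read(1024)

    # Workaround: invalidating the cache drops the cached file handles
    # (and with them, presumably, the underlying connections):
    # f.fs.invalidate_cache()
```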
Indeed, that seems to work fine (for cache sizes up to around 30), and file_handle_cache_size also seems to be passed through correctly when using uproot.open. Thanks!
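For reference, a sketch of capping the handle cache from the uproot side, as described above; the URL and tree name are hypothetical, and file_handle_cache_size is assumed to be forwarded to the underlying fsspec-xrootd filesystem:

```python
import uproot

with uproot.open(
    # Hypothetical file; the option below is assumed to reach
    # the fsspec-xrootd filesystem backing the read.
    "root://dcache.example.org//store/file_0.root",
    file_handle_cache_size=30,  # stay below the site's "max movers" limit
) as f:
    tree = f["Events"]  # hypothetical tree name
    print(tree.num_entries)
```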