At the moment, memory usage is quite high because we load the entire granule (i.e., every beam in the granule) into memory at once. Instead, we could write each beam to the parquet file incrementally; since a GEDI granule has eight beams, peak memory usage should drop to roughly 1/8th of what it is now. Not essential immediately, but I can see this being useful for working with direct access. Imagine using very cheap compute with minimal memory to convert the HDF5 files to parquet in another S3 bucket - that could be a very low-cost way to enhance GEDI access.
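A minimal sketch of the idea, assuming `pyarrow` and `h5py` are available: write one parquet row group per beam with `pyarrow.parquet.ParquetWriter`, so only a single beam is held in memory at a time. The `beam_to_arrow_table` helper and the dataset names (`shot_number`, `lat_lowestmode`, `lon_lowestmode`) are illustrative only and would depend on the product level being subset.

```python
import h5py
import pyarrow as pa
import pyarrow.parquet as pq

GEDI_BEAMS = [
    "BEAM0000", "BEAM0001", "BEAM0010", "BEAM0011",
    "BEAM0101", "BEAM0110", "BEAM1000", "BEAM1011",
]


def beam_to_arrow_table(granule: h5py.File, beam: str) -> pa.Table:
    """Hypothetical helper: read only this beam's datasets into a pyarrow Table."""
    group = granule[beam]
    n = group["shot_number"].shape[0]
    return pa.table({
        "beam": pa.array([beam] * n),
        "shot_number": group["shot_number"][:],
        "lat_lowestmode": group["lat_lowestmode"][:],
        "lon_lowestmode": group["lon_lowestmode"][:],
    })


def granule_to_parquet(h5_path: str, parquet_path: str) -> None:
    """Write one row group per beam so peak memory is ~one beam, not the whole granule."""
    writer = None
    with h5py.File(h5_path, "r") as granule:
        for beam in GEDI_BEAMS:
            if beam not in granule:
                continue
            table = beam_to_arrow_table(granule, beam)
            if writer is None:
                # Create the writer lazily from the first beam's schema.
                writer = pq.ParquetWriter(parquet_path, table.schema)
            writer.write_table(table)  # appends a new row group
            del table  # release the beam before loading the next one
    if writer is not None:
        writer.close()
```

With something like this, the conversion could run on a small instance (or even a Lambda-sized worker) reading HDF5 from S3 and writing parquet back to another bucket.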