We moved from the 288 to the 290 for the quicklook processing, as we couldn't fix the issue with the obj_db.
When I ran `apero_database.py --update` to rebuild the database from the old folders (tellu and calib), the RAM usage increased linearly as shown below. The full rebuild took 33 h, with 200,000 files in tellu/.
I got no error messages at the end of apero_database.py, and the new quicklook290 seems to work as expected. I'm catching up with the nights of the current run.
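For anyone wanting to reproduce the RAM curve above, a minimal Linux-only sketch that samples the resident set size (VmRSS) of a running process from `/proc` and logs it over time (the pid of the running apero_database.py would be passed in; all names here are hypothetical, not part of APERO):

```python
import os
import time


def rss_kib(pid: int) -> int:
    """Return the resident set size (VmRSS) of `pid` in kiB, read from /proc (Linux only)."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])  # /proc reports the value in kB
    raise RuntimeError(f"VmRSS not found for pid {pid}")


def monitor(pid: int, interval_s: float = 60.0, samples: int = 10) -> None:
    """Print a timestamped RSS sample every `interval_s` seconds.

    Plotting these samples makes linear growth (as seen during the
    33 h rebuild) easy to spot.
    """
    for _ in range(samples):
        print(f"{time.strftime('%H:%M:%S')}  pid {pid}: {rss_kib(pid)} kiB")
        time.sleep(interval_s)
```

Tools such as `psutil` offer the same information portably; this sketch only assumes a Linux `/proc` filesystem.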
To me, if the RAM goes down without killing Python, it's not a "leak" of memory; it's just a lot being kept open (for some reason) that is then closed at some point. Though this is just semantics: I tried with v0.7 to fix this and close more things, but it is a complex problem.
I aim to address this in v0.8, and hope that having all this stuff "open" for so long goes away (which should also fix the problem with using multiprocessing, where the usage seems to climb much higher).
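As a general mitigation for per-task memory accumulating inside long-lived workers (not APERO's actual code, just a sketch of the standard-library technique), `multiprocessing.Pool` accepts `maxtasksperchild`, which recycles each worker process after a fixed number of tasks so whatever the worker kept open is returned to the OS when it exits:

```python
from multiprocessing import Pool


def process_file(path: str) -> int:
    # Placeholder for real per-file work (hypothetical); here we just
    # return the length of the path string.
    return len(path)


if __name__ == "__main__":
    paths = [f"file_{i}.fits" for i in range(1000)]
    # maxtasksperchild=100 restarts each worker after 100 tasks, so memory
    # held per task cannot accumulate for the whole 200,000-file run.
    with Pool(processes=4, maxtasksperchild=100) as pool:
        results = pool.map(process_file, paths)
    print(sum(results))
```

The trade-off is the cost of re-spawning workers, which is usually negligible compared with 33 h of processing.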