chore: calculate only missing contour levels #3224
Conversation
Force-pushed the branch from c36fae1 to 4cc2778.
I will try to point out the problems when I have more time, but I suggest you spend time learning React on your side, because the things you did with the hooks are very wrong.
That was very much expected, because I have no idea of React, but while we wait for Hamed I would like to show Luc the behavior of the contour plot cache. The contour generation time drops a lot when all the levels are cached.
Are you planning to refactor the code? If you're busy with something else, I can take care of the refactoring and fix the code.
I'm not planning to refactor it, but I would like to review it when it's ready.
I created a specialized context to handle contour calculations per nucleus:
nmrium/src/component/2d/ft/ContoursContext.tsx Lines 38 to 53 in f2d29f9
This requires additional work:
nmrium/src/component/2d/ft/ContoursContext.tsx Lines 23 to 30 in f2d29f9
We have a custom hook, useContours, which returns an object where each key is a spectrum ID and the value is an object with the following structure:
You need to create a new function to slice the contours (which include all the levels) based on the current zoom level:
nmrium/src/component/2d/ft/Contours.tsx Lines 80 to 86 in f2d29f9
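A slicing function like the one requested above might look like the following sketch. The name `sliceByLevelRange` and the `ContourLine` shape are assumptions for illustration, not the code at the permalink:

```typescript
// Sketch: given contours computed for all levels, keep only the levels
// that fall inside the range visible at the current zoom.
interface ContourLine {
  level: number;
  lines: number[]; // flat [x1, y1, x2, y2, ...] segment coordinates
}

function sliceByLevelRange(
  contours: ContourLine[],
  minLevel: number,
  maxLevel: number,
): ContourLine[] {
  return contours.filter((c) => c.level >= minLevel && c.level <= maxLevel);
}
```

Filtering the cached full set is cheap compared to recomputing contours, which is the point of keeping all levels in the cache and slicing per zoom.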
It is not feasible for big files to calculate all the levels at once.
@jobo322 Yes, this is what I was afraid of. What is the maximal number of points in both directions that seems reasonable to you? This makes things much more complex, because we cannot simply cache the results as we were thinking before. If we can currently cache the full analysis, we should merge this PR once Michael has validated it. We will then think, in another issue, about how to reduce the resolution and have 'quadrants' or something like that when the data is too big. I may help on this and add some methods in ml-spectra-processing.
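One way to reduce the resolution for very large matrices, as suggested above, is to downsample the matrix before contouring. This is only a rough sketch of the idea with a hypothetical helper, not a method from ml-spectra-processing:

```typescript
// Sketch: downsample a 2D matrix by an integer factor by keeping every
// factor-th point in each direction, so the contour algorithm runs on a
// much smaller grid when the data is too big.
function downsample(matrix: number[][], factor: number): number[][] {
  const result: number[][] = [];
  for (let i = 0; i < matrix.length; i += factor) {
    const row: number[] = [];
    for (let j = 0; j < matrix[i].length; j += factor) {
      row.push(matrix[i][j]);
    }
    result.push(row);
  }
  return result;
}
```

A real implementation would likely average or take the extremum over each block rather than picking a single point, to avoid missing narrow peaks; the quadrant idea mentioned above would instead contour sub-regions lazily.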
This comment is just for documentation. I did some tests with this file: That seems relatively big to me (2048 x 1024). I also did a lot of benchmarks creating many contours very close to the noise: https://github.com/mljs/conrec/tree/main/src/__tests__/data This specific test case generates 314'512'788 elements in the lines array and 101 levels. The process on my Mac M2 takes 8.4 s. In reality we should only calculate 10 levels at a time, and it should not be such an extreme case. With the new approach we should therefore wait at most 1 s the first time the contours are calculated.
@hamed-musallam Are you working on this PR? Currently it does not work at all, it seems to me, and it is very slow.
No, but I believe Alejandro is working on this PR. It's better to keep it open to preserve the comments, and we can reset the branch to main.
You may also check:
@jobo322 Could you try to rebase this branch?
Will be done in: |
No description provided.