Stats: Question 2 #20
To do:
- Reliability between measured value and ground truth. Indices of reliability --> Section 2.2.3 here
@jvelazquez-reyes the "Maybe do a stats comparison first and determine that the difference is null before doing that choice" box is checked; did you do that, or was it checked by accident?
@mathieuboudreau I have a piece of code (a t-test), implemented in R, to determine whether there is a significant difference between complex and magnitude. Regarding that: I was also thinking of something to assess a statistical difference within groups. For this, I have another piece of code to perform a one-way ANOVA, and post-hoc tests if needed. I think I can run the stats I have so far on the database you updated.
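For reference, here is a minimal Python/SciPy sketch of the two comparisons described above (the actual implementation is in R). The file name and the column names `t1_complex`, `t1_magnitude`, `t1`, and `site` are assumptions for illustration only.

```python
import pandas as pd
from scipy import stats

# Hypothetical export of the submissions database (column names are assumed)
df = pd.read_csv("database.csv")

# 1) Paired t-test: is there a significant difference between the T1 estimates
#    obtained from complex-fitted and magnitude-fitted data?
t_stat, p_val = stats.ttest_rel(df["t1_complex"], df["t1_magnitude"])
print(f"Paired t-test: t = {t_stat:.3f}, p = {p_val:.4f}")

# 2) One-way ANOVA across groups (e.g. sites), with Tukey HSD as a post-hoc
#    test if the ANOVA is significant (tukey_hsd requires SciPy >= 1.8)
groups = [g["t1"].to_numpy() for _, g in df.groupby("site")]
f_stat, p_anova = stats.f_oneway(*groups)
print(f"One-way ANOVA: F = {f_stat:.3f}, p = {p_anova:.4f}")

if p_anova < 0.05:
    print(stats.tukey_hsd(*groups))
```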
@jvelazquez-reyes sorry for the delayed response; I received a new macbook last Friday which has the new Apple M1 chip, which lead to some incompatibility requirements that I needed to resolve; see new section in the README. I was able to hack some of my old plotting scripts to figure out how to correct the datasets for temperature (well actually, correct the reference T1 values not the measured data itself), and to see if the trend in the corrected data goes the way we expected. Here is a showing the errors in T1 values for the measurements on spheres with serial numbers >= 42. Red is the uncorrected data, blue is the corrected data. As you can see, for most data points, the blue (corrected data) leads to reduced errors, which is a encouraging trend! I'll clean up the notebooks and push them, and then we'll need to integrate the correction in your analyses in a similar way. Also, as noted, we only have corrected values for the serial numbers >=42. I don't recall if we had the data for the other serial numbers, I'll take a look and if so we should integrate that temperature data in the NIST.py script. |
@mathieuboudreau that plot looks great! I think I have already integrated the correction into the analyses, but I'm looking forward to seeing the notebooks so I can verify I did it correctly. If you have the data for the other serial numbers, please send it to me so I can integrate it into the NIST.py script.
I emailed the folks who make the phantom to ask for the data. If I don't hear from them soon, maybe we can just skip it, since it doesn't appear to be a major source of error here, even though the correction does improve the data slightly.
@mathieuboudreau I changed the way the plots are rendered. In the attached image you can see two plots in the main panel. The one at the top shows the difference between magnitude and complex, with an option to display either the absolute or the percentage difference. Furthermore, by selecting sites in the sidebar panel, we can overlay multiple lines (one per site) within the same graph to show the results from different sites at the same time. At the bottom, a dispersion plot is displayed; in this case, I think showing a single site at a time is more convenient because we have the regression line and the data points. Finally, I added a table in the sidebar panel showing the correlation coefficient and p-value.
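The dashboard itself is built elsewhere (the sidebar/panel layout suggests a Shiny app), but the quantities it displays can be sketched in a few lines of Python; the file and column names below are assumptions.

```python
import pandas as pd
from scipy import stats

# Hypothetical per-sphere results for one site
df = pd.read_csv("site_results.csv")

# Absolute and percentage differences between the magnitude and complex fits
abs_diff = df["t1_magnitude"] - df["t1_complex"]
pct_diff = 100 * abs_diff / df["t1_complex"]

# Correlation coefficient and p-value shown in the sidebar table
r, p = stats.pearsonr(df["t1_magnitude"], df["t1_complex"])
print(f"Pearson r = {r:.3f}, p = {p:.4f}")
```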
@mathieuboudreau I received your email. So, the temperature correction tool we have is going to be applied to both phantom versions (serial numbers >= 42 and < 42). I was reading the email thread and also saw the recommendation to apply a fit to the data in a log-log representation. What I did was this: I added, and set as default, a new interpolation option that is quadratic (a low-order polynomial). I interpolated the data, already transformed to a log representation (base 10), using a quadratic fit. I also updated the JN with this new feature. I attached an image where you can see that the new interpolation is a little different from the previous ones (cubic and cubic spline, which were identical), and perhaps slightly better at high T1 values and at low and high temperatures. Is this the correct way to proceed with what Katy recommended? I'm about to push these new changes so that you can see the details, or you can ask me as well. Oh, I forgot to mention that I'm getting a
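A minimal sketch of a quadratic fit in a log-log (base 10) representation, assuming the reference data are tabulated as temperature vs. T1 for a single sphere; the numbers below are placeholders rather than the phantom's actual table, and this only illustrates the approach, not the notebook's exact code.

```python
import numpy as np

# Placeholder temperature/T1 table for one sphere (not real phantom values)
temperature_c = np.array([16.0, 18.0, 20.0, 22.0, 24.0, 26.0])
t1_ref_ms = np.array([1900.0, 1935.0, 1970.0, 2005.0, 2040.0, 2075.0])

# Fit a low-order (quadratic) polynomial to log10(T1) vs. log10(temperature)
coeffs = np.polyfit(np.log10(temperature_c), np.log10(t1_ref_ms), deg=2)

def t1_at(temp_c):
    """Interpolated reference T1 at the measured scan temperature."""
    return 10 ** np.polyval(coeffs, np.log10(temp_c))

print(t1_at(21.3))
```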
From #16
Question 2: Take one scan from each submission (or even each site, maybe) and compare them to determine whether the T1 values for each sphere agree with the reference reasonably well.
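A hedged sketch of what this comparison could look like, taking one scan per submission/site and computing the per-sphere error against the reference T1 values; the file and column names are assumptions.

```python
import pandas as pd

# Hypothetical table: one scan per site, one row per sphere
df = pd.read_csv("one_scan_per_site.csv")

# Per-sphere percent error of measured T1 against the reference value
df["percent_error"] = 100 * (df["t1_measured"] - df["t1_reference"]) / df["t1_reference"]

# Summarize agreement per site
print(df.groupby("site")["percent_error"].describe())
```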