I'm trying to use rsatoolbox to reproduce the RDMs provided by meadows-research.com, where we used the Multi-Arrangement Task to collect data and received the readily computed RDMs. I noticed there may be a difference in how rsatoolbox and meadows-research.com handle missing data when computing RDMs from the trials of the Multi-Arrangement Task. I wonder how the two algorithms differ, and whether the difference comes from randomness or from some important underlying assumptions. This is crucial to our experiment because we will be computing RDMs after excluding some trials' data.
data used:
Please see this Google folder for "data_demo.json" from 110 participants (data cleaned from the .json data files provided by meadows-research.com). Each participant worked on the Multi-Arrangement Task with the same 20 images.
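For context, here is a minimal sketch of loading the cleaned file. It deliberately does not assume any particular key names, since the actual layout is documented in the Colab notebook:

```python
import json

# Load the cleaned Meadows export (path assumed to be local to the notebook)
with open("data_demo.json") as f:
    data = json.load(f)

# Print the top-level structure without assuming specific key names;
# the exact layout is described in the Colab notebook.
if isinstance(data, dict):
    print("top-level keys:", list(data.keys())[:5], "... total:", len(data))
else:
    print("top-level list of length:", len(data))
```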
what I have tried:
Here's a Google Colab notebook with descriptions of the data structure and a demonstration of how I used rsatoolbox to try to reproduce the RDM for each participant. The notebook needs the "data_demo.json" file to run. In the notebook, I compare the rsatoolbox results with the RDMs provided by meadows-research.com. Out of the 110 participants, 107 participants' RDMs were reproduced exactly; the remaining 3 participants' RDMs from rsatoolbox differ from those of meadows-research.com. Note that only these three participants had missing data in their RDMs.
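As a rough illustration of the check I ran (a sketch, not the exact notebook code), the helper below compares two condensed RDM vectors while treating NaN entries as missing data; `rdm_rsatoolbox` and `rdm_meadows` are placeholder names for the 190-entry vectors (20 images, upper triangle, same pair order) from the two sources:

```python
import numpy as np

def compare_rdms(rdm_rsatoolbox: np.ndarray, rdm_meadows: np.ndarray, atol: float = 1e-8):
    """Compare two condensed RDM vectors, treating NaN entries as missing data."""
    nan_a = np.isnan(rdm_rsatoolbox)
    nan_b = np.isnan(rdm_meadows)
    same_missing = np.array_equal(nan_a, nan_b)   # are the same pairs missing?
    both_present = ~nan_a & ~nan_b
    values_match = np.allclose(rdm_rsatoolbox[both_present],
                               rdm_meadows[both_present], atol=atol)
    return same_missing, values_match

# Usage with placeholder vectors:
# same_missing, values_match = compare_rdms(vec_from_rsatoolbox, vec_from_meadows)
```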
experimental design:
Each participant works on the Multi-Arrangement Task with the same 20 images until they reach a min-evidence of 0.5 or hit the 25-minute limit. The maximum number of images per trial is set to 8, so a participant does not work on all 20 images in the first trial. Thus, it is possible that some participants never arranged certain pairs of images together, hence the missing data in their RDMs.
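To illustrate why missing pairs arise (this is a hypothetical sketch, not the Meadows or rsatoolbox algorithm), the helper below takes the set of images shown in each trial and reports which pairs were never arranged together; those pairs end up as NaN cells in the RDM:

```python
from itertools import combinations

def missing_pairs(trial_item_sets, n_items=20):
    """Return the image pairs never shown together in any trial.

    `trial_item_sets` is a list of iterables, one per trial, containing the
    indices (0..n_items-1) of the images arranged in that trial. Any pair not
    covered by at least one trial becomes a missing (NaN) cell in the RDM.
    """
    all_pairs = set(combinations(range(n_items), 2))
    covered = set()
    for items in trial_item_sets:
        covered |= set(combinations(sorted(items), 2))
    return sorted(all_pairs - covered)

# Example: with at most 8 images per trial, a single trial covers only
# C(8, 2) = 28 of the C(20, 2) = 190 pairs.
print(missing_pairs([range(8), range(6, 14)], n_items=20)[:5])
```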
Hi Yue!
Thanks for posting here, and for your detailed notes and code. I agree that the missing values and the size of the arena are key to the comparison. I want these to work the same in both versions. I'll keep you updated.