Summary
In neurostuff/NiMARE#466, @nicholst and @tyarkoni note that maskers that aggregate values across voxels before fitting the meta-analytic model will likely produce biased results, depending on the meta-analytic model. We should systematically evaluate the different estimators across a range of datasets.
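For concreteness, here is a minimal NiMARE-flavored sketch of the two workflows at issue, assuming an already-built Dataset (a hypothetical `ibma_dataset.json` with beta/varcope maps) and a hypothetical atlas file. The keyword for passing an aggregating labels masker to an estimator (written as `mask=` below) is an assumption on my part and may differ across NiMARE versions, so treat this as illustrative rather than a tested recipe.

```python
# Illustrative sketch only: assumes "ibma_dataset.json" (a NiMARE Dataset with
# beta/varcope images) and "atlas.nii.gz" (an ROI atlas) exist, and that the
# estimator accepts an aggregating labels masker via `mask=` as discussed in
# neurostuff/NiMARE#466; the exact keyword may differ by NiMARE version.
from nilearn.maskers import NiftiLabelsMasker
from nimare.dataset import Dataset
from nimare.meta.ibma import DerSimonianLaird

dset = Dataset("ibma_dataset.json")

# Analysis-first: fit the meta-analytic model at every voxel.
voxelwise_result = DerSimonianLaird().fit(dset)

# Aggregation-first: the labels masker averages values within each ROI
# *before* the model is fit -- the step suspected of introducing bias.
roi_masker = NiftiLabelsMasker(labels_img="atlas.nii.gz")
roiwise_result = DerSimonianLaird(mask=roi_masker).fit(dset)
```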
Additional details
@tyarkoni has performed some simulations, and did not find large bias across approaches for the non-combination, non-likelihood estimators (e.g., Hedges, WeightedLeastSquares, DerSimonianLaird, and probably PermutedOLS). The combination test estimators (Fishers and Stouffers) are probably heavily biased. The likelihood-based estimators (SampleSizeBasedLikelihood and VarianceBasedLikelihood) may or may not be biased.
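For reference, the combination tests behind the Stouffers and Fishers estimators are the standard Stouffer and Fisher formulas, sketched below in plain NumPy/SciPy (this is not NiMARE's implementation); whether ROI-level aggregation biases them in practice is exactly what the proposed evaluation should establish.

```python
# Standard combination-test formulas (plain NumPy/SciPy sketch, not NiMARE code).
import numpy as np
from scipy import stats

def stouffers(z_maps):
    """Stouffer's method: combined z = sum(z_i) / sqrt(k), per voxel or ROI."""
    z_maps = np.asarray(z_maps)             # shape: (k studies, n voxels/ROIs)
    k = z_maps.shape[0]
    return z_maps.sum(axis=0) / np.sqrt(k)

def fishers(p_maps):
    """Fisher's method: -2 * sum(log p_i) follows a chi-square with 2k dof."""
    p_maps = np.asarray(p_maps)
    k = p_maps.shape[0]
    chi2_stat = -2 * np.log(p_maps).sum(axis=0)
    return stats.chi2.sf(chi2_stat, df=2 * k)  # combined p-values
```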
@nicholst proposed the following options:
OLS - We ignore ROI variances, so the weighting is tau^2 + const (i.e., no weighting). The worst case is inefficiency, with (as per Mumford & Nichols) no FPR risk for one-sample or balanced two-sample comparisons. (However, the M&N result was calibrated against heterogeneity seen in task fMRI, not N=10 <-> N=1200 differences.)
GLS - We take the average ROI variances as "correct", but they are actually too small, so the weighting is tau^2 + TooSmallVar_i. I think this is OK, as the estimated tau^2 will make up for the variances being too small overall, so inferences are probably fine, just not as efficient as they could be. Another plus is that this approach will capture gross differences in sample size, which is important if the N's have a big range.
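To make the weighting difference concrete, here is a plain NumPy sketch (my own illustration, not NiMARE code) of the two schemes: equal weights for the OLS case, inverse-variance weights 1/(tau^2 + v_i) for the GLS case, plus a standard DerSimonian-Laird tau^2 estimate, which is where an inflated tau^2 can partially absorb per-study variances that are too small.

```python
import numpy as np

def dersimonian_laird_tau2(effects, variances):
    """Standard DerSimonian-Laird estimate of between-study variance tau^2."""
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)
    w = 1.0 / variances
    fixed_effect = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed_effect) ** 2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (q - (len(effects) - 1)) / c)

def ols_pooled(effects):
    """OLS: weights proportional to 1/(tau^2 + const), i.e., equal weights."""
    effects = np.asarray(effects, dtype=float)
    return effects.mean(), effects.std(ddof=1) / np.sqrt(len(effects))

def gls_pooled(effects, variances):
    """GLS: weights = 1/(tau^2 + v_i); too-small v_i are partly offset by tau^2."""
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)
    w = 1.0 / (dersimonian_laird_tau2(effects, variances) + variances)
    return np.sum(w * effects) / np.sum(w), np.sqrt(1.0 / np.sum(w))
```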
Analysis plan (tentative)
1. Collect subject-level data (z, p, beta, and varcope maps) from a range of datasets.
   - We can collect these data from Neuroscout.
2. Generate a range of dataset-level results with resampling.
   - Generate subset results with varying sample sizes.
   - Vary smoothness as well.
3. Run voxel-wise image-based meta-analyses, then average results across ROIs.
4. Run ROI-wise image-based meta-analyses.
5. Compare results of the two approaches, treating the former (analysis-first) as the ground truth.
   - As the most basic test, we can perform pair-wise comparisons between analysis-first and aggregation-first results from the same estimators and datasets (see the sketch after this list).
   - We can also dig into dataset parameters/characteristics, which might clarify the sources of bias.
   - Parameters to investigate:
     - Sample size characteristics (e.g., mean sample size, or perhaps holding dataset sample sizes constant in some analyses?)
     - Smoothness
     - Original contrast variance levels?
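Below is a hedged sketch of the pair-wise comparison described in step 5, assuming the voxel-wise statistic map, the ROI label array, and the ROI-wise (aggregation-first) estimates are already in hand as NumPy arrays; the function names and the simple bias/RMSE/correlation summaries are illustrative choices, not a fixed protocol.

```python
import numpy as np
from scipy import stats

def roi_average(voxel_stat_map, roi_labels):
    """Analysis-first reference: average a voxel-wise statistic within each ROI."""
    rois = np.unique(roi_labels[roi_labels > 0])
    return np.array([voxel_stat_map[roi_labels == label].mean() for label in rois])

def compare_pipelines(analysis_first, aggregation_first):
    """Pair-wise comparison for one estimator/dataset combination."""
    diff = np.asarray(aggregation_first) - np.asarray(analysis_first)
    r, _ = stats.pearsonr(analysis_first, aggregation_first)
    return {
        "bias": diff.mean(),                  # mean signed difference across ROIs
        "rmse": np.sqrt((diff ** 2).mean()),
        "correlation": r,
    }
```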