Statistical parametric map with voxel activations on an fMRI scan.

In functional magnetic resonance imaging (fMRI) data processing, second-level Bayesian inference refers to the application of Bayes factors (BFs) as indicators in second-level (group) analysis of regional brain activity. Bayesian inference has emerged as a competing alternative to p-values and to frequentist treatments of type I and type II errors, primarily because Bayes factors allow experimenters to measure statistical evidence given prior information and incoming data rather than assuming a fixed, pre-set parameter value. Because fMRI post-processing is notorious for high false-positive activation rates, techniques including Bonferroni correction and control of the false discovery rate (FDR) have been implemented to minimize type I errors.[1] However, recent reports suggest that these frequentist tools may be too liberal or too conservative in controlling type I errors, proposing voxel-wise thresholding with random field theory (RFT) familywise error (FWE) correction as an appropriate balance. Nevertheless, follow-up discussions suggest that even popular methods such as RFT do not attain their nominal significance levels and instead inflate false-positive rates.

Since Bayesian inference does not presume that parameter values, or effect sizes, are precisely equal to a definite value, type I/II error interpretations lose their utility. Instead, by applying Bayes factors, uncertainty about the effect size can be expressed as a probability distribution derived from a prior distribution, the incoming data, and the resulting posterior distribution over parameters. A Bayesian framework is thus free from the presumption that effects are exactly zero under a null hypothesis, rendering it less vulnerable to inflated false-positive rates. In contrast to a p-value, which quantifies the probability of observing values of a test statistic at least as extreme as the observed result, a Bayesian framework permits researchers to express posterior uncertainty about hypotheses of voxel activity. The capacity of Bayes factors to accept hypotheses, including the null, based on a ratio of posterior probabilities between H0 (the null hypothesis) and H1 (the alternative hypothesis) allows scientists to decide more readily whether to accept the null or the alternative.[2][3]

Bayesian vs. Frequentist Approaches to fMRI Statistical Analysis

Graphs for a Gaussian distribution using Bayes' theorem: the prior distribution p(x) (blue) with m0=20, λ0=1; the likelihood p(y∣x) (red) with mD=25, λD=3; and the resulting posterior p(x∣y).
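The numbers in the figure caption can be verified with the standard conjugate normal update: with known precisions, the posterior precision is the sum of the prior and data precisions, and the posterior mean is their precision-weighted average. A minimal sketch in Python, using the caption's example values (this is a generic illustration, not tied to any fMRI software):

```python
# Conjugate normal-normal update with known precisions, using the
# figure's example values: prior m0=20, lam0=1; likelihood mD=25, lamD=3.
def posterior_normal(m0, lam0, mD, lamD):
    lam_post = lam0 + lamD                      # precisions add
    m_post = (lam0 * m0 + lamD * mD) / lam_post # precision-weighted mean
    return m_post, lam_post

m_post, lam_post = posterior_normal(20, 1, 25, 3)
print(m_post, lam_post)  # → 23.75 4
```

The posterior mean (23.75) lies between the prior mean and the data mean, pulled toward the data because the likelihood carries more precision, matching the behaviour shown in the figure.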

Recent approaches to minimizing false-positive rates during fMRI data analysis include preventative methods such as Bonferroni correction and control of the false discovery rate (FDR).[1] Although alternative techniques such as voxel-wise thresholding with random field theory (RFT) familywise error (FWE) correction have been proposed to resolve the inflated type I or type II error rates of the aforementioned techniques, they too have been demonstrated not to attain their claimed nominal significance levels in analyses run on software such as FSL, SPM, and AFNI.[4][3] Recent discussions suggest that Bayesian approaches to multiple comparison correction could replace frequentist error-correction methods, including thresholding techniques based on p-values. Frequentist frameworks treat parameters as fixed but unknown quantities, so type I and type II errors are defined with respect to fixed effect sizes: because effect sizes are not treated as random, they are assumed to be either exactly zero or not. A Bayesian framework, in contrast, treats parameters as random variables with a probability distribution, expressing uncertainty about effect sizes; the prior distribution is updated with incoming evidence to generate a posterior distribution over parameters.[3]

Bayes’ theorem underpins Bayesian inference: a prior distribution over hypotheses is transformed into a posterior distribution with the introduction of new evidence and data. Bayes’ theorem can be formulated as follows:

P(H∣D) = P(D∣H) P(H) / P(D)

where P(H∣D) is the posterior distribution of the hypothesis H given the incoming evidence/data D, P(D∣H) denotes the likelihood of the data, P(H) is the prior distribution, and P(D) represents the marginal probability, a normalizing constant for the numerator. In the context of hypothesis testing with, for example, two mutually exclusive and exhaustive hypotheses H0 and H1, a ratio of their posterior probabilities, P(H1∣D)/P(H0∣D), can be obtained. The Bayes factor then equates to the ratio of the amount of evidence provided by the data for H1 and H0, BF10 = P(D∣H1)/P(D∣H0).[5] By extending this conception of the Bayes factor as a critical ratio between the evidential value of H1 and H0, the method can be applied to fMRI voxel-activation analyses. During fMRI analysis, a null hypothesis H0 characterizes the absence of significant activity within a given voxel following one-sample t-tests performed in the group study. Correspondingly, H0 states that the activity in the voxel for condition A does not deviate from the voxel activity in condition B (i.e. is neither greater nor smaller). In contrast, H1 states that activity in a given voxel for condition A is either greater or smaller than condition B’s voxel activity. These hypotheses can be framed in terms of Bayesian analysis via Bayes factors, where BF10 and BF01 reveal how strongly the observed neuroimaging data support H1 over H0, or vice versa, in a given voxel. Proponents of Bayes factors as a replacement for p-values for assessing hypothesis certainty posit the following arguments:
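The Bayes factor as a ratio of evidence can be illustrated with a toy computation for two point hypotheses about a voxel's mean activation under Gaussian noise. The data values and hypothesis means below are invented for illustration; this is a hedged sketch of the likelihood-ratio idea, not an SPM procedure:

```python
# Toy Bayes factor for two point hypotheses about a voxel's mean
# activation, assuming i.i.d. Gaussian observations with known sd.
from math import exp, pi, sqrt

def normal_pdf(x, mean, sd):
    return exp(-((x - mean) ** 2) / (2 * sd ** 2)) / (sd * sqrt(2 * pi))

def bayes_factor_10(data, mean0, mean1, sd):
    # BF10 = P(D | H1) / P(D | H0): ratio of likelihoods of the data
    # under the alternative (mean1) and the null (mean0).
    lik0 = lik1 = 1.0
    for x in data:
        lik0 *= normal_pdf(x, mean0, sd)
        lik1 *= normal_pdf(x, mean1, sd)
    return lik1 / lik0

data = [0.8, 1.1, 0.9, 1.3]  # hypothetical voxel contrast values
bf10 = bayes_factor_10(data, 0.0, 1.0, 1.0)
# bf10 ≈ 8.17 here: the data are about eight times more likely
# under H1 (mean 1) than under H0 (mean 0)
```

Note that BF01 is simply the reciprocal, 1/BF10, so the same computation quantifies evidence for the null.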

A graph of two different normal curves representing null and alternative distribution. Two normal distributions have different means/locations, but same variances/scales
  • Bayes factors provide more straightforward interpretations than p-values by directly relating the support that the observed data provide for H1 and H0 during data analysis.
  • P-values do not directly quantify the probability of H0 or H1, which makes the posterior probabilities behind Bayes factors more informative, especially in studies with small sample sizes.
  • Bayes factors empower researchers to accept the null hypothesis, in contrast to p-values, which restrict researchers to either rejecting or failing to reject it.
  • By utilizing appropriate priors, notorious difficulties with p-hacking, the practice of exploiting analytic flexibility in thresholding to generate acceptably small p-values, can be avoided.
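Once a Bayes factor is computed, the evidence categories published by Kass and Raftery (1995), cited above, give one conventional way to read it. A small helper sketch (the function name is illustrative; the thresholds follow their published scale):

```python
def kass_raftery_label(bf10):
    """Evidence category for a Bayes factor BF10, per Kass & Raftery (1995)."""
    if bf10 < 1:
        return "favours H0"
    if bf10 < 3:
        return "not worth more than a bare mention"
    if bf10 < 20:
        return "positive"
    if bf10 < 150:
        return "strong"
    return "very strong"

print(kass_raftery_label(8.2))  # → positive
```

Unlike a fixed p < 0.05 cutoff, this scale is graded in both directions: values below 1 accumulate evidence for the null rather than merely failing to reject it.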

Implementation with Statistical Parametric Mapping (SPM) Software


Second-level Bayesian inference can be performed on fMRI datasets via several software packages, such as Statistical Parametric Mapping 12 (SPM-12) and FMRIB’s Software Library (FSL), based on Markov chain Monte Carlo (MCMC) sampling techniques. Cognitive neuroscience researchers have deployed Bayesian analysis primarily for dynamic causal modeling, parameter estimation, and first-level analysis. At a high level, an end-to-end second-level Bayesian analysis in SPM-12 can be expressed in the following systematic steps:[3][6]

  1. Four-dimensional dataset collection via either private fMRI patient experimentation or open-source data repositories such as OpenfMRI and OpenNeuro.
  2. Pre-processing and first-level analysis: Assuming that raw dataset collection and structuring have already been fulfilled, researchers typically proceed with preprocessing and first-level analysis of the scanned images. First, image-correction algorithms can be run via custom scripts or SPM-12 preprocessing packages to minimize scanning artifacts. Following artifact correction, slice-timing correction, motion correction, co-registration with structural images, normalization, and spatial smoothing are performed. The first-level analysis comprises creating an individual general linear model (GLM) for each subject in the dataset and generating statistical parametric maps (SPMs). SPMs indicate the strength of correlation between the modeled time series (onset times convolved with the hemodynamic response function (HRF)) and the corresponding time series collected during experimentation.
  3. Second-level analysis (group level): In SPM-12, a Bayesian second-level inference module can be deployed by configuring the output from the classical inference module; that is, a classical inference model serves as the input for Bayesian second-level inference. A factorial design specification must be added to SPM-12’s batch editor with a one-sample t-test selected as the design. Following design specification, a “model estimation” module should be added with classical inference activated for model fitting. An additional “model estimation” module should then be added in the batch editor, this time deploying second-level Bayesian inference.
  4. Results collection and interpretation via thresholding: Crucially, the results extracted from Bayesian second-level inference can be thresholded with a criterion that specifies an effect size and a natural-logarithm threshold for the Bayes factor.
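The thresholding in step 4 can be sketched as a joint criterion on effect size and logBF. The function and parameter names below are illustrative assumptions for a voxelwise map, not SPM-12's actual interface:

```python
import numpy as np

def threshold_map(effect_map, log_bf_map,
                  effect_thresh=0.1, log_bf_thresh=np.log(3)):
    # Keep voxels whose absolute effect size exceeds the effect-size
    # criterion AND whose log Bayes factor exceeds the logBF criterion
    # (logBF > ln 3 corresponds to BF10 > 3).
    return (np.abs(effect_map) > effect_thresh) & (log_bf_map > log_bf_thresh)
```

Requiring both criteria means a voxel must show a non-trivial effect size and sufficient evidence for H1 before it is marked active.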

Comparisons to Other Inference Methods


In a study using both classical inference and Bayesian inference for model estimation on a prepared open-source dataset across varying moral conditions, generally higher sensitivity was found for second-level Bayesian analysis.[3] In particular, Bayesian second-level analysis provides more sensitive results than voxelwise FWE inference while producing more conservative results than clusterwise FWE inference. These results suggest that Bayesian inference may be a competitive statistical tool, controlling false-positive rates better than clusterwise FWE while maintaining higher sensitivity than voxelwise FWE inference. The study also compared false alarm rates (i.e. the ratio of voxels marked as active in the analysis of noise-added images but inactive in the analysis of the original images, to all voxels marked as active in the analysis of the noise-added images) and hit rates (i.e. the ratio of voxels marked as active in both analyses to voxels marked as active in the analysis of the original images). Bayesian inference outperforms clusterwise inference in terms of false alarms and shows more sensitive performance in hit rates than voxelwise FWE inference. Since second-level Bayesian inference can produce comparatively lower false alarm rates and higher hit rates, the approach can more faithfully reproduce the results from noise-free fMRI images even when random noise is added. Additionally, a comparative analysis of t-statistics during data validation found that Bayesian inference may be more robust to fluctuations in sample size than classical inference.
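The false alarm and hit rates defined above can be written out directly. The sketch below operates on boolean activation maps; the function and variable names are illustrative:

```python
import numpy as np

def false_alarm_and_hit_rates(active_orig, active_noisy):
    # active_orig:  boolean map of voxels marked active from the original images
    # active_noisy: boolean map from the noise-added images
    # False alarm rate: active only in the noise-added analysis, as a
    # fraction of all voxels active in the noise-added analysis.
    false_alarm = (active_noisy & ~active_orig).sum() / active_noisy.sum()
    # Hit rate: active in both analyses, as a fraction of voxels active
    # in the original analysis.
    hit = (active_noisy & active_orig).sum() / active_orig.sum()
    return false_alarm, hit
```

Lower false alarm rates combined with higher hit rates indicate that an inference method reproduces the noise-free activation pattern more faithfully.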

Bayesian Inference as an Alternative to P-Values


The typical p-value threshold for publication, p < 0.05, has raised concerns within the brain and psychological sciences due to its incapacity to provide strong evidence directly supporting H1. Although some researchers have argued instead for a threshold of p < 0.005, this would require fMRI studies to increase their sample sizes by at least 60%.[7] The logBF value, however, allows cognitive neuroscientists to quantify uncertainty and the strength of belief in the presence of evidence supporting H1 given the available data. Crucially, a p-value does not permit this framework of hypothesis-evidence uncertainty analysis. Rather, p-values quantify the unusualness of the observed data under the null hypothesis, which leaves open the possibility that the data are even more likely under a well-specified alternative hypothesis. As a result, classical inference employing multiple tests is more susceptible to inflated false positives.[8]
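One simple bridge between a classical test and a Bayes factor is the BIC approximation discussed by Wagenmakers and colleagues, in which BF01 ≈ exp((BIC1 − BIC0)/2). The sketch below applies it to a one-sample comparison (H0: mean fixed at zero vs. H1: free mean) under Gaussian assumptions; the data values are invented and the implementation is illustrative, not SPM's computation:

```python
from math import log, exp

def bic_gaussian(data, mean, k):
    # BIC up to additive constants shared by both models (they cancel
    # in the difference); assumes non-degenerate data (rss > 0).
    n = len(data)
    rss = sum((x - mean) ** 2 for x in data)
    return n * log(rss / n) + k * log(n)

def approx_bf01(data):
    # H0: mean fixed at 0 (k=1, variance only); H1: free mean (k=2).
    mean1 = sum(data) / len(data)
    bic0 = bic_gaussian(data, 0.0, k=1)
    bic1 = bic_gaussian(data, mean1, k=2)
    return exp((bic1 - bic0) / 2.0)  # BF01 < 1 favours H1
```

Unlike a p-value, the resulting BF01 can exceed 1 and thereby accumulate evidence for the null itself.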

Meta-Analysis and Prior Distributions


Several drawbacks of Bayesian fMRI analysis have been discussed in the neuroinformatics literature, including its inability to directly address false-positive rates in second-level (group) voxelwise analysis.[9] Previously proposed solutions have attempted multiple comparison correction in voxelwise fMRI analysis via adjustments of prior distributions based on the given population of voxels to be tested.[10][11] However, since even a minor adjustment to the prior distribution can proportionately affect posterior probabilities and the outcomes of Bayesian analysis, it becomes crucial to construct well-informed and objective priors. In pursuit of more robust prior distributions, reported solutions have determined that image-based meta-analyses of relevant previous fMRI studies can provide critical prior information that accurately informs second-level outcomes during voxel-wise Bayesian inference. Image-based meta-analysis was indeed found to significantly improve posterior outcomes in terms of sensitivity and selectivity scores. However, recent discussions also note the practical difficulty of implementing image-based meta-analyses, given the lack of diverse, topic-relevant statistical image-sharing data at scale.[10]
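A meta-analytically informed prior of the kind described can be sketched as inverse-variance-weighted pooling of previous studies' effect estimates at a voxel. This is a generic fixed-effect sketch with illustrative names, not the cited method's exact procedure:

```python
def meta_prior(effects, ses):
    # effects: per-study effect estimates at a voxel
    # ses:     their standard errors
    # Inverse-variance weighting: precise studies dominate the pooled
    # mean; the pooled variance shrinks as studies accumulate.
    ws = [1.0 / se ** 2 for se in ses]
    mean = sum(w * e for w, e in zip(ws, effects)) / sum(ws)
    var = 1.0 / sum(ws)
    return mean, var  # parameters of a normal prior for the voxel's effect
```

The returned mean and variance would then parameterize the normal prior used in the voxel's second-level Bayesian update.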

Due to the scarcity of large-scale, topic-relevant image-based datasets for second-level fMRI investigations, experimenters have pursued coordinate-based meta-analysis to enhance the performance of Bayesian analysis. Modified methodologies employing coordinate-based meta-analysis have produced performance comparable to image-based implementations.[10] Notably, previous investigations combining meta-analysis and Bayesian analysis of fMRI data demonstrated that these inferential models can localize the neurofunctional areas responsible for response inhibition in the brain. In particular, response inhibition has been found to be associated with a non-selective mechanism that may be triggered by any imperative stimulus, including Go stimuli, in uncertain contexts. Past Go/NoGo designs in the neuroimaging literature have typically employed classical null hypothesis significance testing, preventing researchers from accepting the null hypothesis. A Bayesian, meta-analysis-driven approach to group-level activity, however, produced results that extend beyond previous frequentist analyses, which could not rule out the selective account in which inhibition is triggered only by inhibitory stimuli. Instead, the authors observed an overlap between response inhibition areas and areas demonstrating the “practical equivalence of neuronal activity located in the right dorsolateral prefrontal cortex, parietal cortex, premotor cortex, and left inferior frontal gyrus.”[12]

References

  1. ^ a b Benjamini, Yoav; Hochberg, Yosef (1995). "Controlling the False Discovery Rate: A Practical and Powerful Approach to Multiple Testing". Journal of the Royal Statistical Society: Series B (Methodological). 57 (1): 289–300. doi:10.1111/j.2517-6161.1995.tb02031.x.
  2. ^ Cite error: The named reference :3 was invoked but never defined (see the help page).
  3. ^ a b c d e Cite error: The named reference :1 was invoked but never defined (see the help page).
  4. ^ Cite error: The named reference :2 was invoked but never defined (see the help page).
  5. ^ Kass, Robert E.; Raftery, Adrian E. (1995-06-01). "Bayes Factors". Journal of the American Statistical Association. 90 (430): 773–795. doi:10.1080/01621459.1995.10476572. ISSN 0162-1459.
  6. ^ "fMRI Tutorial #7: 2nd-Level Analysis — Andy's Brain Book 1.0 documentation". andysbrainbook.readthedocs.io. Retrieved 2022-11-28.
  7. ^ Button, Katherine S.; Ioannidis, John P. A.; Mokrysz, Claire; Nosek, Brian A.; Flint, Jonathan; Robinson, Emma S. J.; Munafò, Marcus R. (2013). "Power failure: why small sample size undermines the reliability of neuroscience". Nature Reviews Neuroscience. 14 (5): 365–376. doi:10.1038/nrn3475. ISSN 1471-0048.
  8. ^ Wagenmakers, Eric-Jan; Marsman, Maarten; Jamil, Tahira; Ly, Alexander; Verhagen, Josine; Love, Jonathon; Selker, Ravi; Gronau, Quentin F.; Šmíra, Martin; Epskamp, Sacha; Matzke, Dora; Rouder, Jeffrey N.; Morey, Richard D. (2018-02-01). "Bayesian inference for psychology. Part I: Theoretical advantages and practical ramifications". Psychonomic Bulletin & Review. 25 (1): 35–57. doi:10.3758/s13423-017-1343-3. ISSN 1531-5320. PMC 5862936. PMID 28779455.
  9. ^ Han, Hyemin (2022). "A Novel Method to Use Coordinate Based Meta-Analysis to Determine a Prior Distribution for Voxelwise Bayesian Second-Level fMRI Analysis". Mathematics. 10 (3): 356. doi:10.3390/math10030356. ISSN 2227-7390.
  10. ^ a b c Han, Hyemin (2020-07-02). "Implementation of Bayesian multiple comparison correction in the second-level analysis of fMRI data: With pilot analyses of simulation and real fMRI datasets based on voxelwise inference". Cognitive Neuroscience. 11 (3): 157–169. doi:10.1080/17588928.2019.1700222. ISSN 1758-8928. PMID 31855500.
  11. ^ de Jong, Tim (2019). "A Bayesian Approach to the Correction for Multiplicity". The Society for the Improvement of Psychological Science.
  12. ^ Masharipov, Ruslan; Korotkov, Alexander; Medvedev, Svyatoslav; Kireev, Maxim (2022-06-16). "Evidence for non-selective response inhibition in uncertain contexts revealed by combined meta-analysis and Bayesian analysis of fMRI data". Scientific Reports. 12 (1): 10137. doi:10.1038/s41598-022-14221-x. ISSN 2045-2322.