A Bayesian spatiotemporal model for very large data sets

Neuroimage. 2010 Apr 15;50(3):1126-41. doi: 10.1016/j.neuroimage.2009.12.042. Epub 2009 Dec 21.


Functional MRI provides a unique perspective of neuronal organization; however, these data include many complex sources of spatiotemporal variability, which require spatial preprocessing and statistical analysis. For the latter, Bayesian models provide a promising alternative to classical inference, which uses results from Gaussian random field theory to assess the significance of spatially correlated statistic images. A Bayesian approach generalizes the application of these ideas in that (1) random fields are used to model all spatial parameters, not solely observation error, (2) their smoothness is optimized, and (3) a broader class of models can be compared. The main problem, however, is computational, due to the large number of voxels in a brain volume. Sampling methods are time-consuming; however, approximate inference using variational Bayes (VB) offers a principled and transparent way to specify the assumptions necessary for computational tractability. Penny et al. (2005b) described such a scheme using a joint spatial prior and approximated the joint posterior density with one that factorized over voxels. However, a further computational bottleneck is encountered when evaluating the log model evidence used to compare models. This has led to dividing a brain volume into slices and treating each independently, which amounts to approximating the spatial prior over a full volume with stacked 2D priors; that is, smoothness along the z-axis is not included in the model. Here we describe a VB scheme that approximates the zero-mean joint spatial prior with a non-zero-mean empirical prior that factorizes over voxels, thereby overcoming this problem. We do this by modifying the original VB algorithm of Penny et al., using the conditional form of a conditional autoregressive (CAR) prior to update a marginal prior over voxels.
We refer to this as a spatially-informed voxel-wise prior (SVP) and use it to spatially regularise general linear model (GLM) and autoregressive (AR) coefficients (the latter modelling serial correlations over time). The algorithm scales more favourably with the number of voxels, providing a truly 3D spatiotemporal model over volumes containing tens of thousands of voxels. We compare compute times as a function of the number of voxels, and compare performance against the joint prior, on synthetic and single-subject data.
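To make the key idea concrete, the sketch below illustrates the conditional form of a CAR prior on a 3D voxel lattice: each voxel's coefficient is updated from its face-adjacent neighbours combined with a Gaussian data term. This is a minimal NumPy illustration under assumed hyperparameters (`lam` for the spatial precision, `tau` for the observation precision), not the paper's VB implementation; the function and variable names are hypothetical.

```python
import numpy as np

def neighbor_sums(w):
    """Sum of face-adjacent neighbours and neighbour counts for every
    voxel of a 3D array (voxels at the boundary have fewer neighbours)."""
    s = np.zeros_like(w)
    n = np.zeros_like(w)
    for ax in range(w.ndim):
        lo = [slice(None)] * w.ndim
        hi = [slice(None)] * w.ndim
        lo[ax] = slice(None, -1)   # all but the last plane along this axis
        hi[ax] = slice(1, None)    # all but the first plane
        s[tuple(lo)] += w[tuple(hi)]
        s[tuple(hi)] += w[tuple(lo)]
        n[tuple(lo)] += 1
        n[tuple(hi)] += 1
    return s, n

def car_regularise(y, lam=2.0, tau=4.0, n_iter=100):
    """Iterate the voxel-wise conditional update
        w_i <- (tau*y_i + lam*sum(neighbours)) / (tau + lam*n_i),
    i.e. the conditional form of a CAR prior (each voxel shrunk toward
    the mean of its neighbours) combined with a Gaussian likelihood of
    precision tau. The fixed point is the joint posterior mean."""
    w = y.copy()
    for _ in range(n_iter):
        s, n = neighbor_sums(w)
        w = (tau * y + lam * s) / (tau + lam * n)
    return w

# Demo: recover a smooth 3D coefficient field from noisy voxel-wise estimates.
rng = np.random.default_rng(0)
x = np.linspace(0, np.pi, 12)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
truth = np.sin(X) * np.cos(Y) * np.sin(Z)            # smooth "GLM coefficient" map
y = truth + rng.normal(scale=0.5, size=truth.shape)  # noisy unregularised estimates
w = car_regularise(y)
```

Because each update touches only a voxel and its neighbours, cost grows linearly with the number of voxels, which is the scaling property the abstract highlights for full 3D volumes.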

Publication types

  • Comparative Study
  • Research Support, Non-U.S. Gov't

MeSH terms

  • Algorithms
  • Bayes Theorem*
  • Brain / physiology*
  • Computer Simulation
  • Humans
  • Imaging, Three-Dimensional / methods
  • Linear Models
  • Magnetic Resonance Imaging / methods*
  • Models, Statistical*
  • Normal Distribution
  • Regression Analysis
  • Signal Processing, Computer-Assisted*
  • Time Factors