PLoS One. 2009;4(1):e4205. doi: 10.1371/journal.pone.0004205. Epub 2009 Jan 15.

Multisensory oddity detection as Bayesian inference


Timothy Hospedales et al. PLoS One. 2009.

Abstract

A key goal for the perceptual system is to optimally combine information from all the senses that may be available in order to develop the most accurate and unified picture possible of the outside world. The contemporary theoretical framework of ideal observer maximum likelihood integration (MLI) has been highly successful in modelling how the human brain combines information from a variety of different sensory modalities. However, in various recent experiments involving multisensory stimuli of uncertain correspondence, MLI breaks down as a successful model of sensory combination. Within the paradigm of direct stimulus estimation, perceptual models which use Bayesian inference to resolve correspondence have recently been shown to generalize successfully to these cases where MLI fails. This approach has been known variously as model inference, causal inference or structure inference. In this paper, we examine causal uncertainty in another important class of multisensory perception paradigm, that of oddity detection, and demonstrate how a Bayesian ideal observer also treats oddity detection as a structure inference problem. We validate this approach by showing that it provides an intuitive and quantitative explanation of an important pair of multisensory oddity detection experiments, involving cues across and within modalities, for which MLI previously failed dramatically, allowing a novel unifying treatment of within- and cross-modal multisensory perception. Our successful application of structure inference models to the new 'oddity detection' paradigm, and the resultant unified explanation of across- and within-modality cases, provide further evidence to suggest that structure inference may be a commonly evolved principle for combining perceptual information in the brain.
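The MLI framework discussed in the abstract combines Gaussian cue estimates by reliability (inverse-variance) weighting. A minimal sketch of this standard computation follows; the function name and example numbers are illustrative, not taken from the paper:

```python
import math

def mli_fuse(x_v, sigma_v, x_h, sigma_h):
    """Maximum-likelihood integration (MLI) of two Gaussian cues.

    The fused estimate is the reliability-weighted average of the
    visual estimate x_v and the haptic estimate x_h; the fused
    variance is smaller than either single-cue variance.
    """
    w_v = 1.0 / sigma_v**2          # reliability (inverse variance) of vision
    w_h = 1.0 / sigma_h**2          # reliability of haptics
    x_fused = (w_v * x_v + w_h * x_h) / (w_v + w_h)
    sigma_fused = math.sqrt(1.0 / (w_v + w_h))
    return x_fused, sigma_fused

# With equal reliabilities the fused estimate is the simple average,
# and the fused standard deviation shrinks by a factor of sqrt(2).
x, s = mli_fuse(x_v=10.0, sigma_v=1.0, x_h=12.0, sigma_h=1.0)
```

This mandatory-fusion rule is exactly what breaks down, as the paper argues, when the cues may not share a common cause.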


Conflict of interest statement

Competing Interests: The authors have declared that no competing interests exist.

Figures

Figure 1
Figure 1. Standard sensor fusion model.
Bar size y is inferred on the basis of the haptic and visual observations.
Figure 2
Figure 2. Schematic of visual-haptic height oddity detection experimental task from .
Subjects must choose the odd probe stimulus based on haptic (textured bars) and visual (plain bars) observation modalities. a) Probe stimulus is the same as the standard stimuli: detection at chance level. b) Probe stimulus bigger than standard: detection is reliable. c) Haptic and visual probe modalities are discordant: detection rate will depend on cue combination strategy.
Figure 3
Figure 3. Oddity detection predictions of the naive cue combination models.
(a) Detection based on individual cues only. (b) Detection based on a single fused estimate. (c) Detection based on both individual cues and a single fused estimate. Shaded areas indicate regions below threshold probability of correct detection. The standard stimulus is indicated by a blue dot in the centre of each plot. Uni-modal visual and haptic thresholds are also indicated. Coloured lines indicate multi-modal detection rate contours.
Figure 4
Figure 4. Oddity detection predictions and experimental results.
Experimental data for two sample subjects from . (a) Visual-haptic experiment. (b) Texture-disparity experiment. Red lines: Observed uni-modal discrimination thresholds. Green lines: Discrimination threshold predictions assuming mandatory fusion. Magenta points: Discrimination threshold observed experimentally.
Figure 5
Figure 5. Graphical model for oddity detection by model selection.
Three possible models, indexed by o, corresponding to each possible assignment of oddity. To compute the stimulus most likely to be odd, compute the evidence for each model. Standard and probe stimulus values are not directly requested of the subjects, and are only computed indirectly in the process of evaluating the model likelihoods.
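The model-selection step in Figure 5 can be sketched as follows. This is an illustrative simplification, not the paper's exact implementation: it assumes a flat prior over the shared latent value, under which the evidence for "stimulus o is odd" reduces to the likelihood that the remaining pair of (fused) estimates agree:

```python
import math

def gauss_pdf(d, var):
    """Density of N(0, var) evaluated at d."""
    return math.exp(-d * d / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def oddity_posterior(x, sigma):
    """Posterior over which of three stimuli is the odd one.

    x[i] is a noisy (fused) estimate of stimulus i's size, sigma[i] its
    noise s.d. Under model o the two stimuli other than o share one
    latent value, so (with a flat prior over that value) the model
    evidence is proportional to N(x_j - x_k; 0, sigma_j^2 + sigma_k^2)
    for the non-odd pair (j, k).
    """
    ev = []
    for o in range(3):
        j, k = [i for i in range(3) if i != o]
        ev.append(gauss_pdf(x[j] - x[k], sigma[j]**2 + sigma[k]**2))
    z = sum(ev)
    return [e / z for e in ev]

# Stimulus 2 is clearly larger than the two matching standards,
# so the posterior should concentrate on o = 2.
post = oddity_posterior([10.0, 10.1, 13.0], [0.5, 0.5, 0.5])
```

Note how, as the caption says, the stimulus values themselves are never reported; they are integrated out while evaluating each model's likelihood.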
Figure 6
Figure 6. Oddity detection predictions of model selection approach.
Oddity detection performance (grey-scale) as a function of probe value for the model selection approach (Fig. 5). Compare the 66% contours (lines) with human performance (dots). Model still predicts an infinite region of non-detection along the cues-discordant diagonal. (a) Across modality visual-haptic experiment. (b) Within modality texture-disparity experiment. Illustrative points correctly (diamond) and incorrectly (cross) classified by model (see text for details).
Figure 7
Figure 7. Graphical model for oddity detection via structure inference.
Three possible assignments of oddity correspond to three possible models indexed by o = 1,2,3. The uncertainty about common causal structure of the probe stimulus is now represented by C, which is computed in the process of evaluating the likelihood of each model o.
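The structure-inference variable C in Figure 7 can be illustrated with the standard two-cue common-cause computation. A minimal sketch, assuming Gaussian noise and an assumed prior N(mu0, s0^2) over stimulus size with a 50/50 prior on a common cause (all parameter values here are hypothetical):

```python
import math

def norm_pdf(x, mu, var):
    return math.exp(-(x - mu)**2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def bivariate_pdf(xv, xh, mu, cov):
    """2-D Gaussian density with common mean mu and covariance cov."""
    a, b = cov[0][0], cov[0][1]
    c, d = cov[1][0], cov[1][1]
    det = a * d - b * c
    dv, dh = xv - mu, xh - mu
    q = (d * dv * dv - (b + c) * dv * dh + a * dh * dh) / det
    return math.exp(-0.5 * q) / (2.0 * math.pi * math.sqrt(det))

def p_common(xv, xh, sv, sh, mu0=0.0, s0=10.0, prior_c=0.5):
    """Posterior probability that visual and haptic cues share one cause.

    Under C=1 both cues arise from a single latent size y ~ N(mu0, s0^2),
    giving a correlated bivariate Gaussian over (xv, xh); under C=0 each
    cue has its own independent latent cause.
    """
    cov = [[s0**2 + sv**2, s0**2],
           [s0**2,         s0**2 + sh**2]]
    like_c1 = bivariate_pdf(xv, xh, mu0, cov)
    like_c0 = (norm_pdf(xv, mu0, s0**2 + sv**2) *
               norm_pdf(xh, mu0, s0**2 + sh**2))
    num = prior_c * like_c1
    return num / (num + (1.0 - prior_c) * like_c0)

# Concordant cues favour a common cause (fusion); strongly discordant
# cues favour independent causes (fission).
p_same = p_common(5.0, 5.2, sv=1.0, sh=1.0)
p_diff = p_common(5.0, 12.0, sv=1.0, sh=1.0)
```

In the paper's full model this computation is carried out inside the likelihood of each oddity assignment o, rather than in isolation as here.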
Figure 8
Figure 8. Oddity detection predictions of structure inference approach.
(a,b) Oddity detection rate predictions for an ideal Bayesian observer (grey-scale background) using a variable structure model (Fig. 7); Oddity detection contours of the model (blue lines) and human (magenta points) are overlaid with the model prediction from (green lines); Chance = 33%. (c,d) Fusion report rates for ideal observer using variable structure model. Chance = 50%. Across modality conditions are reported in (a,c) and within modality conditions are reported in (b,d).
Figure 9
Figure 9. New predictions by the ideal Bayesian observer using the variable structure model.
(a,b) Detection rate for trials where fusion was reported (Chance = 33%). (c,d) Detection rate for trials where fission was reported (Chance = 33%). Across-modality condition in (a,c), within modality condition in (b,d). Blue lines indicate contours of detection threshold (66%).

References

    1. Kersten D, Mamassian P, Yuille A. Object perception as Bayesian inference. Annual Review of Psychology. 2004;55:271–304. - PubMed
    2. Landy MS, Maloney LT, Johnston EB, Young M. Measurement and modeling of depth cue combination: in defense of weak fusion. Vision Res. 1995;35:389–412. - PubMed
    3. Alais D, Burr D. The ventriloquist effect results from near-optimal bimodal integration. Curr Biol. 2004;14(3):257–262. - PubMed
    4. Battaglia PW, Jacobs RA, Aslin RN. Bayesian integration of visual and auditory signals for spatial localization. J Opt Soc Am A Opt Image Sci Vis. 2003;20(7):1391–1397. - PubMed
    5. Ernst MO, Banks MS. Humans integrate visual and haptic information in a statistically optimal fashion. Nature. 2002;415:429–433. - PubMed
