Integrating visual and tactile information in the perirhinal cortex

Cereb Cortex. 2009 Dec;19(12):2993-3000. doi: 10.1093/cercor/bhp073. Epub 2009 Apr 22.
By virtue of its widespread afferent projections, the perirhinal cortex is thought to bind polymodal information into abstract object-level representations. Consistent with this proposal, deficits in cross-modal integration have been reported after perirhinal lesions in nonhuman primates. It is therefore surprising that imaging studies of humans have not observed perirhinal activation during visual-tactile object matching. Critically, however, these studies did not differentiate between congruent and incongruent trials. This distinction is important because successful integration can occur only when polymodal information indicates a single object (congruent) rather than different objects (incongruent). We scanned neurologically intact individuals using functional magnetic resonance imaging (fMRI) while they matched shapes. We found higher perirhinal activation bilaterally for cross-modal (visual-tactile) than for unimodal (visual-visual or tactile-tactile) matching, but only when the visual and tactile attributes were congruent. Our results demonstrate that the human perirhinal cortex is involved in cross-modal (visual-tactile) integration and thus indicate a functional homology between human and monkey perirhinal cortices.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Adult
  • Brain Mapping*
  • Entorhinal Cortex / physiology*
  • Evoked Potentials, Somatosensory / physiology*
  • Evoked Potentials, Visual / physiology*
  • Female
  • Humans
  • Male
  • Touch / physiology*
  • Visual Perception / physiology*