Human-Object Interactions Are More than the Sum of Their Parts

Cereb Cortex. 2017 Mar 1;27(3):2276-2288. doi: 10.1093/cercor/bhw077.

Abstract

Understanding human-object interactions is critical for extracting meaning from everyday visual scenes and requires integrating complex relationships between human pose and object identity into a new percept. To understand how the brain builds these representations, we conducted 2 fMRI experiments in which subjects viewed humans interacting with objects, noninteracting human-object pairs, and isolated humans and objects. A number of visual regions process features of human-object interactions, including object identity information in the lateral occipital complex (LOC) and parahippocampal place area (PPA), and human pose information in the extrastriate body area (EBA) and posterior superior temporal sulcus (pSTS). Representations of human-object interactions in some regions, such as the posterior PPA (retinotopic maps PHC1 and PHC2), are well predicted by a simple linear combination of the responses to object and pose information. Other regions, however, especially pSTS, exhibit representations of human-object interaction categories that are not predicted by their individual components, indicating that they encode human-object interactions as more than the sum of their parts. These results reveal the distributed networks underlying the emergent representation of human-object interactions necessary for social perception.
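The linear-combination logic described in the abstract can be illustrated with a small simulation. This is a hedged sketch, not the authors' actual analysis pipeline: all voxel patterns below are synthetic, and the variable names and the simple sum-then-correlate procedure are assumptions used only to show the general idea of testing whether a region's response to an interaction is predictable from its responses to the isolated human and object.

```python
import numpy as np

# Illustrative simulation (NOT the paper's pipeline): can a region's response
# pattern to a human-object interaction be predicted by summing its response
# patterns to the isolated human and the isolated object?

rng = np.random.default_rng(0)
n_voxels = 100

# Simulated voxel response patterns for the isolated conditions.
human_alone = rng.normal(size=n_voxels)
object_alone = rng.normal(size=n_voxels)

# A "linear" region (like posterior PPA in the abstract): the interaction
# response is approximately the sum of its parts, plus measurement noise.
linear_region = human_alone + object_alone + 0.1 * rng.normal(size=n_voxels)

# An "emergent" region (like pSTS in the abstract): the interaction response
# is largely unrelated to the isolated-condition patterns.
emergent_region = rng.normal(size=n_voxels)

def linear_prediction_fit(interaction, human, obj):
    """Correlate the observed interaction pattern with the summed
    isolated-condition patterns; high r means the region's interaction
    response is well described as a linear combination of its parts."""
    predicted = human + obj
    return np.corrcoef(interaction, predicted)[0, 1]

fit_linear = linear_prediction_fit(linear_region, human_alone, object_alone)
fit_emergent = linear_prediction_fit(emergent_region, human_alone, object_alone)

print(f"linear region fit:   r = {fit_linear:.2f}")    # well predicted by parts
print(f"emergent region fit: r = {fit_emergent:.2f}")  # "more than the sum of its parts"
```

In this toy setup, the "linear" region yields a high correlation with the summed-parts prediction while the "emergent" region does not, mirroring the contrast the abstract draws between posterior PPA and pSTS.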

Keywords: MVPA; action perception; cross-decoding; fMRI; scene perception.

Publication types

  • Research Support, N.I.H., Extramural
  • Research Support, U.S. Gov't, Non-P.H.S.

MeSH terms

  • Adult
  • Brain / diagnostic imaging
  • Brain / physiology*
  • Brain Mapping
  • Female
  • Humans
  • Magnetic Resonance Imaging
  • Male
  • Neuropsychological Tests
  • Pattern Recognition, Visual / physiology*
  • Photic Stimulation
  • Social Perception*
  • Young Adult