Recognizing people from dynamic and static faces and bodies: dissecting identity with a fusion approach

Vision Res. 2011 Jan;51(1):74-83. doi: 10.1016/j.visres.2010.09.035. Epub 2010 Oct 20.

Abstract

The goal of this study was to evaluate human accuracy at identifying people from static and dynamic presentations of faces and bodies. Participants matched identity in pairs of videos depicting people in motion (walking or conversing) and in "best" static images extracted from the videos. The type of information presented to observers was varied to include the face and body, the face only, and the body only. Identification performance was best when people viewed the face and body in motion. There was an advantage for dynamic over static stimuli, but only for conditions that included the body. Control experiments with multiple static images indicated that some of the motion advantages we obtained were due to seeing multiple images of the person rather than to the motion per se. To computationally assess the contribution of different types of information for identification, we fused the identity judgments from observers in different conditions using a statistical learning algorithm trained to optimize identification accuracy. This fusion achieved perfect performance. The condition weights that resulted suggest that static displays encourage reliance on the face for recognition, whereas dynamic displays seem to direct attention more equitably across the body and face.
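The abstract does not specify which statistical learning algorithm was used for fusion. One common way to fuse per-condition judgments into a single identity decision, and to obtain interpretable condition weights like those discussed above, is a logistic-regression combiner. The sketch below is purely illustrative, not the authors' method: the function names, the three hypothetical conditions (face-only, body-only, face+body), and the toy ratings are all assumptions.

```python
import math

# Illustrative sketch: fuse observers' per-condition identity judgments
# (e.g., similarity ratings from hypothetical face-only, body-only, and
# face+body conditions) into one same/different decision via logistic
# regression trained by batch gradient descent. The learned weights
# indicate how much each viewing condition contributes to the fused
# decision, analogous to the condition weights discussed in the abstract.

def fuse_train(X, y, lr=0.5, epochs=2000):
    """Fit logistic-regression weights w and bias b.
    X: list of judgment vectors, one value per viewing condition.
    y: labels, 1 = same identity, 0 = different identity."""
    n = len(X[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        grad_w = [0.0] * n
        grad_b = 0.0
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))   # predicted P(same identity)
            err = p - yi                      # logistic-loss gradient term
            for j in range(n):
                grad_w[j] += err * xi[j]
            grad_b += err
        w = [wj - lr * gj / len(X) for wj, gj in zip(w, grad_w)]
        b -= lr * grad_b / len(X)
    return w, b

def fuse_predict(w, b, x):
    """Fused probability that a pair shows the same identity."""
    z = b + sum(wj * xj for wj, xj in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))

# Toy data: each row holds [face-only, body-only, face+body] ratings
# for one stimulus pair; labels mark same-identity pairs.
X = [[0.9, 0.8, 0.85], [0.8, 0.9, 0.90],
     [0.2, 0.3, 0.25], [0.1, 0.2, 0.15]]
y = [1, 1, 0, 0]
w, b = fuse_train(X, y)
```

In this scheme the relative magnitudes of the entries of `w` play the role of condition weights: a larger weight for a condition means its judgments carry more of the fused decision.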

Publication types

  • Research Support, U.S. Gov't, Non-P.H.S.

MeSH terms

  • Algorithms
  • Communication
  • Face*
  • Facial Expression
  • Humans
  • Motion Perception*
  • Pattern Recognition, Visual
  • Photic Stimulation / methods
  • ROC Curve
  • Recognition, Psychology*
  • Videotape Recording
  • Walking