Emotion recognition from posed and spontaneous dynamic expressions: Human observers versus machine analysis

Emotion. 2021 Mar;21(2):447-451. doi: 10.1037/emo0000712. Epub 2019 Dec 12.

Abstract

The majority of research on the judgment of emotion from facial expressions has focused on deliberately posed displays, often sampled from single stimulus sets. Here, we investigate emotion recognition from posed and spontaneous expressions, comparing classification performance between human observers and a machine classifier in a cross-corpora investigation. To this end, dynamic facial stimuli portraying the six basic emotions were sampled from a broad range of databases and presented to human observers and a machine classifier. Recognition performance of the machine was superior to that of humans for posed expressions containing prototypical facial patterns, and comparable to that of humans when classifying emotions from spontaneous displays. In both humans and machine, accuracy rates were generally higher for posed than for spontaneous stimuli. The findings suggest that automated systems rely on expression prototypicality for emotion classification and may perform just as well as humans when tested in a cross-corpora context.
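
The abstract does not describe the classification pipeline itself; as a minimal, hypothetical sketch of the kind of comparison reported (recognition accuracy broken down by judge and by posed vs. spontaneous condition, pooled across corpora), the following Python snippet assumes per-stimulus records with invented field names (`condition`, `true_emotion`, `human`, `machine`) and is not the authors' method.

```python
# Illustrative sketch only, not the authors' analysis pipeline.
# Assumes each record holds a ground-truth emotion label, an elicitation
# condition ("posed" or "spontaneous"), and the labels assigned by a human
# observer and by a machine classifier; all field names are hypothetical.
from collections import defaultdict

def accuracy_by_condition(records):
    """Return {(judge, condition): accuracy} for posed vs. spontaneous stimuli."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for r in records:
        for judge in ("human", "machine"):
            key = (judge, r["condition"])            # e.g. ("machine", "posed")
            totals[key] += 1
            hits[key] += int(r[judge] == r["true_emotion"])
    return {k: hits[k] / totals[k] for k in totals}

# Toy usage with made-up stimuli drawn from different (hypothetical) corpora:
records = [
    {"corpus": "A", "condition": "posed",       "true_emotion": "anger",
     "human": "anger",    "machine": "anger"},
    {"corpus": "B", "condition": "spontaneous", "true_emotion": "fear",
     "human": "surprise", "machine": "fear"},
]
print(accuracy_by_condition(records))
```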

MeSH terms

  • Adolescent
  • Adult
  • Artificial Intelligence / standards*
  • Behavior Observation Techniques / methods*
  • Emotions / physiology*
  • Facial Expression*
  • Female
  • Humans
  • Male
  • Recognition, Psychology / physiology*
  • Young Adult