Facial expression analysis with AFFDEX and FACET: A validation study

Behav Res Methods. 2018 Aug;50(4):1446-1460. doi: 10.3758/s13428-017-0996-1.

Abstract

The goal of this study was to validate AFFDEX and FACET, two algorithms that classify emotions from facial expressions, within the iMotions software suite. In Study 1, pictures of standardized emotional facial expressions from three databases, the Warsaw Set of Emotional Facial Expression Pictures (WSEFEP), the Amsterdam Dynamic Facial Expression Set (ADFES), and the Radboud Faces Database (RaFD), were classified with both modules. Accuracy (Matching Scores) was computed to assess and compare classification quality. The results show large variance in accuracy across emotions and databases, with a performance advantage for FACET over AFFDEX. In Study 2, the facial expressions of 110 participants were measured while they viewed emotionally evocative pictures from the International Affective Picture System (IAPS), the Geneva Affective Picture Database (GAPED), and the Radboud Faces Database (RaFD). Accuracy again differed across emotions, and FACET again performed better. Overall, iMotions can achieve acceptable accuracy for standardized pictures of prototypical facial expressions, but it performs worse for more natural facial expressions. We discuss potential sources of the limited validity and suggest research directions in the broader context of emotion research.
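The abstract does not state how the Matching Scores were computed. As a rough illustration only, the sketch below assumes a simple hit-rate definition: for each intended emotion, the proportion of pictures for which the classifier's highest-scoring emotion matches that intended emotion. The function name, data layout, and example values are hypothetical and are not taken from the paper.

    from collections import defaultdict

    def matching_scores(samples):
        """Hypothetical hit-rate Matching Score per intended emotion.

        samples: iterable of (intended_emotion, classifier_scores) pairs,
        where classifier_scores maps emotion labels (e.g., from AFFDEX or
        FACET output) to numeric evidence values.
        """
        hits = defaultdict(int)    # correctly classified pictures per emotion
        totals = defaultdict(int)  # all pictures per intended emotion
        for intended, scores in samples:
            predicted = max(scores, key=scores.get)  # dominant classified emotion
            totals[intended] += 1
            if predicted == intended:
                hits[intended] += 1
        return {emotion: hits[emotion] / totals[emotion] for emotion in totals}

    # Example with made-up classifier output for two pictures:
    example = [
        ("happiness", {"happiness": 0.92, "surprise": 0.05, "anger": 0.01}),
        ("fear", {"surprise": 0.60, "fear": 0.35, "anger": 0.02}),
    ]
    print(matching_scores(example))  # {'happiness': 1.0, 'fear': 0.0}

Under this assumed definition, scores can then be compared across emotions, databases, and modules (AFFDEX vs. FACET); the paper's actual Matching Score may be defined differently.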

Keywords: AFFDEX; Emotion classification; FACET; FACS; Facial expression.

Publication types

  • Validation Study

MeSH terms

  • Adult
  • Algorithms
  • Behavioral Research / methods
  • Data Accuracy
  • Databases, Factual / standards*
  • Emotions / classification*
  • Facial Expression*
  • Female
  • Humans
  • Male
  • Software