Involuntary processing of social dominance cues from bimodal face-voice displays

Cogn Emot. 2018 Feb;32(1):13-23. doi: 10.1080/02699931.2016.1266304. Epub 2016 Dec 21.

Abstract

Social-rank cues communicate social status or social power within and between groups. Information about social rank is fluently processed in both the visual and auditory modalities. So far, investigation of the processing of social-rank cues has been limited to studies in which information from a single modality was assessed or manipulated. Yet, in everyday communication, multiple information channels are used to express and understand social rank. We sought to examine the (in)voluntary nature of the processing of facial and vocal signals of social rank using a cross-modal Stroop task. In two experiments, participants were presented with face-voice pairs that were either congruent or incongruent in social rank (i.e. social dominance). Participants' task was to label the social dominance of the face while ignoring the voice, or to label the social dominance of the voice while ignoring the face. In both experiments, we found that face-voice incongruent stimuli were processed more slowly and less accurately than congruent stimuli in both the face-attend and the voice-attend tasks, exhibiting classical Stroop-like effects. These findings are consistent with the functioning of a social-rank bio-behavioural system that consistently and automatically monitors one's social standing in relation to others and uses that information to guide behaviour.
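The abstract reports classical Stroop-like congruency effects: slower and less accurate responses to incongruent face-voice pairs in both the face-attend and voice-attend tasks. The following is a minimal, hypothetical sketch of how such a congruency effect could be computed from trial-level data; the field names and example values are illustrative assumptions, not the authors' data or analysis pipeline.

```python
# Minimal sketch (not the authors' analysis code) of a cross-modal Stroop
# congruency effect: mean reaction time on incongruent trials minus mean
# reaction time on congruent trials, computed separately for the face-attend
# and voice-attend tasks. Trial records below are hypothetical.
from statistics import mean

trials = [
    {"task": "face",  "congruent": True,  "rt_ms": 612, "correct": True},
    {"task": "face",  "congruent": False, "rt_ms": 668, "correct": True},
    {"task": "face",  "congruent": False, "rt_ms": 700, "correct": False},
    {"task": "voice", "congruent": True,  "rt_ms": 701, "correct": True},
    {"task": "voice", "congruent": False, "rt_ms": 755, "correct": True},
    # ... more trials per participant
]

def congruency_effect(trials, task):
    """Mean RT (correct trials only) for incongruent minus congruent pairs."""
    def rts(congruent):
        return [t["rt_ms"] for t in trials
                if t["task"] == task and t["congruent"] is congruent and t["correct"]]
    return mean(rts(False)) - mean(rts(True))

for task in ("face", "voice"):
    print(f"{task}-attend Stroop effect: {congruency_effect(trials, task):.0f} ms")
```

A positive difference in each task would correspond to the interference pattern described in the abstract; an analogous comparison of error rates would capture the accuracy effect.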

Keywords: Social rank; Stroop; facial and vocal processing; involuntary processing; social dominance.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Communication
  • Cues
  • Face*
  • Female
  • Humans
  • Male
  • Social Dominance*
  • Stroop Test
  • Voice*
  • Young Adult