PLoS One. 2017;12(5):e0177239. eCollection 2017.

Mapping the Emotional Face. How Individual Face Parts Contribute to Successful Emotion Recognition


Martin Wegrzyn et al. PLoS One.

Abstract

Which facial features allow human observers to successfully recognize expressions of emotion? While the eyes and mouth have frequently been shown to be of high importance, research on facial action units has made more precise predictions about the areas involved in displaying each emotion. The present research investigated, on a fine-grained level, which physical features observers rely on most when decoding facial expressions. In the experiment, individual faces expressing the basic emotions according to Ekman were hidden behind a mask of 48 tiles, which was sequentially uncovered. Participants were instructed to stop the sequence as soon as they recognized the facial expression and to assign it the correct label. For each part of the face, its contribution to successful recognition was computed, making it possible to visualize the importance of different face areas for each expression. Overall, observers relied mostly on the eye and mouth regions when successfully recognizing an emotion. Furthermore, the difference in the importance of eyes and mouth allowed the expressions to be grouped in a continuous space, ranging from sadness and fear (reliance on the eyes) to disgust and happiness (reliance on the mouth). The face parts with the highest diagnostic value for expression identification were typically located in areas corresponding to action units from the Facial Action Coding System. A similarity analysis of the usefulness of different face parts for expression recognition demonstrated that faces cluster according to the emotion they express, rather than by low-level physical features. Also, expressions relying more on the eye or mouth region were in close proximity in the constructed similarity space. These analyses help to better understand how human observers process expressions of emotion, by delineating the mapping from facial features to psychological representation.
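The abstract does not spell out the weighting formula, but the idea of computing each tile's contribution to successful recognition can be illustrated with a minimal sketch. Assume each trial records which of the 48 tiles were visible when the participant stopped the sequence and whether the label was correct; the credit-sharing rule below (correct trials distribute one unit of credit over the visible tiles, so fewer visible tiles mean more credit per tile) is an illustrative assumption, not the authors' exact method.

```python
import numpy as np

N_TILES = 48  # each face is divided into 48 tiles

def tile_weights(trials):
    """Accumulate an importance weight per tile from a list of trials.

    trials: iterable of (visible_tiles, correct) pairs, where visible_tiles
    is a set of tile indices revealed when the sequence was stopped and
    correct indicates whether the emotion label was assigned correctly.
    """
    weights = np.zeros(N_TILES)
    for visible, correct in trials:
        if correct and visible:
            # assumed rule: share one unit of credit among the visible tiles
            weights[list(visible)] += 1.0 / len(visible)
    return weights

# toy data standing in for four trials of one participant with one face
example_trials = [
    ({10, 11, 18}, True),
    ({10, 18, 19, 26}, True),
    ({3, 4, 5, 10}, False),
    ({18, 19}, True),
]
print(tile_weights(example_trials))
```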

Conflict of interest statement

Competing Interests: The authors have declared that no competing interests exist.

Figures

Fig 1
Fig 1. Example of tile weighting.
a, an example of four trials with the happy female face, where the unmasking sequence was stopped as illustrated, and a correct answer was given; b, the weights of all tiles for the happy female face, based on 16 trials of one participant. The weights are visualised with a green-red colour spectrum which is min-max-scaled, so lowest weights are green and highest weights are red.
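The min-max scaling and green-to-red colour mapping described in this caption can be reproduced roughly as follows; the 8 × 6 arrangement of the 48 tiles and the matplotlib colormap name are assumptions made for illustration, and the random numbers merely stand in for real tile weights.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
weights = rng.random(48)   # placeholder for one face's tile weights

# min-max scale to [0, 1], then map low values to green and high values to red
scaled = (weights - weights.min()) / (weights.max() - weights.min())
plt.imshow(scaled.reshape(8, 6), cmap="RdYlGn_r", vmin=0, vmax=1)
plt.colorbar(label="min-max scaled tile weight")
plt.title("Tile weights (illustrative random data)")
plt.show()
```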
Fig 2
Fig 2. Hand-drawn action units.
a, for each of the faces (except the two neutral ones), action units were hand-drawn as defined in the Facial Action Coding System. For example, the last face (male, surprised) has been labelled with action unit (AU) 1 “inner brow raiser” in dark green, AU 2 “outer brow raiser” in purple, AU 5 “upper lid raiser” in red and a compound AU 25+26+27: “lips part”, “jaw drop”, “mouth stretch” in light green; b, the same action units, as assigned to the 48 tiles into which each face is divided; please refer to S1 Fig and S2 Fig for a comprehensive list of labels for each face. Refer to S5 Code for a visualisation of all tile assignments.
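One plausible way to translate the hand-drawn action-unit regions into the tile labels of panel b is an overlap rule: a tile is assigned to an AU if the drawn region covers at least a minimum fraction of that tile. The threshold and the 8 × 6 tile grid are illustrative assumptions, not the authors' documented procedure (see S1 Fig, S2 Fig and S5 Code for the actual assignments).

```python
import numpy as np

def assign_au_to_tiles(au_mask, n_rows=8, n_cols=6, min_overlap=0.25):
    """Mark the tiles covered by one hand-drawn action-unit region.

    au_mask: boolean image (height, width), True where the AU was drawn.
    Returns an (n_rows, n_cols) boolean grid of tiles assigned to the AU.
    """
    h, w = au_mask.shape
    tile_h, tile_w = h // n_rows, w // n_cols
    assigned = np.zeros((n_rows, n_cols), dtype=bool)
    for r in range(n_rows):
        for c in range(n_cols):
            tile = au_mask[r * tile_h:(r + 1) * tile_h, c * tile_w:(c + 1) * tile_w]
            assigned[r, c] = tile.mean() >= min_overlap  # fraction of covered pixels
    return assigned

# toy mask: an "AU" covering the upper-left part of a 400 x 300 image
mask = np.zeros((400, 300), dtype=bool)
mask[:120, :120] = True
print(assign_au_to_tiles(mask))
```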
Fig 3
Fig 3. Global metrics for all faces.
a, for each condition, the average percentage of revealed tiles needed until a correct response is given is plotted on the x-axis; the average percentage of correct responses is plotted on the y-axis; error bars illustrate 95% confidence intervals; b and c, percentage of all responses for female (b) and male (c) faces, including confusions. Correct responses are plotted in strong colours at the bottom of each bar; incorrect responses are plotted in muted colours at the top of each bar; acronyms: hap, happy; ang, angry; sup, surprised; ntr, neutral; dis, disgust; fea, fear; sad, sad.
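The two global metrics of panel a (mean percentage of revealed tiles until a correct response, and percentage of correct responses) can be summarised with 95% confidence intervals across participants as sketched below; the t-based interval and the toy numbers are assumptions for illustration only.

```python
import numpy as np
from scipy import stats

def mean_ci95(values):
    """Mean and 95% confidence interval (t distribution) across participants."""
    values = np.asarray(values, dtype=float)
    m = values.mean()
    half = stats.sem(values) * stats.t.ppf(0.975, df=len(values) - 1)
    return m, m - half, m + half

# hypothetical per-participant values for one condition (e.g. happy, female)
pct_tiles_revealed = [22.0, 31.0, 18.0, 27.0, 35.0]
pct_correct = [94.0, 88.0, 100.0, 81.0, 94.0]
print(mean_ci95(pct_tiles_revealed))
print(mean_ci95(pct_correct))
```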
Fig 4
Fig 4. Visual illustration of tile weights for each face.
Tile weights, averaged over the whole participant sample. The weights are visualised with a green-red colour spectrum which is min-max-scaled within each face, so lowest weights are green and highest weights are red. These rescaled data are used for visualisation only.
Fig 5
Fig 5. Role of upper and lower face half.
Difference score “upper face half” minus “lower face half” for each of the 14 faces used in the experiment, with positive values indicating greater importance of the upper face half and negative values indicating greater importance of the lower face half; a, for the female face; b, for the male face; error bars represent 95% confidence intervals.
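The difference score can be obtained by splitting the tile grid into an upper and a lower half and subtracting the mean weights, as in the sketch below; treating the top four rows of an assumed 8 × 6 grid as the upper face half is an illustrative simplification.

```python
import numpy as np

def upper_minus_lower(weights, n_rows=8, n_cols=6):
    """Positive values: upper face half more important; negative: lower half."""
    grid = np.asarray(weights, dtype=float).reshape(n_rows, n_cols)
    return grid[: n_rows // 2].mean() - grid[n_rows // 2:].mean()

rng = np.random.default_rng(0)
print(upper_minus_lower(rng.random(48)))   # toy data standing in for one face
```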
Fig 6
Fig 6. Role of specific action units for emotion recognition.
Colours of each action unit as drawn on the face correspond to the colours of the bars, which are labelled with the respective action unit according to the Facial Action Coding System. Values indicate the importance of each action unit compared to the baseline of all non-action-unit tiles. Error bars represent 95% confidence intervals.
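A corresponding importance score for a single action unit could contrast the mean weight of its tiles with the mean weight of all tiles not belonging to any action unit; the exact contrast used by the authors is not given in this caption, so the version below is an assumption for illustration.

```python
import numpy as np

def au_importance(weights, au_tiles, all_au_tiles):
    """Mean weight of one AU's tiles minus the mean weight of all non-AU tiles.

    au_tiles: tile indices of the action unit of interest.
    all_au_tiles: tile indices belonging to any action unit (the rest is baseline).
    """
    weights = np.asarray(weights, dtype=float)
    baseline = np.delete(weights, list(all_au_tiles)).mean()
    return weights[list(au_tiles)].mean() - baseline

rng = np.random.default_rng(0)
w = rng.random(48)
print(au_importance(w, au_tiles=[1, 2, 7, 8], all_au_tiles=[1, 2, 7, 8, 40, 41]))
```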
Fig 7
Fig 7. Principal component analysis of face weights.
a, percentage of explained variance for the first five principal components; b, visualisation of the weights of each principal component in “tile space”, with positive weights in red and negative weights in blue; c, each face plotted in the space defined by the first two principal components; error bars represent 95% confidence intervals; d, same as c, but with the actual stimuli replacing the markers.
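A rough reconstruction of the analysis in this figure, assuming the input is a faces-by-tiles weight matrix (14 faces × 48 tiles) and using scikit-learn's PCA; the random data merely stand in for the averaged tile weights, and the 8 × 6 reshaping of the loadings is an assumed grid layout.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
face_weights = rng.random((14, 48))          # placeholder: 14 faces x 48 tile weights

pca = PCA(n_components=5)
scores = pca.fit_transform(face_weights)     # each face projected into component space

print(pca.explained_variance_ratio_)         # panel a: variance explained per component
loadings = pca.components_.reshape(5, 8, 6)  # panel b: component weights in "tile space"
xy = scores[:, :2]                           # panels c/d: faces in the first two components
```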
Fig 8
Fig 8. Representational similarity analysis (RSA).
Dissimilarity (1 − Pearson correlation) for all faces, projected into distances in 2D space by means of multidimensional scaling; a, RSA with (greyscale) pixel values of the images; b, RSA with tile weights from the emotion recognition task; note that the axes are not labelled, as they do not represent distinct dimensions and only the distances in 2D space are interpretable.
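The representational similarity analysis can be sketched as follows, assuming each face is represented either by its tile-weight vector (panel b) or by its flattened greyscale pixel values (panel a); metric MDS on a precomputed dissimilarity matrix follows the caption's description, while the random data are placeholders.

```python
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
face_vectors = rng.random((14, 48))     # tile weights (or flattened pixel values)

# representational dissimilarity matrix: 1 minus the pairwise Pearson correlation
rdm = 1.0 - np.corrcoef(face_vectors)

# project the 14 x 14 dissimilarities into 2D; only relative distances are meaningful
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(rdm)
print(coords)
```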

Grant support

Research was funded by the Deutsche Forschungsgemeinschaft (DFG; www.dfg.de), Cluster of Excellence 277 “Cognitive Interaction Technology”. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.