
Identification of Emotional Facial Expressions: Effects of Expression, Intensity, and Sex on Eye Gaze

  • Laura Jean Wells ,

    Contributed equally to this work with: Laura Jean Wells, Steven Mark Gillespie

    Affiliation School of Psychology, University of Birmingham, Birmingham, United Kingdom

  • Steven Mark Gillespie ,

    Contributed equally to this work with: Laura Jean Wells, Steven Mark Gillespie

    steven.gillespie@ncl.ac.uk

    Affiliation School of Psychology, University of Birmingham, Birmingham, United Kingdom

  • Pia Rotshtein

    Affiliation School of Psychology, University of Birmingham, Birmingham, United Kingdom

Abstract

The identification of emotional expressions is vital for social interaction, and can be affected by various factors, including the expressed emotion, the intensity of the expression, the sex of the face, and the gender of the observer. This study investigates how these factors affect the speed and accuracy of expression recognition, as well as dwell time on the two most significant areas of the face: the eyes and the mouth. Participants were asked to identify expressions from female and male faces displaying six expressions (anger, disgust, fear, happiness, sadness, and surprise), each at three levels of intensity (low, moderate, and normal). Overall, responses were fastest and most accurate for happy expressions, and slowest and least accurate for fearful expressions. More intense expressions were also classified more accurately. Reaction time showed a different pattern, with the slowest response times recorded for expressions of moderate intensity. Overall, responses were slowest, but also most accurate, for female faces. Relative to male observers, women showed greater accuracy and speed when recognizing female expressions. Dwell time analyses revealed that attention to the eyes was about three times greater than attention to the mouth, with fearful eyes in particular attracting longer dwell times. The mouth region was attended to most for fearful, angry, and disgusted expressions and least for surprise. These results extend previous findings by showing important effects of expression, emotion intensity, and sex on expression recognition and gaze behaviour, and may have implications for understanding the ways in which emotion recognition abilities break down.

Introduction

Accurate identification of emotional facial expressions (EFEs) is essential for everyday social interaction. However, the extent to which EFEs are generated for the purpose of social interaction, or are byproducts of the emotional experience, has been subject to some debate [1, 2]. The importance of communicating EFE information is emphasized by results showing that the processing of human EFEs is optimized [3, 4], and that the processing of certain EFEs occurs even when the face is presented outside of conscious awareness [5, 6]. Despite these findings, it has been argued that the processing of emotional faces nonetheless requires top-down control of attention [7].

Attentional allocation for emotional faces may be measured through the use of eye tracking techniques, with a close relationship observed between eye movements and spatial attention [8, 9]. Using these techniques, Eisenbarth and Alpers [10] showed that the recognition of human EFEs is dependent upon information from two main areas of interest (AOI): the eye region and the mouth region. However, the recognition of emotional expressions varies in relation to factors such as (a) the emotional face and (b) the characteristics of the observer. For example, the processing of human EFEs depends on the emotional content of the expression, with differences in accuracy and response times for different expressions previously reported [11]. Furthermore, the relative importance of diagnostic information obtained from the eye and the mouth regions depends on the expressed emotion [10, 12]. In this study, we revisited the topic of recognizing EFEs in an attempt to systematically assess the impact of various factors on accuracy, response times, and attention allocation to different features. Specifically, we focused on four factors that may affect the processing and classification of EFEs: the type of expression, the intensity of the expression, the sex of the face, and the gender of the observer.

Type of Expression

Earlier studies [13–17], as well as more recent ones [18–20], provide evidence for six basic facial emotional expressions: anger, disgust, fear, happiness, sadness, and surprise. These emotions can be reliably differentiated from one another and are identifiable cross-culturally at above-chance levels [18, 21]. Furthermore, research has consistently shown that the emotional content, or type of emotion expressed, can affect the accuracy with which EFEs are recognized [11, 22, 23]. It is also argued that humans are biologically “hard-wired” to recognize threat [24], such as that conveyed by a fearful or angry facial expression, and this position is supported by more rapid detection of angry facial expressions compared to happy expressions when situated amongst neutral stimuli [25].

Surprisingly, however, this threat recognition advantage does not translate to accuracy in the explicit recognition of EFEs. Rather, it is reported that happiness is the most accurately and rapidly recognized EFE, an effect known as the ‘happy face advantage’ [11, 22, 23], while fear is the least accurately recognized [11]. This suggests that different expressions may vary with respect to their function: while smiles may be aimed primarily at social communication, fearful and other threat related EFEs may represent the byproduct of an emotional experience. Here, threat related EFEs may be used as a cue to indicate potential danger in the environment (e.g., the presence of a predator) and can be acted upon by observers. Importantly, the communication of danger by threat related EFEs may be facilitated in the absence of explicit awareness or identification [26, 27].

The eye region and the surrounding area represent the most diagnostic facial features for accurate EFE identification. The importance of the eye region has been demonstrated using a partial masking method (bubbles) to show that information from the eyes (including the eyebrows) is key for the accurate recognition of all expressions [12]. This effect was primarily evident for recognition by a computer model, but also for human observers (though not for the surprised expression). This work also suggests that different features vary in the way that they contribute to the recognition of different expressions. Whereas information from the eyes contributes to the recognition of fear, anger, and sadness, information from the mouth contributes to the recognition of happiness, surprise, and disgust. These findings resonate with earlier results showing that fear, anger, and sadness are better recognized from the top half of the face, while happiness, disgust, and surprise are better recognized from the bottom half [28].

Results from eye tracking studies also support these observations. These studies have typically focused on the eye region and the mouth region as two main areas of interest (AOI), and have shown that dwell time is typically greater on the eye region across different types of expression [10]. It was also shown that dwell time on the eyes was approximately 35% longer than on the mouth for sad, fearful, angry, and neutral expressions, but only about 25% longer for happy expressions. The relatively greater attention to the mouth for happy compared with other expressions suggests that the mouth may play a more important role in identifying happy expressions. Conversely, the importance of scanning the eyes for fear recognition has been demonstrated by many studies [29–31]. For example, Adolphs et al. [29] found that SM, a patient who showed impaired fearful face recognition, also made fewer spontaneous saccades toward the eye region relative to healthy controls. However, SM’s fear recognition recovered when she was instructed to look at the eye region [29].

Intensity of Expression

Everyday expressions are typically displayed with low to mid intensities [32], and as such, expressions of varying intensity may provide more life-like representations [33]. Varying the intensity of emotional expressions can also make emotion recognition tasks more sensitive to subtle differences in the processing of different EFEs [34]. Although there have been fewer studies of expression intensity than of expression type, the consensus is that as the intensity of an EFE increases, the accuracy of identification also increases [35, 36]. This suggests that individuals are, in general, less accurate at identifying more subtle expressions and more accurate when EFEs are more intense. However, advantages for more intense expressions might reflect that recognition is often measured using forced choice responses where the options do not include neutral [35, 36]. Thus, it is possible that such methodological designs artificially force participants to attribute an emotion to a face that they would normally perceive as non-expressive.

Intensity has different effects on different expressions [37], although these effects do not appear to be consistent. Hoffmann and colleagues [37] examined accuracy for expressions at 50% and 100% intensity, and reported that changes in intensity had no effect on emotion recognition for fear and surprise expressions. However, using a different sample of participants, the same study reported that changing expression intensity had particular effects for expressions of anger, fear, and sadness. A different study suggests that recognition of happy, and to a lesser degree sad and disgust expressions, follows a sigmoidal shape in which performance asymptotes beyond 60% intensity [35]. Thus, despite some inconsistencies in the effects reported, the impact of intensity on EFE identification may differ according to the emotional content of the expression.

If the primary function of EFEs is social communication, then a similar pattern of results would be expected for response times and dwell time as has been observed for recognition accuracy. That is, the more ambivalent the expression, the slower the response time, and the more participants will scan the face for additional information. Indeed, Guo [38] showed an inverse relationship between fixation count and expression intensity (20%–100%), with more fixations on lower intensity expressions. However, the effect reached an asymptote beyond 60% intensity. The increase was observed for both the eyes and the mouth, and so the relative contribution of each feature to emotion identification was unaffected by expression intensity.

Sex of the face displaying the expression

As well as expression type and intensity, the sex of the face can also affect the identification of EFEs. In general, one of the most common beliefs across cultures with regards to gender and emotion is that women are more “emotional”, with women being expected to experience and express emotions more than men [39]. In line with this, studies have shown that women are typically more facially expressive than men [40], and that females’ non-verbal cues are more accurately judged [41]. The expectation therefore may be that all expressions are judged with more accuracy from women’s than from men’s faces. However, a range of research has demonstrated that the effects of sex may vary with the type of expression [35, 42].

Two complementary theories have been proposed to describe the relation between face gender and expression. The stereotype theory of emotion recognition suggests that a division exists between masculine emotions and feminine emotions [40, 43]. Specifically, anger and disgust are culturally viewed as more masculine and are associated with power, while happiness, sadness, and fear are culturally classed as more feminine and are less associated with power [39, 44]. Theoretically, if expressions are primarily aimed at social communication, it is expected that such stereotypical beliefs will affect recognition accuracy. A ‘Structural Similarities’ explanation suggests that the link between sex and emotions is not culturally driven but is based on the morphology of emotional facial expressions. Thus, sex related differences in face shape are associated with differences in expressive features. Zebrowitz and colleagues [45] support this idea by demonstrating gender-specific objective similarities between the appearances of certain emotional expressions using a connectionist modelling approach. They found that neutral male faces showed greater similarity to angry expressions than did female faces, while neutral female faces showed greater similarity to surprised faces [45].

An advantage for recognizing happy expressions from female faces has been repeatedly reported [35, 42, 46]. It has also been shown, albeit with less consistency, that disgust [35] and anger [46] are recognized better from male faces. Nonetheless, not all evidence is consistent with the stereotype or structural similarities theories. For example, Hess et al. [35] showed that sadness was better recognized from male faces, while Tucker and Friedman [47] found that angry female faces were more accurately judged than sad female faces.

Gender of the observer

The gender of the person identifying the emotion is a further variable of interest that may affect eye scan paths and recognition of EFEs. Like the belief that women are more emotionally expressive, it is also assumed that women are superior to men at recognizing facial expressions of emotion [48, 49]. The primary caretaker theory [50] attempts to explain this notion using evolutionary accounts that attribute human expression recognition superiority to females’ role in caring for offspring. Specifically, a mother who is more attuned to the emotions of her infant is more likely to promote a secure attachment, which in turn may lay the foundations for healthy development and functioning [48]. Similarly, it is also hypothesized that women have a higher empathizing capacity [51], which again may provide advantages when attempting to read the expressions of others [52].

Currently available evidence regarding female superiority in judging facial expressions is mixed. Montagne and colleagues [49] demonstrated an overall female superiority in a task measuring the processing of emotional faces. However, a meta-analysis revealed that out of 55 studies, only 11 showed a reliable female advantage in EFE recognition abilities [53]. It has been argued that female superiority might only be revealed when the amount of visual information is limited, either by manipulating the exposure duration [48], or the intensity of the expressions [37, 49, 54].

However, others have either found a limited effect of the sex of the observer on EFE recognition [55, 56], even under limited exposure durations [57], or did not report an interaction of observer sex with expression intensity [52]. When considering the different outcome measures used, the female superiority effect appears to be more reliably associated with differences in response time than with differences in accuracy [48, 56, 58].

The fitness to threat hypothesis predicts that a female superiority effect exists only for negative EFEs, including fear, disgust, sadness, and anger, because negative emotions are likely to signal a potential threat to the infant [48]. However, again the evidence in support of this theory is mixed and inconsistent. Hampson et al. [48] found evidence for a female superiority effect in response times for the recognition of negative emotions in particular. In contrast, others have found that men outperform women when identifying anger, but only when judging the emotion from other male faces [59].

In relation to eye scan paths for EFEs, it is suggested that although both male and female participants show a preference for the eye region, females typically attend more to the eye region compared with male participants, while male participants show greater attention to the mouth than do females [58].

The Present Study

In the current study we revisited the question of how different factors affect the classification of EFEs. We specifically focused on four factors: expression type, expression intensity, face sex, and observer sex. We tested both men and women, measuring accuracy and reaction time (RT) for the identification of emotional expressions varying in expression, intensity, and sex. We also recorded dwell time (i.e., the total duration of eye gaze) on the two key areas of interest (AOIs): the eyes and the mouth. We note that our AOIs were relatively large, with the mouth region including the philtrum or Cupid’s bow area (bottom of the nose) and the eye region including the eyebrows and the nasion point (top of the nose); these two latter regions have been suggested to be crucial for recognizing disgust and anger, respectively. We asked participants to identify EFEs from both male and female faces showing expressions of the six core emotions at varying levels of intensity. We manipulated intensity using morphs from neutral to a full-blown expression, and expressions were presented at 10%, 55%, and 90% intensity. Participants made forced choice responses from seven options: angry, disgust, fear, happy, sad, surprise, and neutral. Although none of the faces presented a fully neutral expression, this option was included to examine the extent to which low intensity expressions are perceived as neutral or are correctly judged to show emotional content. The inclusion of a neutral option represents one attempt to eliminate methodological limitations surrounding the use of forced choice designs that do not allow participants to label expressions as showing no emotional content.

The investigation of a variety of factors in this study (expression type, expression intensity, face sex, and observer sex) allows for a more in depth examination of those factors that contribute to EFE recognition and how these may interact with one another. Previous studies have generally opted to investigate a minimal number of variables that affect EFE recognition at any one time. This may account for some discrepancies in the literature, including contradictory findings around the impact of observer gender [49, 53]. Some have found a general female superiority effect, while others have found female superiority for only more subtle expressions [37, 49, 54]. Further, the use of multiple outcomes, including accuracy, RT, and dwell time allows for a more comprehensive understanding of the impact of these differing factors.

Hypotheses

Expression type: we predicted that happy faces would be identified most quickly and most accurately, consistent with robust evidence for a ‘happy face advantage’, while fearful expressions would be identified least accurately. We predicted that, in general, dwell time would be greater on the eyes than on the mouth. However, we predicted that dwell time would be relatively higher on the mouth of happy expressions, and on the eyes of fearful expressions.

Expression Intensity: We expected an increase in accuracy, and a reduction in RT and dwell-time, with increasing intensity. We expected that these differences would asymptote earlier for happy faces, with smaller differences in accuracy, RT, and dwell time between 55% and 90% expressive happy faces compared with other expression types.

The sex of the face: we predicted that happy expressions would be recognized best from female faces. However, evidence is mixed with respect to the effects of sex on recognizing other expressions. Theoretically (see above), it was hypothesized that sad and fear would be better recognized from female faces, while anger and disgust would be better recognized from male faces. Previous studies did not suggest different scanning patterns for male and female faces [52, 54].

The observer gender: We predicted that relative to male participants, female participants would respond faster in all conditions. We also predicted that there would be a female superiority effect in accuracy for low intensity expressions in particular. Finally, we also predicted that dwell times on the eyes would be longer among female compared with male participants.

Method

Participants

We recruited 39 participants (20 female; 19 male) from the undergraduate student population of a UK-based university. Participants ranged in age from 18 to 27 years (M = 20.36, SD = 1.91). The ethnicities of the participants were white Caucasian (n = 31) and Asian (n = 5). Some participants reported their ethnicity as ‘mixed’ (n = 2) and one participant chose not to report this information. All participants grew up in the UK. Participants were recruited through either the University of Birmingham research participation scheme, in return for course credit, or through volunteering in return for payment of £6.00. The University of Birmingham Committee for Ethical Review for Science, Technology, Engineering, and Mathematics granted ethical permission for this study. Each participant provided his or her written informed consent before the study began.

Materials

The EFE stimuli were chosen from the NimStim Face Stimulus Set ([60]; http://www.macbrain.org/resources.htm). Ten Caucasian models were chosen (5 female, 5 male), each demonstrating seven expressions: the six universal EFEs (happy, sad, angry, afraid, surprise, disgust) and a neutral expression. To ensure the expressions would be recognized reliably, the selection of faces was based on the NimStim norms data for recognizing expressions. The stimuli chosen included faces with open mouths for some of the expressions. The choice of whether to include an open or a closed mouth expression for each model was based on the NimStim validity data, with the most reliably recognized alternative being selected. Of the ten individual models, the number of open-mouthed stimuli selected for each expression was as follows: eight angry, nine disgust, nine fear, ten happy, two sad, ten surprise, and seven neutral.

To obtain different intensities of EFEs, each expression was morphed from the neutral face to 100% expressive using the STOIK Morph Man morphing software (http://www.stoik.com/products/video/STOIK-MorphMan/). Three intensities were selected for each EFE for each model: normal intensity (90%), moderate intensity (55%), and mild intensity (10%). This gives 18 expressions per model, and 180 faces in total. For example stimuli, see Gillespie et al. [61]. We used an EyeLink 1000 head mounted eye tracking system (SR Research Ltd.) to record eye gaze and dwell time. Although viewing was binocular, only movements of the participant’s left eye were recorded. Gaze location was sampled once every millisecond.
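The morphing itself was carried out with the STOIK MorphMan software, which uses landmark-based warping. As a purely illustrative alternative, the sketch below (in Python) shows the simplest possible intensity manipulation: a pixel-wise cross-dissolve between an aligned neutral image and a fully expressive image of the same model. The file names and the cross-dissolve approach are assumptions for illustration, not the procedure used to generate the stimuli.

```python
import numpy as np
from PIL import Image

def blend_intensity(neutral_path, expressive_path, intensity):
    """Cross-dissolve a neutral and a fully expressive image of the same
    (aligned) model; intensity is in [0, 1], e.g. 0.10, 0.55, or 0.90."""
    neutral = np.asarray(Image.open(neutral_path).convert("RGB"), dtype=float)
    expressive = np.asarray(Image.open(expressive_path).convert("RGB"), dtype=float)
    blended = (1.0 - intensity) * neutral + intensity * expressive
    return Image.fromarray(blended.astype(np.uint8))

# Hypothetical file names: one model, one expression, three intensity levels.
for level in (0.10, 0.55, 0.90):
    blend_intensity("model01_neutral.png", "model01_fear.png", level) \
        .save(f"model01_fear_{int(level * 100)}.png")
```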

Procedure

A full factorial design was used with type of expression (anger, disgust, fear, happy, sad, surprise), intensity (10%, 55%, 90%), and face sex (female, male) as within-subject factors, and observer gender (woman, man) as a between-subject factor. A total of 360 trials were presented, with 10 trials per condition. We presented each stimulus (a specific face with a specific expression and intensity) only twice, in two separate sessions, to minimize familiarization effects. The order of trials within each session was randomized. Each trial started with a fixation point presented for 500 ms, followed by the presentation of the face. Participants were asked to press the number on the keyboard corresponding to their chosen answer for each face. The seven possible answers (the six emotions and neutral) were listed vertically on the left side of the screen with a corresponding number for each emotion: 0 = neutral, 1 = angry, 2 = disgust, 3 = fear, 4 = happy, 5 = sad, 6 = surprise. Participants were asked to respond as fast and as accurately as possible. There was no time limit on each trial, and the next trial began only after a response had been made.
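The response mapping and accuracy coding described above (and in the Analysis section below) can be summarized in a small lookup table. The sketch below is illustrative only; the trial representation is hypothetical.

```python
# Key-to-label mapping taken from the procedure described above.
KEY_TO_LABEL = {
    "0": "neutral", "1": "angry", "2": "disgust", "3": "fear",
    "4": "happy", "5": "sad", "6": "surprise",
}

def score_trial(pressed_key: str, true_expression: str) -> bool:
    """A response is correct only if the chosen label matches the expressed
    emotion; 'neutral' never matches one of the six expressed emotions, so
    such responses are scored as errors."""
    return KEY_TO_LABEL.get(pressed_key) == true_expression

# Example: a '0' (neutral) response to a 10% fear face counts as an error.
assert score_trial("0", "fear") is False
```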

At the beginning of the experiment, a calibration and validation procedure was completed using nine points: one at fixation and the rest toward the edges of the screen. Eye tracking was recalibrated after every 20 trials, and a further nine-point calibration was performed after 120 trials.

Analysis

The analysis focused on three parameters: accuracy, RT, and dwell time. An accurate response was defined as selecting the correct EFE for each trial; ‘neutral’ responses were counted as inaccurate. The number correct for each emotion, at each level of intensity, for male and female faces, varied between 0 and 10. This is based on having five male and five female models showing each expression at each level of intensity, with each unique stimulus presented twice. RT was the time taken to make a response, independent of whether that response was correct or incorrect. We measured dwell time on two predetermined AOIs: the eyes and the mouth. The eye region comprised a 289 × 100 pixel rectangle that included both eyes, the eyebrows, and the area in between; the mouth region was a 208 × 139 pixel rectangle that included the mouth and its surroundings. We measured absolute dwell time for each AOI, that is, the total amount of dwell time across all fixations within each AOI.
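With gaze sampled at 1000 Hz, the absolute dwell time within a rectangular AOI is well approximated by the number of in-AOI samples. The sketch below illustrates that calculation; the AOI screen coordinates are hypothetical (the paper reports only the AOI sizes), and the EyeLink analysis software computes dwell time from detected fixations rather than from raw samples as done here.

```python
import numpy as np

def dwell_time_ms(gaze_x, gaze_y, aoi, sample_rate_hz=1000):
    """Approximate total dwell time (ms) inside a rectangular AOI as the
    number of gaze samples falling in the rectangle, scaled by the sampling
    rate. `aoi` is (left, top, right, bottom) in screen pixels."""
    x, y = np.asarray(gaze_x), np.asarray(gaze_y)
    left, top, right, bottom = aoi
    inside = (x >= left) & (x <= right) & (y >= top) & (y <= bottom)
    return inside.sum() * (1000.0 / sample_rate_hz)

# Hypothetical AOI placements; only the AOI sizes are reported in the paper
# (eyes: 289 x 100 px, mouth: 208 x 139 px).
EYES_AOI = (356, 260, 356 + 289, 260 + 100)
MOUTH_AOI = (396, 520, 396 + 208, 520 + 139)
```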

The data were first screened manually. Any trial with 50% or more of its eye-tracking samples falling outside the face area was deleted, as such trials most likely reflect drift in the measurement of eye movements rather than an accurate representation of a participant’s eye gaze. The data were then analyzed using a series of ANOVAs and paired t-tests, with a Bonferroni correction applied to all t-tests. Data for accuracy, RT, and dwell time were first analyzed including the participant’s gender as a between-subject factor.
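The 50% screening rule can be expressed in the same sample-based terms; the face-region rectangle used below is a hypothetical placeholder.

```python
import numpy as np

def keep_trial(gaze_x, gaze_y, face_box, max_outside=0.5):
    """Drop a trial when 50% or more of its gaze samples fall outside the
    face area (most likely tracker drift). `face_box` is a hypothetical
    (left, top, right, bottom) rectangle in screen pixels."""
    x, y = np.asarray(gaze_x), np.asarray(gaze_y)
    left, top, right, bottom = face_box
    inside = (x >= left) & (x <= right) & (y >= top) & (y <= bottom)
    fraction_outside = 1.0 - inside.mean()
    return fraction_outside < max_outside
```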

Where we failed to observe predicted effects, we followed this up by computing a Bayes factor (http://www.lifesci.sussex.ac.uk/home/Zoltan_Dienes/inference/Bayes.htm) to assess the strength of the evidence supporting the null hypothesis [62].
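The linked calculator (Dienes) computes a Bayes factor from a summary of the data (a mean difference and its standard error) together with a prior on the effect expected under H1. A minimal numerical sketch, assuming a half-normal prior scaled by a previously reported effect size, is shown below; the prior shape and the example numbers are assumptions for illustration and do not reproduce the values reported in the Results.

```python
import numpy as np
from scipy import stats

def bayes_factor_halfnormal(mean_diff, se, prior_sd, n_grid=20001, upper_mult=10):
    """Dienes-style Bayes factor (H1 over H0) for an effect summarised by its
    sample mean difference and standard error. H1 places a half-normal prior
    on the true effect (scaled by a previously reported effect, prior_sd);
    H0 fixes the effect at zero. Values below 1/3 are commonly taken as
    support for the null."""
    # Likelihood of the observed mean under H0 (true effect = 0).
    like_h0 = stats.norm.pdf(mean_diff, loc=0.0, scale=se)
    # Marginal likelihood under H1: integrate the likelihood over the prior.
    theta = np.linspace(0.0, upper_mult * prior_sd, n_grid)
    prior = 2.0 * stats.norm.pdf(theta, loc=0.0, scale=prior_sd)  # half-normal density
    like = stats.norm.pdf(mean_diff, loc=theta, scale=se)
    d_theta = theta[1] - theta[0]
    like_h1 = float(np.sum(prior * like) * d_theta)
    return like_h1 / like_h0

# Purely hypothetical numbers, shown only to illustrate the call signature.
print(bayes_factor_halfnormal(mean_diff=0.4, se=1.8, prior_sd=3.0))
```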

Results

Accuracy and RT

For analyses of accuracy and RT we used a 2 (face sex: male; female) × 3 (intensity: 10%; 55%; 90%) × 6 (expression: happy; sad; angry; fear; surprise; disgust) within-subjects ANOVA, with participant gender as a between-subject factor.

Effects of participant gender

The main effect of participant gender was significant for RT, F(1, 37) = 6.98, p < .05, ηp² = .16, with females responding faster overall than males (see Table 1). Although there was no significant effect of participant gender on accuracy, F(1, 37) = .06, p > .05, ηp² = .001, a Bayes factor was calculated for this effect given the clear prediction that females would be more accurate than males. This revealed a Bayes factor of 0.19, providing evidence in support of the null hypothesis of no difference in accuracy between male and female participants.
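Partial eta squared values of the form reported here can be recovered directly from an F statistic and its degrees of freedom, which provides a quick consistency check on the reported effect sizes.

```python
def partial_eta_squared(f_value, df_effect, df_error):
    """Recover partial eta squared from an F statistic and its degrees of
    freedom: eta_p^2 = (F * df1) / (F * df1 + df2)."""
    return (f_value * df_effect) / (f_value * df_effect + df_error)

# The RT effect of participant gender reported above: F(1, 37) = 6.98.
print(round(partial_eta_squared(6.98, 1, 37), 2))  # approximately 0.16, matching the reported value
```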

Table 1. Percent correct across intensity and emotion expressed for male and female participants categorizing male and female faces.

https://doi.org/10.1371/journal.pone.0168307.t001

However, the effect of participant gender on accuracy should be interpreted in light of a significant interaction of participant gender with the sex of the facial stimulus, F(1, 37) = 11.06, p < .01, ηp² = .20 (Table 1). This interaction showed that both women and men were more accurate in identifying the expressions of female than of male faces, although the difference was larger for female participants. The interaction of participant gender with the sex of the face was non-significant for RT, F(1, 37) = 1.45, p > .05, ηp² = .04.

Although we predicted in particular that there would be a female superiority effect for expressions at lower intensities, the interaction of participant gender with intensity was non-significant for both accuracy, F(2, 74) = 1.55, p > .05, ηp² = .04, and RT, F(2, 74) = 1.92, p > .05, ηp² = .05. There were no other effects involving the gender of the participant for either accuracy or RT (p > .1).

As the gender of the participant did not interact with any of the stimulus related factors, we collapsed across the responses of male and female participants for all subsequent analyses. The results are presented in Fig 1.

Fig 1.

Accuracy of emotion recognition for female (A) and male (B) faces, and response times for classifying female (C) and male (D) faces, by expression, and intensity.

https://doi.org/10.1371/journal.pone.0168307.g001

Effects of Expression Type

We observed a main effect of expression type for both accuracy, F(5, 190) = 40.69, p < .001, ηp² = .52, and RT, F(5, 190) = 26.01, p < .001, ηp² = .41. For accuracy, Bonferroni corrected pairwise comparisons showed that fear was the least accurately recognized expression, while happy was judged the most accurately, compared with all other expressions (p < .05). Similarly, comparisons for RT showed that fear was recognized the slowest compared with all other expressions, while happy was recognized more quickly than disgust, fear, sad, and surprise (p < .05). However, the effects of expression type should be interpreted in light of a significant interaction of expression, intensity, and sex of the face, for both accuracy and RT, described below.

Effects of expression intensity

We also found a significant effect of expression intensity on accuracy, F(2, 76) = 4961.35, p < .001, ηp² = .99, with higher intensity expressions (10% < 55%, 55% < 90%) associated with a greater degree of accuracy across all emotions (all comparisons p < .001). In addition, we found that most (around 80%) of the 10% expressions were categorized as neutral across all expressions (SD ≈ 1.7% across participants) (see Fig 2). This made for generally low levels of accuracy when judging the lowest intensity (10%) expressions, as neutral responses were classified as incorrect. Hence, overall accuracy appears low in Table 1 given the low number of correctly categorized expressions at 10% intensity.
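Confusion matrices of the kind shown in Fig 2 can be assembled from trial-level data by cross-tabulating the expressed emotion against the response within each intensity level; the column names in the sketch below are hypothetical.

```python
import pandas as pd

def confusion_matrix_percent(trials: pd.DataFrame, intensity: int) -> pd.DataFrame:
    """Percentage of responses (columns) given to each expressed emotion (rows)
    at a single intensity level. Assumes hypothetical columns 'intensity',
    'expression', and 'response' holding the labels used in the task."""
    subset = trials[trials["intensity"] == intensity]
    counts = pd.crosstab(subset["expression"], subset["response"])
    return counts.div(counts.sum(axis=1), axis=0) * 100

# e.g. confusion_matrix_percent(trials, 10) should show most 10% expressions
# labelled 'neutral', as reported above.
```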

Fig 2.

Confusion matrices showing the percentage of participants’ responses to all six emotions for 10% (A), 55% (B) and 90% (C) intensities.

https://doi.org/10.1371/journal.pone.0168307.g002

There was also a significant effect of intensity on RT, F(2, 76) = 30.86, p < .001, ηp² = .45. However, comparisons showed that the effects were not linear. RTs were slowest for 55% expressions compared with 10% and 90% (p < .001), while 90% expressions were judged more slowly than 10% expressions (p < .01). However, the effects of expression intensity should be interpreted in light of a significant interaction of expression, intensity, and sex of the face, for both accuracy and RT, described below.

Effects of face sex

We also found that although female expressions were judged more accurately than male expressions, F(1, 38) = 66.33, p < .001, ηp² = .64, male expressions were judged more quickly, F(1, 38) = 23.44, p < .001, ηp² = .38. However, the observed effects for the sex of the face should be interpreted in light of the interactions reported below.

Interactions of expression, intensity and face sex

We found a significant three-way interaction of expression, intensity, and sex of the face for accuracy, F(10, 380) = 9.78, p < .001, ηp² = .20. To unpack the interaction we computed separate 2 (sex) × 3 (intensity) ANOVAs for each expression. To account for multiple comparisons we applied a Bonferroni correction, with results interpreted as significant at an alpha level of p < .008 (i.e., .05/6). Using this stringent criterion, an interaction of face sex with emotion intensity was observed for disgust and happy expressions (see Table 2). The interaction for fearful and sad expressions did not survive the correction, while interactions for anger and surprise failed to reach significance.

Table 2. Simple effects for interaction of face sex with emotion intensity for accuracy and RT for expressions of anger, disgust, fear, happy, sad, surprise.

https://doi.org/10.1371/journal.pone.0168307.t002

A follow-up breakdown of the disgust and happy expressions at each intensity showed that the sex of the face had no effect for disgust expressions at low intensity (10%), or for happy expressions at full intensity (90%). Furthermore, the pattern for these two expressions was opposite to that predicted, with participants showing greater accuracy when recognizing disgust from female faces, and happy from male faces. Both of these effects were most pronounced at 55% intensity (see Table 2). However, the structural similarities and stereotype theories predict better recognition of happy from female faces, and disgust from male faces. We therefore computed Bayes factors to test the strength of the evidence for the null hypothesis. Based on the effects observed by [42], we found a Bayes factor of 0.03 for the effect of face gender on accuracy for both disgust and happy expressions. As such, when considering the effect size in the expected direction, the current data support the null hypothesis.

A three-way interaction was also observed for RT, F(10, 380) = 2.51, p < .01, ηp² = .06. Applying the same approach as above, Bonferroni corrected interactions of intensity and sex were observed for happy expressions only. Bonferroni corrected paired samples t-tests showed that the sex of the face had no significant effect on RT for expressions at 55% or 90% intensity; however, female happy expressions took longer to identify than male ones at low intensity. See Table 2 for details.

In summary, the pattern of results for accuracy and RT were similar with respect to the expression manipulation, with fearful expressions being the most difficult to recognize and happy being the easiest. However, the sex of the face and the intensity of the expression had different effects on accuracy and RT. Specifically, we showed that accuracy was higher for female faces (with the exception of happy) but responses were also slower. The extent of this interaction depended on the expression and intensity level, and was most pronounced for disgust and happy expressions. Furthermore, while accuracy results were linearly related to the intensity manipulation, RT results showed that response times were slower for moderate and likely more ambivalent intensity expressions.

Dwell time

Effects of participant gender.

The gender of the participant did not affect the gaze pattern or interact with any of the factors for analyses of dwell time. For simplicity in reporting the results we therefore removed participant gender as a between subject factor.

Effects of area of interest.

The results collapsed across the gender of the participant are presented in Fig 3. We used a 2 (AOI: eyes; mouth) × 2 (face sex: male; female) × 3 (intensity: 10%; 55%; 90%) × 6 (expression type: happy; sad; angry; fear; surprise; disgust) within-subjects ANOVA for the analysis of dwell time on the eyes and the mouth. There was a main effect of AOI, F(1, 38) = 100.69, p < .001, ηp² = .73, with longer dwell times on the eyes (M = 844.49, SE = 50.32) compared with the mouth (M = 245.53, SE = 22.97).

Fig 3.

Dwell time on the eyes (A) and the mouth (B) of emotional facial expressions by expression, and intensity.

https://doi.org/10.1371/journal.pone.0168307.g003

Interaction of area of interest with intensity and expression.

We also observed a significant interaction of intensity and expression with AOI, F(10, 380) = 2.36, p < .01, ηp² = .06. To better understand this interaction, we examined dwell times separately for the eye region and the mouth region.

An ANOVA of sex, intensity, and expression for dwell time on the eyes showed a significant interaction of expression and intensity, F(10, 380) = 9.03, p < .001, ηp² = .19. When further broken down by intensity, we observed significant effects of expression at 55% intensity, F(5, 190) = 9.02, p < .001, ηp² = .19, and at 90% intensity, F(5, 190) = 25.53, p < .001, ηp² = .40. The effect of expression for faces at 10% intensity was non-significant, F(5, 190) = 1.12, p > .05, ηp² = .03.

Table 3 shows the results of all pairwise comparisons for dwell time on the eyes of 55% and 90% expressions. At 55% intensity, Bonferroni corrected pairwise comparisons (p < .003) showed that dwell time on the eyes was greater for fearful compared with all other expressions except sadness. Dwell time on the eyes of happy expressions was also significantly lower than for sad and surprised expressions (p < .003). At 90% intensity, dwell time on the eyes was again greater for fearful compared with all other expressions, and dwell time was shortest on the eyes of happy expressions compared with all other expressions.
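The comparisons in Table 3 amount to paired t-tests over all 15 pairs of the six expressions, evaluated against a Bonferroni-adjusted alpha of .05/15 ≈ .003. A sketch of this procedure is given below, assuming a hypothetical mapping from expression labels to per-participant mean dwell times.

```python
from itertools import combinations
from scipy import stats

def bonferroni_paired_tests(dwell_by_expression, alpha=0.05):
    """Paired t-tests between all expression pairs on per-participant mean
    dwell times. `dwell_by_expression` maps an expression label to an array
    with one value per participant (hypothetical input format)."""
    pairs = list(combinations(sorted(dwell_by_expression), 2))
    corrected_alpha = alpha / len(pairs)  # .05 / 15 for six expressions
    results = {}
    for a, b in pairs:
        t, p = stats.ttest_rel(dwell_by_expression[a], dwell_by_expression[b])
        results[(a, b)] = (t, p, p < corrected_alpha)
    return results
```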

Table 3. Paired sample t-tests comparing dwell time on the eye region of emotional expressions at 55% and 90% intensity.

https://doi.org/10.1371/journal.pone.0168307.t003

The interaction of intensity and expression for dwell time on the mouth was also significant, F(10, 380) = 3.20, p < .001, ηp² = .08. When broken down by intensity, there was a significant effect of expression for faces at 55% intensity, F(5, 190) = 6.31, p < .001, ηp² = .14, and at 90% intensity, F(5, 190) = 4.60, p < .001, ηp² = .11. The effect of expression for faces at 10% intensity was non-significant, F(5, 190) = 1.07, p > .05, ηp² = .03. Fig 3B shows dwell time on the mouth of emotional expressions as a function of the emotion expressed and the intensity of the expression.

Table 4 shows the results of all pairwise comparisons for dwell time on the mouth of 55% and 90% expressions. For faces at 55% intensity, Bonferroni corrected comparisons showed longer dwell time on the mouth of disgusted compared with happy and surprised expressions. Dwell time was also shorter on the mouth of surprised expressions compared with fearful and sad expressions. At 90% intensity, we showed that dwell time was shorter on the mouth of surprised compared with angry and fearful expressions.

Table 4. Paired samples t-tests comparing dwell time on the mouth region of different emotional expressions at 55% and 90% emotional intensity.

https://doi.org/10.1371/journal.pone.0168307.t004

Discussion

Emotional expression recognition represents a crucial part of successful social interaction, allowing one to communicate valence-specific information to an observer, and allowing others to infer the emotional state of the expresser. Emotionally salient aspects of the face, namely the eye region and the mouth region, provide diagnostic information for emotion classification [10]. However, accuracy of expression recognition, and attention to the eyes and the mouth, may vary with particular characteristics of the observed expression. Furthermore, there is debate in the literature as to the precise role of EFE information; more specifically, it is debated whether EFEs serve a social interaction function or are byproducts of the emotional experience. The former account might suggest that different outcome measures should show a similar pattern, with increasing accuracy associated with quicker RTs and shorter dwell times. The aim of this study was to investigate the effects of four factors on emotional expression recognition: expression type, expression intensity, the gender of the face displaying the expression, and the gender of the observer. We start by summarizing the results in relation to our specific hypotheses.

In line with our predictions, happy was the fastest and most accurately recognized expression, and fear was recognized the slowest and least accurately. These findings are consistent with a ‘happy face advantage’ [12, 22], rather than with a proposed evolutionarily based advantage for judging negative emotions [25]. We also found, consistent with earlier findings [10], that dwell times were longer on the eyes than on the mouth, and that this difference was largest for fearful expressions. These findings support the notion that information from the eye region is more salient in the process of recognizing fear compared with other emotions [12, 29], and are consistent with the suggestion that the widening of the eye whites may represent the critical diagnostic feature for fear [63].

The pattern of results differed for dwell times on the mouth, with longer dwell times for disgust expressions, and shorter dwell times for surprise. Thus, while the eyes may be of relatively reduced importance for recognizing happy expressions, we did not observe relatively increased attention to the mouth. However, when looking only at the earliest fixations on the face, it has been found that more fixations were devoted to the upper lip region for happy/joy expressions compared with the mean [64]. The finding of reduced attention to the eye region for happy faces may reflect the unique shape of the mouth in these expressions, with diagnostic information from the eyes being of relatively less importance [64]. Surprisingly, however, when information from either the eye or the mouth region is masked, a computer algorithm can still identify happy expressions at above 90% classification accuracy, suggesting that information from either region alone is sufficient for making accurate judgments [65].

It is debated whether the primary role of EFE information is aimed at social interaction, or is a byproduct of emotional experience. The consistent finding of differences in the ease with which different expressions are recognized supports the idea that different expressions serve different primary functions. Smith and Schyns [66] present evidence in favor of differing functions, and show that different EFEs are recognized with varying success over different distances. These authors note that “catastrophic” transformations occur in happy and surprised faces, whereby the mouth opens revealing the teeth. Furthermore, they show that these catastrophic changes are communicated with greater sensitivity over a range of distances, consistent with an explicit function for social interaction for happy and surprised faces. Thus, an explicitly recognizable smiling face might communicate positive emotion and signal that the individual is willing to engage in reciprocal altruism [67].

Conversely, it was found that fear and anger were poorly recognized across a range of viewing distances [66]. As commented by the authors, this finding is surprising for signals communicating potential threat or danger, with the expectation being that such signals should be easily recognizable across a range of distances. Although fear expressions may not serve an explicit social interactional function, these expressions nonetheless serve to communicate a source of threat in the environment. Importantly, this can happen rapidly and in the absence of explicit identification [26, 27]. Furthermore, Frith [2] notes that even in the absence of explicit recognition, mimicking the features of a fearful face, that is, widened eyes and dilated nostrils, may also serve to increase vigilance, widening the field of vision and increasing inhalation and sense of smell [68]. Thus, different expressions may diverge in the extent to which their primary function is social interaction or a by-product of the emotional experience.

We expected an increase in accuracy and a reduction in RT and dwell time with increasing levels of intensity. In line with our hypothesis, and consistent with the findings of others [35, 36], accuracy increased with expression intensity. However, the relationships for RT and dwell time did not follow the expected pattern. Rather, we observed longer RTs for medium intensity, and likely more ambivalent, emotional expressions. That low intensity expressions were categorized the fastest likely reflects that these expressions may have consistently been judged to be neutral. In support of this, the neutral option in the current study was selected on 84% of trials displaying a facial stimulus of low (10%) expressive intensity. Thus, participants may have been relatively insensitive to the low levels of emotional content. This pattern of results also suggests that participants found the classification of moderate intensity expressions the most difficult. The findings for modified intensity expressions may most closely resemble the processing of EFEs outside of the lab, with these expressions argued to provide more life-like representations of each expression [33], and to be most sensitive to subtle differences in the processing of EFEs [34]. The effect of intensity on eye scan paths was dependent on the type of expression, and is discussed in more detail below.

Based on the stereotype and structural similarities theories, it was also predicted that different expression types would be recognized with more or less ease depending on the sex of the face showing the expression. However, we failed to find support for these theoretically driven predictions. In fact, for the expression stimuli used in the current experiment, we observed the opposite pattern: happy expressions were more accurately identified from male faces, while disgust was more accurately identified from female faces. The calculation of Bayes factors based on previous effect sizes, however, suggests that the data are most consistent with the null hypothesis. We also observed a speed/accuracy trade-off in the recognition of male and female EFEs, with female faces being recognized with more accuracy, and male expressions recognized with more speed. In line with previous findings, we did not observe any significant differences in the pattern of eye scan paths for male and female faces. Finally, in contrast to earlier findings [58], we did not find any evidence for differences in the way women and men recognize and scan facial expressions.

Analyses of accuracy and RT revealed an influence of the sex of the face and the sex of the observer, with female faces recognized more accurately by both sexes, although this female face advantage was larger for female participants. Conversely, male expressions were classified more quickly than female expressions, and although the interaction with observer gender was not significant, male participants appeared to show a relatively greater difference in RT, being more than 100 ms slower to identify female compared with male expressions. However, these results only partially support the attachment promotion theory, which suggests that females are more adept at EFE identification in general [48, 49].

Although female participants showed some degree of superiority in correctly classifying female compared with male faces, the only evidence for a more generalized pattern of female superiority was the finding of overall faster RTs. We also found no support for the fitness to threat theory of female superiority in identifying specific threat related emotions [48, 49]. Rather, the present findings suggest that any differences in emotion recognition abilities between male and female participants may lie in the sex of the face that they are observing. Although these findings show some support for sex differences in the processing of male and female EFE information, the similarities in eye tracking parameters are not consistent with broader, more general differences in the cognitive systems underlying EFE recognition in male and female participants. However, the exploration of gender-based differences in this paper was based on relatively small sample sizes, and this should be considered when interpreting the observed effects. Although the absence of some predicted effects may reflect low statistical power, where predicted effects were not observed, or were observed in the opposite direction, the calculation of Bayes factors (based on previous effect sizes) suggested that the current data typically showed support for the null hypothesis.

For expressions at 55% intensity, dwell times were longer on the mouth for disgusted compared with happy and surprised expressions. Consistent with this finding, it has been shown that the mouth region may reveal information for expressions of disgust that can be used with high optimality [64]. The finding of relatively increased attention to the mouth of disgust expressions at lower intensities is similar to earlier findings [64], and supports the conclusion that when judging more ambiguous or lower intensity expressions, greater attention is allocated to those regions that contain emotion specific diagnostic information [64]. However, this finding may also reflect methodological issues around the selected stimuli. The process for creating moderate intensity expressions involved morphing faces expressing emotional content with neutral expressions. As a result of this process, some faces may appear obscure, and this is particularly true for open mouthed expressions of disgust: when morphed with neutral, the tongue can appear translucent and may attract the focus of attention. These issues call into question the extent to which modified intensity expressions truly resemble real-world expressions [33].

While the morphing process for creating modified intensity expressions is subject to certain limitations, such as those described above, it remains the most common way to create emotionally expressive face stimuli of varying intensity. A set of more naturalistic expressions rated for emotional intensity would help to overcome some of these difficulties and would be more ecologically valid. Alternatively, the use of dynamic faces showing increasing emotional intensity would better reflect task demands in the real world, where expressions are seldom still. A further methodological consideration involves the predefined placement of AOIs across all faces. Although the eye and the mouth AOIs were consistent in terms of their size and shape, different facial proportions mean that there was some degree of variation in the contents of the AOIs for different faces. For example, for some faces the mouth AOI included the philtrum or Cupid’s bow, but this was absent for other faces. The predetermined placement of AOIs, however, limits the inherent subjectivity of manually placing AOIs for each expression.

A final issue to consider is the inclusion of a ‘neutral’ option that participants could select if the face appeared to show little or no emotional content. Even for very low intensity expressions, which were 90% neutral and only 10% expressive, neutral responses were recorded as incorrect. However, this design allowed us to explore whether or not participants were sensitive to very low levels of emotional content, and the effects of lowered intensity on eye scan paths. Furthermore, including the neutral option also made the task more representative of task demands during real world social interactions, where faces expressing little emotional content are perhaps more likely to be dismissed as neutral.

Conclusion

Here we show that during free viewing of EFE stimuli, accuracy rates, RTs, and eye scan paths can vary with the type and degree of emotional content on display. In particular, we found that fearful and happy expressions produced the most pronounced effects, with fearful expressions recognized with the least speed and accuracy, and happy expressions recognized with the greatest speed and accuracy. The identification of fearful and happy expressions may therefore be supported by different underlying mechanisms for emotion recognition, a conclusion supported by the observation of differential eye scan paths for these expressions. Although dwell time was typically greater on the eyes than on the mouth across all expressions, this effect was particularly pronounced for fearful expressions, and was least pronounced for happy faces. The observed effects in relation to the sex of the face were generally complex, and were dependent upon both the intensity and the emotional content of the expression. We would suggest that future studies consider manipulating and examining the sex of the expressive face, as well as the effects of intensity and emotion. In contrast, observer gender did not interact with any of the factors, and no differences in eye scan paths were observed between male and female participants. These findings fail to support theories of a general female superiority effect, or of sex-specific processing of emotional faces.

The results reported here provide a detailed account of emotion recognition in a neurotypical sample and show that various parameters, including accuracy, RT, and dwell time on the eyes and the mouth, are sensitive to differences in the type of expression, the intensity of the expression, and the gender of the face displaying the expression. The extent to which these variables affect similar parameters in populations that are characterized by impairments in emotion recognition might help to elucidate the underlying mechanisms for these problems. For example, individuals with psychopathic tendencies [69–73], patients with autism [74], and patients with schizophrenia [75] show impaired EFE recognition abilities, and these impairments may reflect abnormalities in the allocation of attention to affective faces [61, 76–78]. Analyses of eye scan paths may also help to elucidate differences in the ways that these disorders manifest in male and female patients. Similarly, reduced attention to the eye region with increasing age may also explain relatively impaired emotion recognition abilities among the elderly [79].

Author Contributions

  1. Conceptualization: LW SMG PR.
  2. Formal analysis: LW SMG PR.
  3. Funding acquisition: PR.
  4. Investigation: LW.
  5. Methodology: LW SMG PR.
  6. Supervision: SMG PR.
  7. Visualization: LW SMG PR.
  8. Writing – original draft: LW SMG PR.
  9. Writing – review & editing: LW SMG PR.

References

  1. Parkinson B. Do facial movements express emotions or communicative motives? Pers Soc Psychol Rev. 2005; 9(4): 278–311. pmid:16223353
  2. Frith C. Role of facial expressions in social interactions. Phil Trans R Soc B. 2009; 364: 3453–3458.
  3. Öhman A, Esteves F, Soares JJ. Preparedness and preattentive associative learning: Electrodermal conditioning to masked stimuli. J Psychophysiol. 1995; 9: 99–108.
  4. Vuilleumier P, Armony JL, Driver J, Dolan RJ. Effects of attention and emotion on face processing in the human brain: An event-related fMRI study. Neuron. 2001; 30: 829–841. pmid:11430815
  5. Morris JS, Öhman A, Dolan RJ. Conscious and unconscious emotional learning in the human amygdala. Nature. 1998; 393: 467–470. pmid:9624001
  6. Whalen PJ, Rauch SL, Etcoff NL, McInerney SC, Lee MB, Jenike MA. Masked presentations of emotional facial expressions modulate amygdala activity without explicit knowledge. J Neurosci. 1998; 18: 411–418. pmid:9412517
  7. Pessoa L, McKenna M, Gutierrez E, Ungerleider LG. Neural processing of emotional faces requires attention. P Natl Acad Sci USA. 2002; 99: 11458–11463.
  8. Engbert R, Kliegl R. Microsaccades uncover the orientation of covert attention. Vision Res. 2003; 43: 1035–1045. pmid:12676246
  9. Smith DT, Rorden C, Jackson SR. Exogenous orienting of attention depends upon the ability to execute eye movements. Curr Biol. 2004; 14: 792–795. pmid:15120071
  10. Eisenbarth H, Alpers GW. Happy mouth and sad eyes: Scanning emotional facial expressions. Emotion. 2011; 11: 860–865. pmid:21859204
  11. Kirouac G, Dore F. Accuracy and latency of judgement of facial expressions of emotions. Percept Mot Skills. 1983; 57: 683–686.
  12. Smith M, Cottrell G, Gosselin F, Schyns P. Transmitting and decoding facial expressions. Psychol Sci. 2005; 16: 184–189. pmid:15733197
  13. Ekman P. Universal and cultural differences in facial expression of emotion. In: Cole JR, editor. Nebraska symposium on motivation. Vol. 19. Lincoln: University of Nebraska Press; 1971. pp. 207–283.
  14. Ekman P. Facial expressions of emotion: New findings, new questions. Psychol Sci. 1992; 3: 34–38.
  15. Ekman P. Are there basic emotions? Psychol Rev. 1992; 99: 550–553. pmid:1344638
  16. Ekman P. Facial expression and emotion. Am Psychol. 1993; 48: 384–392. pmid:8512154
  17. Ekman P, Friesen WV. Constants across cultures in the face and emotion. J Pers Soc Psychol. 1971; 17: 124–129. pmid:5542557
  18. Elfenbein A, Ambady N. On the universality and cultural specificity of emotion recognition: A meta-analysis. Psychol Bull. 2002; 128: 203–235. pmid:11931516
  19. Matsumoto D, Kupperbusch C. Idiocentric and allocentric differences in emotional expression, experience, and the coherence between expression and experience. Asian J Soc Psychol. 2001; 4: 113–131.
  20. Ekman P, Friesen WV. Unmasking the face: A guide to recognizing emotions from facial clues. Los Altos, CA: Ishk; 2003.
  21. Ekman P. Universal facial expressions of emotion. In: LeVine R, editor. Culture and personality: Contemporary readings. New Jersey: Transaction Publishers; 1974. pp. 8–14.
  22. Kirita T, Endo M. Happy face advantage in recognizing facial expressions. Acta Psychol. 1995; 89: 149–163.
  23. Leppänen J, Tenhunen M, Hietanen J. Faster choice-reaction times to positive than to negative facial expressions: The role of cognitive and motor processes. J Psychophysiol. 2003; 17: 113–123.
  24. Öhman A. Fear and anxiety as emotional phenomena: Clinical phenomenology, evolutionary perspectives, and information-processing mechanisms. In: Lewis M, Haviland JM, editors. Handbook of emotions. New York: Guilford Press; 1993. pp. 511–536.
  25. Fox E, Lester V, Russo R, Bowles R, Pichler A, Dutton K. Facial expressions of emotion: Are angry faces detected more efficiently? Cogn Emot. 2000; 14(1): 61–92.
  26. Öhman A, Mineka S. Fears, phobias, and preparedness: Toward an evolved module of fear and fear learning. Psychol Rev. 2001; 108(3): 483–522. pmid:11488376
  27. Vuilleumier P. How brains beware: Neural mechanisms of emotional attention. Trends Cogn Sci. 2005; 9(12): 585–594. pmid:16289871
  28. Calder AJ, Young AW, Keane J, Dean M. Configural information in facial expression perception. J Exp Psychol Hum Percept Perform. 2000; 26(2): 527–551. pmid:10811161
  29. Adolphs R, Gosselin F, Buchanan TW, Tranel D, Schyns P, Damasio AR. A mechanism for impaired fear recognition after amygdala damage. Nature. 2005; 433: 68–72. pmid:15635411
  30. Whalen PJ, Shin LM, McInerney SC, Fischer H, Wright CI, Rauch SL. A functional MRI study of human amygdala responses to facial expressions of fear versus anger. Emotion. 2001; 1: 70–83. pmid:12894812
  31. Morris J, deBonis M, Dolan R. Human amygdala responses to fearful eyes. NeuroImage. 2002; 17(1): 214–222. pmid:12482078
  32. Motley M, Camden C. Facial expression of emotion: A comparison of posed expressions versus spontaneous expressions in an interpersonal communication setting. West J Speech Commun. 1988; 51(1): 1–22.
  33. Adolphs R, Tranel D. Impaired judgements of sadness but not happiness following bilateral amygdala damage. J Cogn Neurosci. 2004; 16: 453–462.
  34. Calder AJ, Young AW, Perrett DI, Etcoff NL, Rowland D. Categorical perception of morphed facial expressions. Vis Cogn. 1996; 3: 81–118.
  35. Hess U, Blairy S, Kleck RE. The intensity of emotional expressions and decoding accuracy. J Nonverbal Behav. 1997; 21: 241–257.
  36. Rotshtein P, Richardson MP, Winston JS, Kiebel SJ, Vuilleumier P, Eimer M, Dolan RJ. Amygdala damage affects event-related potentials for fearful faces at specific time windows. Hum Brain Mapp. 2010; 31: 1089–1105. pmid:20017134
  37. Hoffmann H, Kessler H, Eppel T, Rukavina S, Traue HC. Expression intensity, gender and facial emotion recognition: Women recognize only subtle facial emotions better than men. Acta Psychol. 2010; 135: 278–283. pmid:20728864
  38. Guo K. Holistic gaze strategy to categorize facial expression of varying intensities. PLoS ONE. 2012; 7(8): e42585. pmid:22880043
  39. Adams R, Hess U, Kleck R. The intersection of gender-related facial appearance and facial displays of emotion. Emot Rev. 2015; 7(1): 5–13.
  40. Brody LR, Hall J. Gender and emotion. In: Lewis M, Haviland J, editors. Handbook of emotions. New York: Guilford Press; 1993. pp. 447–460.
  41. Hall JA, Carter JD, Horgan TG. Gender differences in nonverbal communication of emotion. In: Fischer AH, editor. Gender and emotion: Social psychological perspectives. Cambridge: Cambridge University Press; 2000. pp. 97–117.
  42. Tucker JS, Riggio RE. The role of social skills in encoding posed and spontaneous facial expressions. J Nonverbal Behav. 1988; 12: 87–97.
  43. Le Gal P, Bruce V. Evaluating the independence of sex and expression in judgments of faces. Percept Psychophys. 2002; 64(2): 230–243. pmid:12013378
  44. Plant A, Kling K, Smith G. The influence of gender and social role on the interpretation of facial expressions. Sex Roles. 2004; 51(3/4): 187–196.
  45. Zebrowitz L, Kikuchi M, Fellous J. Facial resemblance to emotions: Group differences, impression effects, and race stereotypes. J Pers Soc Psychol. 2010; 98(2): 175–189. pmid:20085393
  46. Becker D, Kenrick D, Neuberg S, Blackwell K, Smith D. The confounded nature of angry men and happy women. J Pers Soc Psychol. 2007; 92(2): 179–190. pmid:17279844
  47. Tucker JS, Friedman HS. Sex differences in nonverbal expressiveness: Emotional expression, personality, and impressions. J Nonverbal Behav. 1993; 17: 103–117.
  48. Hampson E, Anders S, Mullin L. A female advantage in the recognition of emotional facial expressions: Test of an evolutionary hypothesis. Evol Hum Behav. 2006; 27: 401–416.
  49. Montagne B, Kessels R, Frigerio E, de Haan E, Perrett D. Sex differences in the perception of affective facial expressions: Do men really lack emotional sensitivity? Cogn Process. 2005; 6(2): 136–141. pmid:18219511
  50. Babchuk W, Hames R, Thompson RA. Sex differences in the recognition of infant facial expressions of emotion: The primary caretaker hypothesis. Ethol Sociobiol. 1985; 6(2): 89–101.
  51. Baron-Cohen S. The extreme male brain theory of autism. Trends Cogn Sci. 2002; 6(6): 248–254. pmid:12039606
  52. Hall JK, Hutton SB, Morgan MJ. Sex differences in scanning faces: Does attention to the eyes explain female superiority in facial expression recognition? Cogn Emot. 2010; 24: 629–637.
  53. Hall J. Gender effects in decoding nonverbal cues. Psychol Bull. 1978; 85(4): 845–857.
  54. Hall JA, Matsumoto D. Gender differences in judgments of multiple emotions from facial expressions. Emotion. 2004; 4: 201–206. pmid:15222856
  55. Palermo R, Coltheart M. Photographs of facial expression: Accuracy, response times, and ratings of intensity. Behav Res Methods Instrum Comput. 2004; 36(4): 634–638. pmid:15641409
  56. Rahman Q, Wilson G, Abrahams S. Sex, sexual orientation, and identification of positive and negative facial affect. Brain Cogn. 2004; 54: 179–185. pmid:15050772
  57. Grimshaw G, Bulman-Fleming M, Ngo C. A signal-detection analysis of sex differences in the perception of emotional faces. Brain Cogn. 2004; 54(3): 248–250. pmid:15050785
  58. Vassallo S, Cooper SL, Douglas JM. Visual scanning in the recognition of facial affect: Is there an observer sex difference? J Vis. 2009; 9: 1–10.
  59. Rotter NG, Rotter GS. Sex differences in the encoding and decoding of negative facial emotions. J Nonverbal Behav. 1988; 12(2): 138–147.
  60. Tottenham N, Tanaka JW, Leon AC, McCarry T, Nurse M, Hare TA, Nelson C. The NimStim set of facial expressions: Judgments from untrained research participants. Psychiat Res. 2009; 168: 242–249.
  61. Gillespie SM, Rotshtein P, Wells LJ, Beech AR, Mitchell IJ. Psychopathic traits are associated with reduced attention to the eyes of emotional faces among adult male non-offenders. Front Hum Neurosci. 2015; 9: 552. pmid:26500524
  62. Dienes Z. Using Bayes to get the most out of non-significant results. Front Psychol. 2014; 5: 718.
  63. Whalen PJ, Kagan J, Cook RG, Davis FC, Kim H, Polis S, McLaren DG, Somerville LH, McLean AA, Maxwell JS, Johnstone T. Human amygdala responsivity to masked fearful eye whites. Science. 2004; 306(5704): 2061. pmid:15604401
  64. Schurgin MW, Nelson J, Iida S, Ohira H, Chiao JY, Franconeri SL. Eye movements during emotion recognition in faces. J Vis. 2014; 14(13): 1–16.
  65. Kotsia I, Buciu I, Pitas I. An analysis of facial expression recognition under partial facial image occlusion. Image Vis Comput. 2008; 26(7): 1052–1067.
  66. Smith FW, Schyns PG. Smile through your fear and sadness: Transmitting and identifying facial expression signals over a range of viewing distances. Psychol Sci. 2009; 20(10): 1202–1208.
  67. Schmidt KL, Cohn JF. Human facial expressions as adaptations: Evolutionary questions in facial expression research. Am J Phys Anthropol. 2001; 33: 3–24. pmid:11786989
  68. Susskind JM, Lee DH, Cusi A, Feiman R, Grabski W, Anderson AK. Expressing fear enhances sensory acquisition. Nat Neurosci. 2008; 11(7): 843–850. pmid:18552843
  69. Blair RJR, Mitchell DGV, Peschardt KS, Colledge E, Leonard RA, Shine JH, Murray LK, Perrett DI. Reduced sensitivity to others' fearful expressions in psychopathic individuals. Pers Indiv Differ. 2004; 37: 1111–1122.
  70. Eisenbarth H, Alpers GW, Segrè D, Calogero A, Angrilli A. Categorization and evaluation of emotional faces in psychopathic women. Psychiat Res. 2008; 159: 189–195.
  71. Muñoz LC. Callous-unemotional traits are related to combined deficits in recognizing afraid faces and body poses. J Am Acad Child Psy. 2009; 48: 554–562.
  72. Marsh AA, Blair RJR. Deficits in facial affect recognition among antisocial populations: A meta-analysis. Neurosci Biobehav Rev. 2008; 32(3): 454–465. pmid:17915324
  73. Gillespie SM, Mitchell IJ, Satherley RM, Beech AR, Rotshtein P. Relations of distinct psychopathic personality traits with anxiety and fear: Findings from offenders and non-offenders. PLoS ONE. 2015; 10(11): e0143120. pmid:26569411
  74. Boraston Z, Blakemore SJ, Chilvers R, Skuse D. Impaired sadness recognition is linked to social interaction deficit in autism. Neuropsychologia. 2007; 45: 1501–1510. pmid:17196998
  75. Bediou B, Franck N, Saoud M, Baudouin JY, Tiberghien G, Daléry J, d'Amato T. Effects of emotion and identity on facial affect processing in schizophrenia. Psychiat Res. 2005; 133: 149–157.
  76. Dadds MR, El Masry Y, Wimalaweera S, Guastella AJ. Reduced eye gaze explains “fear blindness” in childhood psychopathic traits. J Am Acad Child Psy. 2008; 47: 455–463.
  77. Pelphrey KA, Sasson NJ, Reznick JS, Paul G, Goldman BD, Piven J. Visual scanning of faces in autism. J Autism Dev Disord. 2002; 32: 249–261. pmid:12199131
  78. Loughland CM, Williams LM, Gordon E. Schizophrenia and affective disorder show different visual scanning behavior for faces: A trait versus state-based distinction? Biol Psychiat. 2002; 52: 338–348. pmid:12208641
  79. Sullivan S, Ruffman T, Hutton SB. Age differences in emotion recognition skills and the visual scanning of emotion faces. J Gerontol B Psychol Sci Soc Sci. 2007; 62: 53–60.