
Eye Gaze during Observation of Static Faces in Deaf People

  • Katsumi Watanabe ,

    kw@fennel.rcast.u-tokyo.ac.jp

    Affiliations Research Center for Advanced Science and Technology, The University of Tokyo, Tokyo, Japan, Human Technology Research Institute, National Institute of Advanced Industrial Science and Technology, Tsukuba, Japan, Japan Science and Technology Agency, Saitama, Japan

  • Tetsuya Matsuda,

    Affiliation Brain Science Institute, Tamagawa University, Tokyo, Japan

  • Tomoyuki Nishioka,

    Affiliation Department of Synthetic Design, Tsukuba University of Technology, Tsukuba, Japan

  • Miki Namatame

    Affiliation Department of Synthetic Design, Tsukuba University of Technology, Tsukuba, Japan

Abstract

Knowing where people look when viewing faces provides an objective measure of the information entering the visual system as well as of the cognitive strategy involved in face perception. In the present study, we recorded the eye movements of 20 congenitally deaf (10 male and 10 female) and 23 normal-hearing (11 male and 12 female) Japanese participants while they evaluated the emotional valence of static face stimuli. While no difference was found in the evaluation scores, the eye movements during face observation differed between the participant groups. The deaf group looked at the eyes more frequently and for longer durations than at the nose, whereas the hearing group focused on the nose (or the central region of the face) more than on the eyes. These results suggest that the strategy employed to extract visual information when viewing static faces may differ between deaf and hearing people.

Introduction

It has been hypothesized that deaf people may explore and see the visual world differently from hearing people because of their adaptation to hearing loss and/or consequential changes in communication strategy. Several studies have supported the idea of altered visual functions in deaf people, especially in the distribution and processing of visual attention [1]–[6].

Facial processing is considered to be one of the fundamental visual processes necessary for successful social interaction. This is because, for sighted people, facial processing constitutes a basic skill for detecting and recognizing other people's emotional states. A few studies have shown that facial processing in deaf people might differ from that of hearing people. For example, McCullough & Emmorey [7] showed that American deaf people are better at detecting subtle differences in facial features (particularly around the eyes and mouth) and suggested that long-term experience in discriminating grammatical facial expressions used with American Sign Language (ASL) and lip-reading may contribute to enhanced detection of nuances in relevant facial features (see also [8],[9]).

Since high-spatial-resolution visual processing is possible only at the fovea, humans make a series of foveal fixations to extract visual information [10], and these fixations are closely linked with overt visual attention [11]. With regard to facial processing, eye movement studies have consistently found a systematic fixation sequence in which gaze is not directed equally to all regions of a face but only to selected parts, mainly the eyes and mouth [12]–[16].

Several studies that examined the eye movements of deaf people have found that they tend to look at facial regions to a similar extent as hearing people do [17]–[19]. For example, Muir & Richardson [17] conducted gaze-tracking experiments with deaf people watching sign language video clips and found that participants fixated mostly on the facial regions rather than on the hand movements of the signer, presumably to detect facial movements related to expression. In addition, Emmorey et al. [19] compared the eye movements of beginning and experienced signers of ASL during ASL comprehension and found differences in fixation patterns: beginning signers looked at facial regions around the signer's mouth, whereas native signers fixated more on the areas around the eyes. Although these previous studies revealed minor differences in fixation patterns between certain groups, the sequence of fixations on the eyes and mouth has been considered a universal information-extraction pattern.

Nevertheless, the idea of strictly universal facial processing has recently been challenged by several studies that investigated cultural influences on eye movements [20],[21]. Blais et al. [20] showed that Western Caucasian observers consistently fixated on the eye region and partially on the mouth area, confirming the triangular fixation pattern, whereas East Asian observers fixated more on the central region of the face (i.e., around the nose). The authors interpreted these results in the context of cultural influences on how the visual environment is processed (analytic versus holistic processing [22]), indicating that, even for face processing, the strategies employed to extract visual information are shaped by experience (see also [23]–[27]).

Since hearing loss imposes significant constraints on everyday life, it is possible that deaf people use a visual strategy different from that used by hearing people, which might lead to differences in scan paths. However, the triangular scan-path pattern during face observation has not been examined quantitatively in deaf people. The purpose of the current study was therefore to examine whether scan paths differ between deaf and hearing people. We chose an emotional valence evaluation task with static faces because it was easy for both deaf and hearing participants to understand and perform.

Because our participants were Japanese (i.e., East Asian), we expected to observe the generally dominant fixations on the central region of the face (i.e., around the nose) [20]. Beyond this, several outcomes were possible besides a simple absence of difference in eye movement patterns between deaf and hearing participants. Firstly, since deaf people communicate with sign languages, manually signed languages, and/or lip-reading, the mouth region might be particularly important for deaf people and therefore fixated more during face observation. Secondly, eye contact is an essential component of communication, and even more so in deaf communities [28]; hence, fixations on the eye region might be more pronounced in deaf people. Thirdly, visual processing in deaf individuals places more emphasis on the peripheral visual field [2]–[6]. Therefore, in addition to the general tendency toward the nose region [20],[21], deaf individuals might make more eye movements in the parafoveal and peripheral regions, irrespective of whether those regions contain the main parts of the face (the eyes and mouth).

Methods

Ethics Statement

The procedures were approved by the internal review board of the Tsukuba University of Technology, and written informed consent was obtained from all participants prior to the testing.

Participants

We recruited 24 congenitally deaf Japanese people and 29 Japanese people with normal hearing function. Due to procedural failures during the experiment and/or spontaneous withdrawal from the study, data from 10 participants were excluded. The remaining participants comprised 20 congenitally deaf Japanese people (10 males and 10 females; mean age = 21.7 years, standard deviation = 0.75) and 23 Japanese people with normal hearing function (11 males and 12 females; mean age = 24.6 years, standard deviation = 3.11). All deaf participants were undergraduate students at Tsukuba University of Technology, where one of the entrance criteria is hearing loss of 60 dB or more. The deaf participants typically used manually signed Japanese and/or lip-reading for communication. None of the hearing participants were practiced in sign languages, manually signed languages, or lip-reading.

Stimuli

Stimuli were obtained from a commercially available database (the ATR face database DV99; ATR-Promotions, Inc.) and consisted of 4 male and 4 female Japanese identities, each expressing 10 different expressions (neutral [NE], fear [FE], happiness with the mouth opened [HO], happiness with the mouth closed [HC], sadness [SD], surprise [SP], anger with the mouth opened [AO], anger with the mouth closed [AC], disgust [DI], and contempt [CT]). The images were displayed on a 17-inch LCD monitor viewed at a distance of about 55 cm, subtending 20 degrees of visual angle (27 cm) vertically and 36 degrees of visual angle (54 cm) horizontally. Each face image was centrally located and about 20 cm in height, which approximates the size of a real face. The approximate positions of the eyes and mouth were aligned across stimuli. Stimulus presentation was controlled by Tobii Studio software (ver. 2.1.12; Tobii Technology, Stockholm, Sweden).

Eye tracking

Eye movements were recorded at a sampling rate of 60 Hz with the Tobii T-60 eye tracker (Tobii Technology), which has an average gaze position error of 0.5 degrees and near-linear output over the range of the monitor used. Although viewing was binocular, only the dominant eye of each participant was tracked. A manual calibration of eye fixations was conducted at the beginning of each session using the 9-point fixation procedure implemented in the Tobii Studio software, and drift correction was performed on each trial.
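
Fixations themselves were classified by the Tobii Studio software. Purely for illustration, the sketch below shows a minimal dispersion-based (I-DT) fixation detector of the kind commonly applied to 60 Hz gaze samples; the threshold values and the sample format are hypothetical and are not the settings of the Tobii fixation filter.

```python
# Minimal dispersion-threshold (I-DT) fixation detector, for illustration only.
# Assumes gaze samples recorded at 60 Hz as (timestamp_ms, x_deg, y_deg) tuples.
# Threshold values below are hypothetical, not those of the Tobii Studio filter.

def detect_fixations(samples, dispersion_max=1.0, min_duration_ms=100):
    """Return a list of (start_ms, end_ms, centroid_x, centroid_y) fixations."""

    def dispersion(pts):
        xs = [p[1] for p in pts]
        ys = [p[2] for p in pts]
        return (max(xs) - min(xs)) + (max(ys) - min(ys))

    fixations = []
    i, n = 0, len(samples)
    while i < n:
        # Grow an initial window spanning at least the minimum duration.
        j = i
        while j < n and samples[j][0] - samples[i][0] < min_duration_ms:
            j += 1
        if j >= n:
            break
        if dispersion(samples[i:j + 1]) <= dispersion_max:
            # Expand the window while dispersion stays below threshold.
            while j + 1 < n and dispersion(samples[i:j + 2]) <= dispersion_max:
                j += 1
            window = samples[i:j + 1]
            cx = sum(p[1] for p in window) / len(window)
            cy = sum(p[2] for p in window) / len(window)
            fixations.append((window[0][0], window[-1][0], cx, cy))
            i = j + 1
        else:
            # Shift the window start by one sample and try again.
            i += 1
    return fixations
```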

Procedure

Participants were informed that they would be presented with a series of face pictures and were asked to evaluate the emotional valence of each face stimulus shown. Before each trial, participants were instructed to fixate on a cross at the center of the screen to allow an automatic drift correction. The participant initiated each trial by pressing the space bar. After a 2-s fixation period, a face was presented for 3 s. The evaluation display then appeared, and participants used a computer mouse to click on the emotional valence of the face in the picture (on a 7-point positive-to-negative scale, with 1 being most positive). The evaluation was not speeded. After the mouse click, the next trial could be initiated. A session consisted of 3 training trials with neutral expressions followed by 144 test trials. In the test trials, each combination of the 8 identities and the 9 emotional expressions (all except NE) was presented once (72 trials), and the neutral expression of each of the 8 identities was repeated 9 times (72 trials). The presentation order of the face stimuli was randomized.
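
To make the resulting design concrete (3 training trials plus 144 randomized test trials), the sketch below builds such a trial list. The identity and expression labels mirror those given above; the variable names and identity codes are ours and purely illustrative.

```python
# Sketch of the trial list implied above: 8 identities x 9 emotional expressions
# shown once each (72 trials) plus 8 identities x neutral (NE) repeated 9 times
# (72 trials), presented in random order. Names and identity codes are illustrative.
import random

identities = [f"id{i}" for i in range(1, 9)]             # 4 male + 4 female models
emotional = ["FE", "HO", "HC", "SD", "SP", "AO", "AC", "DI", "CT"]

test_trials = [(ident, expr) for ident in identities for expr in emotional]   # 72
test_trials += [(ident, "NE") for ident in identities for _ in range(9)]      # 72
random.shuffle(test_trials)
assert len(test_trials) == 144

training_trials = [(random.choice(identities), "NE") for _ in range(3)]
session = training_trials + test_trials
```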

Data analysis

The rating scores of emotional valence were first averaged for each expression by each participant. The mean rating scores were then grouped by the combination of hearing loss and participants' gender. A 3-way analysis of variance (ANOVA) was conducted to assess statistical significance, with hearing loss (deaf versus hearing) and participants' gender (male versus female) as between-group factors and facial expression of the stimulus as a within-group factor.
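
As an illustration of this kind of analysis, the sketch below runs a mixed ANOVA with the pingouin package. It is deliberately simplified to one between-subject factor (hearing loss) and the within-subject factor (expression), since pingouin's mixed_anova accepts a single between factor; the DataFrame layout and column names are assumptions, not the authors' actual pipeline.

```python
# Simplified sketch of the rating analysis using pingouin's mixed ANOVA
# (one between factor only; the full design also crossed participants' gender).
# The long-format columns below are assumptions, not the authors' code.
import pandas as pd
import pingouin as pg

# One row per participant x expression, with the mean rating already computed.
ratings = pd.read_csv("mean_ratings_long.csv")   # columns: participant, group, expression, rating

aov = pg.mixed_anova(data=ratings, dv="rating",
                     within="expression", subject="participant",
                     between="group")            # group: 'deaf' vs 'hearing'
print(aov.round(3))
```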

For each participant, we calculated the time spent fixating (fixation duration) and the number of fixations (fixation frequency) on the following areas of interest (AOIs): the eyes, the nose, and the mouth. AOIs were defined for each face (Figure 1). To control for differences in AOI size, we divided the fixation duration on each AOI by the area of that AOI and rescaled the resulting values so that they summed to 1 for each trial (relative fixation duration). The same normalization was applied to fixation frequency to obtain the relative fixation frequency. Relative fixation duration and relative fixation frequency on the different AOIs were averaged for each expression within each participant. The averages were then grouped by the combination of hearing loss and participants' gender, separately for each AOI. The relative fixation duration on the AOIs was entered into a 4-way ANOVA, with hearing loss and participants' gender as between-group factors and facial expression and AOI as within-group factors. The same ANOVA was conducted on the relative fixation frequency.
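
A minimal sketch of this normalization is shown below. It assumes per-trial summaries of raw fixation duration (or count) for each AOI; the AOI areas, units, and function name are hypothetical and for illustration only.

```python
# Sketch of the AOI normalization described above: divide each AOI's raw
# fixation duration (or count) by the AOI area, then rescale so the three
# relative values sum to 1 within a trial. Values and names are illustrative.

def relative_measures(raw, aoi_areas):
    """raw and aoi_areas are dicts keyed by 'eyes', 'nose', 'mouth'."""
    density = {aoi: raw[aoi] / aoi_areas[aoi] for aoi in raw}
    total = sum(density.values())
    if total == 0:
        return None                    # trial with no gaze on any AOI: excluded
    return {aoi: value / total for aoi, value in density.items()}

# Example trial: fixation durations in ms and AOI areas in pixels (hypothetical).
durations = {"eyes": 900, "nose": 1200, "mouth": 300}
areas = {"eyes": 5200, "nose": 3600, "mouth": 4100}
print(relative_measures(durations, areas))   # relative fixation durations, sum = 1
```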

Figure 1. Example of areas of interest (AOIs).

For each face stimulus, we defined three AOIs: the eyes, the nose, and the mouth. To control for differences in AOI size, fixation duration and fixation frequency were normalized by the AOI area (relative fixation duration and relative fixation frequency) so that the relative fixation durations and the relative fixation frequencies each summed to 1 for each trial.

https://doi.org/10.1371/journal.pone.0016919.g001

Results

Evaluation of emotional valence

The averaged rating scores of emotional valence are shown in Figure 2. Face stimuli with happy expressions (HO and HC) were evaluated positively, while face stimuli with fear (FE), sad (SD), angry (AO and AC), disgust (DI), and contempt (CT) expressions tended to be rated negatively. Faces with neutral (NE) and surprised (SP) expressions were evaluated, on average, neither positively nor negatively. The three-way ANOVA showed a significant main effect of expression (F(9,351) = 597.7, P<0.001), whereas the main effects of hearing loss and participants' gender were not significant (F(1,39) = 0.3, P = 0.60 and F(1,39) = 1.2, P = 0.29, respectively). No interaction reached significance (F<1.3, P>0.23). These results suggest that the participants evaluated the emotional valence of the face stimuli consistently, irrespective of hearing loss and participants' gender.

Figure 2. Mean rating scores of emotional valence as a function of expression in face stimuli.

The face stimuli with happy expressions (HO and HC) were evaluated positively while the face stimuli with sad (SD), angry (AO and AC), disgust (DI), and contempt (CT) expressions were rated negatively. The faces with neutral (NE) and surprised (SP) expressions were evaluated neither positively nor negatively. NE = neutral; HO = happiness with the mouth opened; FE = fear; AC = anger with the mouth closed; CT = contempt; DI = disgust; SD = sadness; HC = happiness with the mouth closed; AO = anger with the mouth opened; SP = surprise.

https://doi.org/10.1371/journal.pone.0016919.g002

Eye movements

Trials in which no gaze fell on any AOI (i.e., the eyes, nose, or mouth) were excluded from the following analyses; these accounted for 0.8%, 0.4%, 0.4%, and 0.5% of trials for the deaf male, deaf female, hearing male, and hearing female groups, respectively. There was no significant difference in the number of discarded trials (Fisher's exact test; P = 0.32). Figure 3 depicts the relative fixation duration mapped onto an example image of a neutral face, summed separately for deaf participants (Figure 3a), normal-hearing participants (Figure 3b), female participants (Figure 3c), and male participants (Figure 3d). The figure suggests that gaze patterns differed among the participant groups.

Figure 3. Total fixation duration mapped onto an example face image: (a) deaf participants, (b) hearing participants, (c) female participants, and (d) male participants.

Red regions represent the places where the participants' gaze dwelled longer. The fixation patterns differed among the participant groups.

https://doi.org/10.1371/journal.pone.0016919.g003

Figure 4a shows relative fixation duration as a function of AOI, averaged over all facial expressions within the different combinations of participant groups. In general, participants tended to fixate on the eyes and nose longer than on the mouth. In addition, deaf participants looked at the eyes longer than at the nose, whereas normal-hearing participants gazed at the nose longer than at the eyes (Figure 4b). The tendency to fixate longer on the eyes appeared to be stronger in female than in male participants (Figure 4c). Figure 5 shows the relative fixation duration for the different AOIs as a function of facial expression, averaged over all participants. The differential relative fixation durations for the different AOIs were apparent; that is, fixation duration on the eyes and the nose was longer than on the mouth. In addition, faces with neutral expressions appeared to elicit longer fixation on the eyes at the expense of shorter fixation duration on the mouth.

Figure 4. Relative fixation duration.

(a) Relative fixation duration as a function of area of interest averaged over all facial expressions within different combinations of participant groups. Participants tended to fixate on the eyes and nose longer than on the mouth. (b) Relative fixation duration compared between deaf and hearing participants. The deaf participants looked at the eyes longer than the nose whereas the hearing participants gazed at the nose longer than the eyes. (c) Relative fixation duration compared between female and male participants. The female participants tended to fixate on the eyes longer than did the male participants.

https://doi.org/10.1371/journal.pone.0016919.g004

Figure 5. Relative fixation duration for the different areas of interest as a function of facial expression, averaged over all the participants.

The faces with neutral expression led to longer fixation duration on the eyes. NE = neutral; HO = happiness with the mouth opened; FE = fear; AC = anger with the mouth closed; CT = contempt; DI = disgust; SD = sadness; HC = happiness with the mouth closed; AO = anger with the mouth opened; SP = surprise.

https://doi.org/10.1371/journal.pone.0016919.g005

The four-way ANOVA revealed significant main effects of participants' gender (F(1,39) = 8.7, P<0.01; female > male), AOI (F(2,78) = 26.8, P<0.001; post hoc Ryan's method, eyes = nose > mouth, P<0.001), and expression (F(9,351) = 5.6, P<0.001). Significant interactions were found between hearing loss and AOI (F(2,78) = 5.3, P<0.01) and between expression and AOI (F(18,702) = 2.8, P<0.001). There were also significant interactions between hearing loss and expression (F(9,351) = 2.0, P<0.05) and among participants' gender, expression, and AOI (F(18,702) = 1.7, P<0.05). Analyses of simple main effects indicated that the normal-hearing group looked at the nose longer than the eyes, whereas the deaf group tended to look at the eyes more than the nose (P<0.05). The interaction between expression and AOI was mainly due to participants fixating longer on the eyes of neutral faces than on those of faces with other expressions (P<0.05). The relative fixation frequency data corroborated the results of the relative fixation duration (Figures 6 and 7).

Figure 6. Relative fixation frequency.

(a) Relative fixation frequency as a function of area of interest averaged over all facial expressions within different combinations of participant groups. (b) Relative fixation frequency compared between deaf and hearing participants. (c) Relative fixation frequency compared between female and male participants. The results for fixation frequency corroborated those of fixation duration.

https://doi.org/10.1371/journal.pone.0016919.g006

Figure 7. Relative fixation frequency for the different areas of interest as a function of facial expression averaged over all the participants.

The results for fixation frequency corroborated those of fixation duration. NE = neutral; HO = happiness with the mouth opened; FE = fear; AC = anger with the mouth closed; CT = contempt; DI = disgust; SD = sadness; HC = happiness with the mouth closed; AO = anger with the mouth opened; SP = surprise.

https://doi.org/10.1371/journal.pone.0016919.g007

An additional analysis was performed to test whether fixation duration and fixation frequency outside the AOIs differed among the participant groups. Whereas significant main effects of participants' gender were found (male > female; fixation duration, F(1,39) = 13.6, P<0.01; fixation frequency, F(1,39) = 15.1, P<0.01), no statistically significant difference was observed between deaf and hearing participants (fixation duration, F(1,39) = 0.14, P = 0.70; fixation frequency, F(1,39) = 1.14, P = 0.29), corroborating the results of the 4-way ANOVA.

Discussion

In the present study we examined possible differences in the pattern of eye movements between congenitally deaf and normal-hearing Japanese individuals while they evaluated the emotional valence of static faces. The results can be summarized as follows: (1) the emotional valence of the face stimuli was evaluated consistently irrespective of hearing loss and participants' gender; (2) participants fixated (in terms of frequency and duration) on the eyes and nose more than on the mouth (the main effect of AOI), confirming an overall fixation dominance on the eyes and nose; (3) female participants tended to look at the main facial parts (i.e., the eyes, nose, and mouth) more than did male participants (the main effect of participants' gender); (4) faces with neutral expressions induced more fixations on the eyes than did faces with other expressions (the interaction between expression and AOI); and (5) deaf participants looked at the eyes more than the nose, whereas normal-hearing participants tended to look more at the nose (the interaction between hearing loss and AOI).

It has been reported that females have an advantage in decoding nonverbal emotion [29]–[35] and that females look more at the main parts of the face than do males, with particular emphasis on the eyes [34],[35]. The main effect of participants' gender supported this notion. Although the interaction between participants' gender and AOI and the interaction among participants' gender, hearing loss, and AOI did not reach significance, our data clearly showed a tendency for the female participants to fixate on the eyes (Figure 4c and Figure 6c). Thus, the present results may be taken as evidence supporting a gender difference in fixation patterns for faces with emotional expressions [34],[35].

Irrespective of participant group, faces with neutral expressions tended to produce more fixations on the eyes than did faces showing other expressions. Faces with emotional expressions have distinct features that help observers interpret the expression. Neutral faces, on the other hand, are ambiguous and lack the visual cues needed to comprehend emotion. It has been suggested that understanding and communicating emotion depend greatly on visual processing of the eye region [36]–[39]. Therefore, it is possible that people fixate more on the eyes of ambiguous neutral faces in an attempt to discern emotional cues. However, it should be noted that the present experiment contained a possible confound: the neutral faces were repeated 9 times whereas the other expressions were presented only once, so the interaction between expression and AOI may be due to repetition rather than expression. Further investigations are warranted to examine whether less emotional facial expressions indeed lead to more fixations on the eye region. Specifically, a future study should avoid the possible confound between viewing less emotional facial expressions and repeated viewing.

The main focus of the present study was to investigate potential differences in fixation patterns between deaf and hearing participants. The hearing participants in the present study looked mostly at the nose (i.e., central) region rather than at the eye region. Since all participants in the current study were Japanese, this may be attributed to a cultural influence on eye movements. Blais et al. [20] reported that Western Caucasian observers consistently fixated the eye region, and partially the mouth, whereas East Asian observers fixated more on the central region of the face to extract information from faces. They hypothesized that this difference is due to the social norm in East Asian cultures that direct or excessive eye contact may be considered rude [40] and to a difference in cognitive strategy (a holistic versus analytic approach to visual information [22],[41]). In contrast, our Japanese deaf participants looked mostly at the eye region, closer to the fixation pattern of the Western Caucasian observers in Blais et al. [20]. It has been reported that in deaf communities eye contact is vital for communication because avoiding eye contact disrupts communication more profoundly than it does in hearing communities [28]; this holds true for the Japanese deaf community. Therefore, it is possible that the increased fixation on the eye region in our Japanese deaf participants reflects their communication strategy. In this sense, the present study may be taken as an extension of Blais et al. [20], showing that living in a specific community (more specifically, the deaf community at Tsukuba University of Technology in Japan) might alter how we look at faces (see also [23]–[27]).

The underlying mechanism for the differential scan paths between deaf and hearing individuals remains to be clarified. However, one possible mechanism is an altered distribution and processing of visual attention [2]–[5]. Deaf individuals are more distracted by visual information in the parafovea and periphery [5]. Since there was no difference in fixation duration or frequency outside the AOIs and no increase in fixations on the mouth region, the present findings cannot be explained solely by an attentional emphasis on peripheral processing. However, it is still possible that altered peripheral visual attention and the scrutinizing strategy for faces interact to produce the differential scan paths.

Limitations of the present study

Although the difference in fixation pattern was clear, it should be noted that the present study has considerable limitations. One limitation is that the stimuli used in the present study were static rather than dynamic. Many studies of emotional expression have used static face stimuli. Yet facial expressions are highly dynamic, and static stimuli thus represent unnatural snapshots of them. Recent studies on dynamic facial expressions have shown that visual processes for facial expressions are essentially tuned to dynamic information [36],[42],[43]. Evidence supporting this notion comes from facilitative effects of dynamic presentation on facial processing [44]–[50] and enhanced neural activity for dynamic, as opposed to static, face stimuli [51]–[53]. Therefore, it is likely that the pattern of results would differ if dynamic stimuli were used. In particular, the relatively few fixations on the mouth region might be due to the use of static face stimuli. It has been shown that the mouth region conveys useful information for emotion discrimination [54]–[58], and this seems to be even more the case with dynamic face stimuli (e.g., [59]).

Another limitation stems from the use of an emotional valence evaluation task. Many previous studies have examined scan paths during emotion discrimination and identification (e.g., [56],[57]), but few studies have employed an evaluation task of emotional valence. Therefore, the present results cannot be compared directly with those of previous studies. Also, in order to elucidate the mechanisms of valence evaluation and emotional processing, it is important to consider the relation between the time course of the evaluation process and eye movements. The face stimuli used in the present study varied in the visual information available for emotional valence evaluation, which in turn would impose different demands for different face stimuli. Since responses were not speeded, we do not know when participants reached their decisions. Therefore, the eye movement pattern may reflect pre-decision processes, post-decision processes, or both.

The final limitation is the demographic peculiarity of the participants. It is possible that the use of sign language (Japanese Sign Language; JSL) leads to enhanced attention to the eye region, because changes in eye configuration convey various syntactic distinctions and grammatical information in JSL as in ASL [60],[61]. However, until around 2002, most Japanese schools for the deaf emphasized oral education, i.e., teaching through lip-reading. Although manually signed Japanese (a signed form of the Japanese language) has recently started to be used in schools for the deaf, even now JSL is not officially taught. Therefore, it is difficult to infer whether the difference in fixation pattern is due to the hearing loss itself, to extended use of sign language, and/or to the specific historical situation of Japanese deaf education.

Despite the above limitations, the present study demonstrated differential scan paths during the observation of static face stimuli between deaf and hearing participants. Further investigations, preferably with speeded responses or confidence/difficulty ratings of the decision, with dynamic stimuli, and with cross-cultural comparisons, will shed light on how and to what extent hearing loss influences how we look at faces and interpret others.

Acknowledgments

The use of materials generated from the ATR face database DV99 (Figure 1 and Figure 2) was licensed by ATR-Promotions, Inc.

Author Contributions

Conceived and designed the experiments: KW TM MN. Performed the experiments: MN. Analyzed the data: KW TN MN. Contributed reagents/materials/analysis tools: TM TN. Wrote the manuscript: KW MN.

References

  1. Tharpe AM, Ashmead D, Sladen DP, Ryan HA, Rothpletz AM (2008) Visual attention and hearing loss: Past and current perspectives. Journal of the American Academy of Audiology 19: 741–747.
  2. Stivalet P, Moreno Y, Richard J, Barraud PA, Raphel C (1998) Differences in visual search tasks between congenitally deaf and normally hearing adults. Cognitive Brain Research 6: 227–232.
  3. Proksch J, Bavelier D (2002) Changes in the spatial distribution of visual attention after early deafness. Journal of Cognitive Neuroscience 14: 687–701.
  4. Bavelier D, Dye MWG, Hauser PC (2006) Do deaf individuals see better? Trends in Cognitive Sciences 10: 512–518.
  5. Dye MWG, Hauser PC, Bavelier D (2008) Visual skills and cross-modal plasticity in deaf readers: Possible implications for acquiring meaning from print. Annals of the New York Academy of Sciences 1145: 71–82.
  6. Dye MWG, Hauser PC, Bavelier D (2009) Is visual selective attention in deaf individuals enhanced or deficient? The case of the useful field of view. PLoS ONE 4(5): e5640.
  7. McCullough S, Emmorey K (1997) Face processing by deaf ASL signers: evidence for expertise in distinguishing local features. Journal of Deaf Studies and Deaf Education 2: 212–222.
  8. Bettger J, Emmorey K, McCullough S, Bellugi U (1997) Enhanced facial discrimination: effects of experience with American sign language. Journal of Deaf Studies and Deaf Education 2: 223–233.
  9. Kubota Y, Quérel C, Pelion F, Laborit J, Laborit MF, et al. (2003) Facial affect recognition in pre-lingually deaf people with schizophrenia. Schizophrenia Research 61: 265–270.
  10. Yarbus AL (1965) Role of Eye Movements in the Visual Process. Moscow: Nauka.
  11. Findlay JM, Gilchrist ID (2003) Active Vision - The Psychology of Looking and Seeing. Oxford, UK: Oxford University Press.
  12. Walker-Smith G, Gale A, Findlay J (1977) Eye movement strategies involved in face perception. Perception 6: 313–326.
  13. Janik SW, Wellens AR, Goldberg ML, Dell'Osso LF (1978) Eyes as the center of focus in the visual examination of human faces. Perceptual and Motor Skills 47: 857–858.
  14. Groner R, Walder F, Groner M (1984) Looking at faces: local and global aspects of scanpaths. In: Gale AG, Johnson F, editors. Theoretical and applied aspects of eye movements research. Amsterdam: Elsevier. pp. 523–533.
  15. Henderson JM, Williams CC, Falk RJ (2005) Eye movements are functional during face learning. Memory & Cognition 33: 98–106.
  16. Armann R, Bülthoff I (2009) Gaze behavior in face comparison: The roles of sex, task, and symmetry. Attention, Perception & Psychophysics 71(5): 1107–1126.
  17. Muir LJ, Richardson IEG (2005) Perception of sign language and its application to visual communications for deaf people. Journal of Deaf Studies and Deaf Education 10: 390–401.
  18. De Filippo CL, Lamsing CR (2006) Eye fixations of deaf and hearing observers in simultaneous communication perception. Ear & Hearing 27(4): 331–352.
  19. Emmorey K, Thompson R, Colvin R (2009) Eye gaze during comprehension of American sign language by native and beginning signers. Journal of Deaf Studies and Deaf Education 14: 237–243.
  20. Blais C, Jack RE, Scheepers C, Fiset D, Caldara R (2008) Culture shapes how we look at faces. PLoS ONE 3: e3022.
  21. Jack RE, Blais C, Scheepers C, Schyns PG, Caldara R (2009) Cultural confusions show that facial expressions are not universal. Current Biology 19: 1543–1548.
  22. Chua HF, Boland JE, Nisbett RE (2005) Cultural variation in eye movements during scene perception. Proceedings of the National Academy of Sciences 102: 12629–12633.
  23. Tanaka JW, Kiefer M, Bukach CM (2004) A holistic account of the own-race effect in face recognition: Evidence from a cross-cultural study. Cognition 93: B1–B9.
  24. Michel C, Caldara R, Rossion B (2006) Same-race faces are perceived more holistically than other-race faces. Visual Cognition 14: 55–73.
  25. Michel C, Rossion B, Han J, Chung CS, Caldara R (2006) Holistic processing is finely tuned for faces of our own race. Psychological Science 17: 608–615.
  26. Yuki M, Maddux WW, Masuda T (2007) Are the windows to the soul the same in the East and West? Cultural differences in using the eyes and mouth as cues to recognize emotions in Japan and the United States. Journal of Experimental Social Psychology 43: 303–311.
  27. de Heering A, Rossion B (2008) Prolonged visual experience in adulthood modulates holistic face perception. PLoS ONE 3: e2317.
  28. Mindess A (2006) Reading between the Signs: Intercultural Communication for Sign Language Interpreters. Boston, MA: Intercultural Press.
  29. Hall JA (1978) Gender effects in decoding nonverbal cues. Psychological Bulletin 85: 845–857.
  30. Hall JA (1984) Nonverbal sex differences: Communication accuracy and expressive style. Baltimore: Johns Hopkins University Press.
  31. Thayer JF, Johnsen BH (2000) Sex differences in judgement of facial affect: A multivariate analysis of recognition errors. Scandinavian Journal of Psychology 41: 243–246.
  32. Hall JA, Matsumoto D (2004) Gender differences in judgments of multiple emotions from facial expressions. Emotion 4: 201–206.
  33. Rahman Q, Wilson GD, Abrahams S (2004) Sex, sexual orientation, and identification of positive and negative facial affect. Brain and Cognition 54: 179–185.
  34. Hall JK, Hutton SB, Morgan MJ (2009) Sex differences in scanning faces: Does attention to the eyes explain female superiority in facial expression recognition? Cognition & Emotion 24: 629–637.
  35. Vassallo S, Cooper SL, Douglas JM (2009) Visual scanning in the recognition of facial affect: Is there an observer sex difference? Journal of Vision 9: 1–10.
  36. Ekman P, Friesen WV (1975) Unmasking the face: A guide to recognizing emotions from facial clues. Englewood Cliffs, NJ: Prentice Hall.
  37. Adolphs R, Gosselin F, Buchanan TW, Tranel D, Schyns P, et al. (2005) A mechanism for impaired fear recognition after amygdala damage. Nature 433: 68–72.
  38. Adolphs R (2006) Perception and emotion: How we recognize facial expressions. Current Directions in Psychological Science 15: 222–226.
  39. Calvo MG, Nummenmaa L (2008) Detection of emotional faces: Salient physical features guide effective visual search. Journal of Experimental Psychology: General 13: 471–494.
  40. Argyle M, Cook M (1976) Gaze and Mutual Gaze. Cambridge, UK: Cambridge University Press.
  41. Nisbett RE, Miyamoto Y (2005) The influence of culture: holistic versus analytic perception. Trends in Cognitive Sciences 9: 467–473.
  42. Humphreys GW, Donnelly N, Riddoch MJ (1993) Expression is computed separately from facial identity, and it is computed separately for moving and static faces: Neuropsychological evidence. Neuropsychologia 31: 173–181.
  43. Roy C, Blais C, Fiset D, Gosselin F (2010) Visual information extraction for static and dynamic facial expression of emotions: an eye-tracking experiment. Journal of Vision 10: 531.
  44. Frijda NH (1953) The understanding of facial expression of emotion. Acta Psychologica 9: 294–362.
  45. Berry DS (1990) What can a moving face tell us? Journal of Personality and Social Psychology 58: 1004–1014.
  46. Bruce V, Valentine T (1988) When a nod's as good as a wink: The role of dynamic information in facial recognition. In: Gruneberg MM, Morris PE, Sykes RN, editors. Practical aspects of memory: Current research and issues (Vol. 1, pp. 169–174). New York: John Wiley & Sons.
  47. Harwood NK, Hall LJ, Shinkfield AJ (1999) Recognition of facial emotional expressions from moving and static displays by individuals with mental retardation. American Journal of Mental Retardation 104: 270–278.
  48. Lander K, Christie F, Bruce V (1999) The role of movement in the recognition of famous faces. Memory and Cognition 27: 974–985.
  49. Wehrle T, Kaiser S, Schmidt S, Scherer KR (2000) Studying the dynamics of emotional expression using synthesized facial muscle movements. Journal of Personality and Social Psychology 78: 105–119.
  50. Sato W, Yoshikawa S (2004) The dynamic aspects of emotional facial expressions. Cognition and Emotion 18: 701–710.
  51. Sato W, Kochiyama T, Yoshikawa S, Naito E, Matsumura M (2004) Enhanced neural activity in response to dynamic facial expressions of emotion: An fMRI study. Cognitive Brain Research 20: 81–91.
  52. Schultz J, Pilz KS (2009) Natural facial motion enhances cortical responses to faces. Experimental Brain Research 194: 465–475.
  53. Trautmann SA, Fehr T, Herrmann M (2009) Emotions in motion: Dynamic compared to static facial expressions of disgust and happiness reveal more widespread emotion-specific activations. Brain Research 1248: 100–115.
  54. Hanawalt NG (1944) The role of the upper and the lower parts of the face as the basis for judging facial expressions: II. In posed expressions and “candid camera” pictures. The Journal of General Psychology 31: 23–36.
  55. Ekman P, Friesen WV (1971) Constants across cultures in the face and emotion. Journal of Personality and Social Psychology 17: 124–129.
  56. Sullivan LA, Kirkpatrick SW (1996) Facial interpretation and component consistency. Genetic, Social, and General Psychology Monographs 122: 389–404.
  57. Rutherford MD, Towns AM (2008) Scan path differences and similarities during emotion perception in those with and without autism spectrum disorders. Journal of Autism & Developmental Disorders 38: 1371–1381.
  58. Schyns PG, Bonnar L, Gosselin F (2002) Show me the features! Understanding recognition from the use of visual information. Psychological Science 13: 402–409.
  59. Koda T, Ruttkay Z, Nakagawa Y, Tabuchi K (2010) Cross-cultural study on facial regions as cues to recognize emotions of virtual agents. Lecture Notes in Computer Science 6259: 16–27.
  60. Liddell S (1980) American sign language syntax. The Hague: Mouton Publishers.
  61. Nakamura K (2006) Deaf in Japan: Signing and the Politics of Identity. Ithaca: Cornell University Press.