
The Processing of Human Emotional Faces by Pet and Lab Dogs: Evidence for Lateralization and Experience Effects

  • Anjuli L. A. Barber ,

    anjuli.barber@vetmeduni.ac.at

    Affiliation Comparative Cognition, Messerli Research Institute, University of Veterinary Medicine Vienna, Medical University of Vienna and University of Vienna, Vienna, Austria

  • Dania Randi,

    Affiliation Comparative Cognition, Messerli Research Institute, University of Veterinary Medicine Vienna, Medical University of Vienna and University of Vienna, Vienna, Austria

  • Corsin A. Müller,

    Affiliation Comparative Cognition, Messerli Research Institute, University of Veterinary Medicine Vienna, Medical University of Vienna and University of Vienna, Vienna, Austria

  • Ludwig Huber

    Affiliation Comparative Cognition, Messerli Research Institute, University of Veterinary Medicine Vienna, Medical University of Vienna and University of Vienna, Vienna, Austria

Abstract

Of all non-human animals, dogs are very likely the best decoders of human behavior. In addition to their high sensitivity to human attentive states and to ostensive cues, they are able to distinguish between individual human faces and even between human facial expressions. However, so far little is known about how they process human faces and to what extent this processing is influenced by experience. Here we present an eye-tracking study with dogs from two different living environments with varying experience with humans: pet and lab dogs. The dogs were shown pictures of familiar and unfamiliar human faces expressing four different emotions. The results, extracted from several different eye-tracking measurements, revealed pronounced differences in the face processing of pet and lab dogs, indicating an influence of the amount of exposure to humans. In addition, there was some evidence for influences of both the familiarity and the emotional expression of the face, and strong evidence for a left gaze bias. These findings, together with recent evidence for the dog's ability to discriminate human facial expressions, indicate that dogs are sensitive to some emotions expressed in human faces.

Introduction

An important perceptual-cognitive challenge for almost all animals is to find a compromise between 'lumping and splitting' when confronted with the vast amount of information arriving at their senses [1]. Animals need to categorize, i.e., to treat an object as belonging to a general class; beyond categorizing objects, they also need to recognize, i.e., to treat different images as depicting the same object, such as a specific face, despite changes in viewing conditions. The brains of many animals solve both tasks naturally and effortlessly, with an efficiency that is difficult to reproduce in computational models and artificial systems. Still, how this is accomplished is far from being fully understood.

An enormously rich source of information, and an important category of visual stimuli for animals in all major vertebrate taxa, is the face [2]. In many species faces convey, among other things, information about the direction of attention, age, gender, attractiveness and current emotion of the individual (for reviews see [3–5]). Almost 50 years ago, psychologists demonstrated that facial expressions are reliably associated with certain emotional states [6,7]. This in turn can inform a receiver's behavioral decisions concerning, e.g., cooperation, competition, or the consolidation and formation of relationships. Some primates, for instance, seem to have evolved special face-reading abilities owing to their complex social lives (e.g. [8–10]).

While the categorization, interpretation and identification of conspecific faces is likely a consequence of social life, which may have resulted in neural specialization for faces [11], it is a further challenge to achieve at least some of these abilities with the faces of heterospecifics. The configuration of the face, the underlying facial muscles and the resulting expressions differ more or less from one's own, depending on how taxonomically distant the other species is [12,13]. Nevertheless, a variety of animals have been shown to identify and categorize faces and also emotions of heterospecifics (e.g. macaques [14], sheep [15], horses [16]). Several investigations of con- and heterospecific face processing suggest learned aspects in the information transfer and its speed (cf. face expertise [17–22]): individuals recognize and discriminate best between faces similar to those most often seen in their environment. For instance, the influence of individual experience with the other species has been shown for urban-living birds such as magpies [23] and pigeons [24]. Especially early exposure to faces facilitates face discrimination [25,26]. Although some individuals are able to learn about heterospecific faces later in life as well (e.g. chimpanzees [27], rhesus macaques [28]), they do not reach the same high level of competence [12,21]. It remains an open question, however, whether the improved face-reading abilities that follow early-life exposure are caused solely by an early sensitivity for faces (innate mechanisms) or simply by the larger amount of experience (learned responses).

An ideal model for studying such questions about experience with heterospecific faces is the domestic dog. Pet dogs live in an enduring, intimate relationship with humans, often from puppy age on. As reviewed in several recent books [29–32], dogs have developed specific socio-cognitive capacities to communicate and form relationships with humans. Most likely caused by a mixture of phylogenetic (domestication) and ontogenetic (experience) factors, dogs living in human households show high levels of attentiveness towards human behavior (reviewed in [33]), follow human gestures like no other animal (e.g. [34]), exhibit a high sensitivity to human ostensive signals such as eye contact, name calling and specific intonation [35], and show an increased readiness to look at the human face [36]. The latter is already present in dog puppies, in contrast to hand-reared wolf puppies ([37]; see also [38]). By monitoring human faces, dogs seem to obtain a continuous stream of social information ranging from communicative gestures to emotional and attentive states. Even if this does not mean that dogs read our minds, but only that they are exquisite readers of our behavior [39], they evaluate humans on the basis of direct experiences [40] and are sensitive to what humans can see (a form of perspective-taking; reviewed by Bräuer [41]).

A special sensitivity for faces in dogs is further supported by two recent findings. An fMRI study in awake dogs revealed that a region in the temporal cortex is face selective (it responds to faces of humans and dogs but not to everyday objects [42]). Dogs have also been found to show a gaze bias towards faces [43,44]. Still, there is mixed evidence on lateralization towards conspecifics and heterospecifics. Dogs showed a human-like left gaze bias toward human faces but not toward monkey or dog faces [44]. This left gaze bias was exhibited only toward negative and neutral expressions of human faces, however, not toward positive ones [43]. It has been proposed that the right hemisphere is responsible for the expression of emotions and that the assessment of emotions therefore leads to a left gaze bias towards such stimuli (right hemisphere model [45,46]). In contrast, the valence model [47,48] suggests that the left hemisphere mainly processes positive emotions while the right hemisphere mainly processes negative emotions. So far, a variety of species (e.g. humans [49,50], apes [51], monkeys [52], sheep [53], dolphins [54], dogs [43,55–57]) have been shown to exhibit lateralization towards emotive stimuli (reviewed by Adolphs [58] and Salva [59]).

The dog's following of the human gaze and, more generally, its visual scanning of faces require the application of advanced technical tools. The most commonly used equipment in human psychophysical laboratories is the eye tracker [60,61]. Besides the gaze-following study by Téglás and colleagues [62], two studies have so far used this technology to investigate the looking behavior of dogs. In a proof-of-concept study, Somppi and colleagues [63] showed that dogs look more frequently at the informative parts of a picture and direct more attention to conspecifics and humans than to objects. In a follow-up study addressing looking at faces, the researchers demonstrated that dogs pay more attention to the eye region than to the rest of the face [64]. The finding that mean fixations were longer for conspecific than for heterospecific (human) faces suggests that dogs process faces in a species-specific manner. The eye tracker also revealed subtle differences in looking patterns between dogs living in different human environments: pet dogs and kennel dogs. Still, both types of dogs showed a preference for their own species, familiar pictures and the eye region. Overall, these findings suggest that the living conditions of dogs affect the way they look at us. However, it is not clear whether this is due to the limited time spent in close relationship with humans or to the development of human face processing during early life.

This eye-tracking study addresses this question by comparing how pet and lab dogs look at human faces. For both types of dogs the human environment during their whole life is known, especially the difference in the amount of exposure to humans and the quality of the human-dog relationship. A further way to investigate the role of individual experience with human faces is to compare the dogs' looking at familiar and unfamiliar faces. It is already known that dogs are able to discriminate between familiar and unfamiliar faces, either implicitly by exhibiting looking preferences [65,66] or by making the discrimination explicit in a two-choice task [67]. However, it is not yet known whether dogs show preferences and respond to differences between familiar and unfamiliar faces only when forced to do so, or whether they spontaneously scan familiar and unfamiliar faces differently.

A further interesting but not yet investigated influence on the dog's looking pattern at human faces is the facial expression resulting from the human's emotion. Recently we could show, confirming and extending the findings of Nagasawa and colleagues [68], that dogs can learn to distinguish between two facial expressions of the same unfamiliar persons and to generalize this ability to novel faces [69]. Importantly, our study suggested that this ability does not depend on the exploitation of simple features such as the visibility of teeth, but rather on memory of how different human facial expressions look. These memories are likely formed in everyday interactions with humans, especially with the dogs' human partner(s).

Therefore, in addition to possible effects of experience (pet vs. lab dogs, familiar vs. unfamiliar faces), this study focuses on how dogs process emotional faces (angry, happy, sad, and neutral). In line with a previous eye-tracking study [64], we expected that lab dogs, due to their limited experience with humans, would show longer latencies, shorter fixation durations and a tendency to fixate more on familiar faces than pet dogs. Further, we hypothesized a difference between the processing of familiar and unfamiliar faces, as the emotional expressions of a familiar face should be processed faster due to previous experience with that face. Based on the findings of our recent study [69], we expected the four emotional expressions to be processed differently and to elicit different amounts of attention (i.e. differing numbers of fixations and fixation durations). Additionally, in that study dogs required more training to discriminate happy and angry faces when they had to touch the angry face than when they had to touch the happy face, indicating that they avoided approaching the angry faces. We therefore expected dogs to fixate faces expressing negative emotions more briefly and less often. Concerning the scanning pattern, we expected the more informative parts of the face (eye and mouth regions) to be fixated more frequently. Finally, we expected the dogs to show a gaze bias towards the faces.

Methods

All experimental procedures were approved in accordance with Good Scientific Practice (GSP) guidelines and national legislation by the Ethical Committee for the Use of Animals in Experiments at the University of Veterinary Medicine Vienna (Ref: 09/08/97/2012). All dog owners gave written consent to participate in the study. The individual shown in the figures of this manuscript has given written informed consent (as outlined in the PLOS consent form) to publish these case details.

Subjects

Nineteen privately owned pet dogs and eight laboratory dogs participated in the study (see Table 1). The pet dogs were of various breeds and aged between 1 and 12 years. Owners of the pet dogs spent on average 27 h/week actively with their dogs (walking, playing and training). With the exception of two dogs, all pet dogs took part in dog sports activities such as agility, dog dance, assistance dog training or mantrailing at least once or twice a week. The two remaining dogs had only been trained as a "Begleithund" (companion dog) and received daily obedience training from their owners. The laboratory dogs were all beagles, aged 1–5 years, housed on the campus of the University of Veterinary Medicine Vienna in packs of either 13 dogs (Clinical Unit of Internal Medicine Small Mammals (CU-IM), L1-L6 in Table 1) or 4 dogs (Clinical Unit of Obstetrics, Gynecology and Andrology (CU-OGA), L7 and L8 in Table 1). None of the laboratory dogs had ever taken part in dog sports or in professional obedience training. They were born at the Clinical Unit of Obstetrics, Gynecology and Andrology at the University of Veterinary Medicine Vienna. After weaning from the mother at the age of two months, the dogs lived in packs. The housing of both groups consisted of an indoor and an outdoor enclosure (CU-IM: indoor enclosure = 64 m², outdoor enclosure = 248 m²; CU-OGA: indoor enclosure = 18 m², 3 outdoor enclosures = ~70 m²), in which they could move freely during the day. From 22:00 to 6:00 the dogs were kept in the indoor enclosures. The enclosures were enriched with dog toys (chewing toys, squeaking toys and balls). Their contact with humans was limited to daily feeding by the animal keepers (once a day in the morning, water ad libitum) and the cleaning of the enclosures. Additionally, they participated in practical courses for the training of veterinary medicine students (approx. 1–2 times a semester). However, none of the dogs was socialized in a classical way, as they lived only with conspecifics and not with humans.

Experimental setup

All experiments were conducted at the Clever Dog Lab of the University of Veterinary Medicine, Vienna. The experimental room was divided by a wall, consisting of a projection screen (200 x 200 cm) and two doors, into two compartments: a small one (149 x 356 cm) housing the computer system operating the eye tracker and a video projector (NEC M300XS, NEC Display Solutions, United States), and a larger one (588 x 356 cm) with the chin-rest device and the eye tracker (see Fig 1). The stimuli were back-projected onto the projection screen (projection area 110 x 80 cm). We used the EyeLink 1000 eye-tracking system (SR Research, Ontario, Canada) to record monocular data from the subjects. This system is well suited to dog research because it can be used with a chin rest or, if head fixation is not desirable, without any head support and with a remote camera, while maintaining high accuracy and resolution. We used a customized chin-rest device for head stabilization: a pillow with a V-shaped depression was mounted on a frame that allowed vertical adjustment of the chin rest to the height of the individual dog. The frame consisted of aluminium profiles (MayTec Aluminium Systemtechnik GmbH, Germany) that allowed easily adjustable but stable mounting of additional equipment (e.g. cameras). The chin rest was positioned at a distance of 200 cm from the projection screen. The eye-tracking camera with the infrared illuminator was mounted on an extension of the chin-rest frame (see Fig 1) and aligned horizontally with the chin rest at a distance of 50 cm. Light conditions in the room were kept constant at 75 lux using LED light bulbs (9.5 W, 2700 K, Philips GmbH Market DACH, Germany).

Fig 1. Experimental setup.

(a) Lateral view of chin rest device, projection screen and feeding machines seen from the rear of the room during training with geometrical figures and (b) front view of the chin rest device including the eye-tracking camera with IR illuminator.

https://doi.org/10.1371/journal.pone.0152393.g001

Stimuli

The stimulus set for each subject consisted of photographs of the face of the dog's owner (for the lab dogs, the dog trainer who trained them for the experimental tests) and of an unfamiliar person, each expressing four different emotions: neutral, happy, angry and sad (Fig 2). All owners and all unfamiliar persons were female. The photographs were taken by experimenter 1 (E1) in a standardized setup at the Clever Dog Lab in front of a white background. For each dog, the faces of its owner served as familiar stimuli and the faces of another participating dog's owner served as unfamiliar stimuli. For the unfamiliar stimuli, a person was chosen who did not differ strongly from the owner in appearance (e.g. hair color). The stimuli were projected onto the screen at a resolution of 1024 x 768 px, resulting in pictures of 900 x 850 mm, which corresponds to 25.36° x 23.99° of visual angle (head: approx. 400 x 300 mm, corresponding to 11.42° x 8.58° of visual angle).
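As a cross-check of the stimulus geometry reported above, the visual angle subtended by an extent s at viewing distance d follows from θ = 2·arctan(s/2d). A minimal sketch in R (values taken from the text; the function name is ours):

```r
# Visual angle (degrees) subtended by a stimulus extent at a viewing distance.
visual_angle <- function(size_mm, distance_mm = 2000) {  # chin rest at 200 cm
  2 * atan(size_mm / (2 * distance_mm)) * 180 / pi       # radians -> degrees
}

visual_angle(900)  # picture width:  ~25.36 deg
visual_angle(850)  # picture height: ~23.99 deg
visual_angle(400)  # head, larger extent:  ~11.42 deg
visual_angle(300)  # head, smaller extent:  ~8.58 deg
```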

The stimuli were subsequently validated for emotional expression and valence by 67 naive observers via an online questionnaire. All participants were recruited via the internet (17 male, 50 female, age range 16–55 years). For every picture, participants rated the emotions angry, happy, neutral and sad on a scale from 0–100% (e.g. picture xy = 0% angry, 0% happy, 30% neutral, 70% sad; the four numbers had to sum to 100%). For every picture, the ratings of all participants were pooled for each emotion, and it was decided whether the picture had been identified correctly. The intended emotion was considered correctly identified if it was rated as the predominant emotion and exceeded the second-highest-rated emotion by at least 15 percentage points. For example, an angry picture rated 62% angry, 0% happy, 23% neutral and 13% sad was considered correctly identified, whereas a rating of 52% angry, 0% happy, 38% neutral and 10% sad would not have been. Validation revealed that 74% of the angry, 95% of the happy, 100% of the neutral and 60% of the sad pictures were identified correctly. The negative emotions were usually mistaken for a neutral expression (angry in 80% of the cases, sad in 88% of the cases). Incorrectly rated pictures were retained in the analysis because we wanted the dog owners to express emotions as naturally as possible, which resulted in some expressions that could be ambiguous to uninformed observers.
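The identification criterion can be stated compactly in code. A minimal sketch in R, assuming the pooled ratings arrive as a named vector (function name and data layout are ours):

```r
# A picture's intended emotion counts as correctly identified if it received
# the highest pooled rating AND exceeds the second-highest-rated emotion by
# at least 15 percentage points.
correctly_identified <- function(ratings, intended) {
  sorted <- sort(ratings, decreasing = TRUE)
  names(sorted)[1] == intended && (sorted[1] - sorted[2]) >= 15
}

correctly_identified(c(angry = 62, happy = 0, neutral = 23, sad = 13), "angry")  # TRUE
correctly_identified(c(angry = 52, happy = 0, neutral = 38, sad = 10), "angry")  # FALSE (margin 14 < 15)
```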

Procedure

Training.

Prior to the experiment, the dogs were trained for the experimental procedure (for 2–6 months). Training was done by E1 and E2. We used operant conditioning with positive reinforcement (food rewards) and a clicker as a secondary reinforcer. The dogs were trained once or twice a week, depending on their availability. The owner was allowed to stay in the room during the whole procedure; she was seated on a chair at the back of the room and instructed to avoid interacting with the dog.

The training consisted of three steps. In the first step, the dog was trained to go to the chin-rest device and to rest its head on the chin rest for at least 10 s (see Fig 1a). After the dog had learned to stay in the chin rest, it was confronted with a two-choice conditional discrimination task with geometrical figures (GFs). With this task we aimed to increase the dog's attention to the screen (up to 20 s) and to make the training more interactive. Furthermore, this step prepared the dog for the calibration procedure, during which it had to follow the position of three dots on the screen (see below). In the last training step, pictures of landscapes, animals, architecture and humans (but not of human faces expressing emotions) were interspersed between the GFs to habituate the dog to the presentation of pictures and to further increase the duration for which the dog could keep its head motionless on the chin rest. These pictures varied in size between 50 x 50 and 100 x 100 cm. Additionally, we introduced small animated videos with sound (interspersed within the blocks of pictures), which were later used as attention triggers during the test phase. Training was considered successful if the dog, in 4 out of 5 trials, kept its head motionless on the chin rest and oriented towards the screen for at least 30 seconds while the stimuli were presented. Orientation was determined by the experimenter via visual inspection, aided by the on-the-fly output of the eye tracker.

Testing.

Testing was done by AB and DR. During the whole procedure, the owner was allowed to stay at the back of the room and was instructed to avoid interacting with the dog. At the beginning of a test session, the dog was allowed to explore the room for approximately 5 min. The eye-tracking system was then calibrated with a three-point calibration procedure: three small dots were presented one after the other on the screen (dot size: 50 mm; coordinates (x,y): (512,65), (962,702), (61,702)) while the dog had its head on the chin rest. When the dog focused on a dot for at least a second, the point was accepted by E1 on the operating computer. After calibration of the three points, the accepted calibration points were validated by showing all three points again. If the validation revealed an average deviation of more than 3° of visual angle between the two repetitions, the calibration was rejected and repeated until this criterion was met. Calibration was done before every session. Following successful calibration, the dog was presented with the stimuli as follows:

  1. Before the presentation of each picture, an attention trigger (a small animated video) was presented on the screen. The position of the trigger was randomly assigned to one of the four corners of the screen (x,y: (150,150), (150,620), (875,150), (875,620)). The trigger was 5 cm in size on the screen, corresponding to 1.43° of visual angle. The presentation software of the eye tracker (Experiment Builder 1.10.1241, SR Research Ltd., Ontario, Canada) was programmed so that presentation of the stimulus followed immediately and automatically once the dog fixated the trigger.
  2. Faces were presented for five seconds in the center of the screen (x,y: (512,382)).
  3. Faces were presented in blocks of four trials (each trial consisted of the presentation of a trigger followed by the presentation of a face picture). The blocks contained pictures of either only positive (happy and neutral) or only negative (angry and sad) facial expressions. The order of presentation was randomized within each block, but every block contained both corresponding emotional expressions (either positive or negative) of the familiar and the unfamiliar person (see the sketch after this list).
    After the end of each block, the dog received a food reward. Depending on the dog's motivation and concentration, a short pause (up to 1 min) was taken before the dog was asked to reposition itself in the chin-rest device. During this pause the dog was allowed to rest or to move freely in the room.
  4. Within one session four blocks of pictures were presented. Two blocks contained positive and two blocks contained negative facial expressions. Each of the eight pictures of a dog’s stimulus set was shown twice in each session.
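For illustration, the block structure described in (3) and (4) can be generated as follows; this is our sketch of a plausible randomization, not the actual Experiment Builder script (stimulus labels and the block order are assumptions):

```r
# One session: four blocks of four trials; two blocks hold the positive
# expressions (happy, neutral), two the negative ones (angry, sad), each
# crossed with familiarity. Trial order is shuffled within each block, so
# every one of the 8 pictures appears twice per session.
make_block <- function(emotions) {
  trials <- expand.grid(emotion = emotions,
                        familiarity = c("familiar", "unfamiliar"))
  trials[sample(nrow(trials)), ]          # randomize order within the block
}

set.seed(42)                              # reproducible example only
block_types <- sample(rep(c("pos", "neg"), each = 2))  # assumed block order
session <- lapply(block_types, function(type) {
  if (type == "pos") make_block(c("happy", "neutral"))
  else               make_block(c("angry", "sad"))
})
```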

Every dog completed five sessions (one session per week), with the exception of three dogs for which we discontinued data acquisition due to a lack of motivation (for example, if the dog no longer wanted to return to the chin rest or did not stay in the chin rest during picture presentation). For this reason, data acquisition was stopped after two sessions for one dog and after four sessions for two other dogs. The available data of these dogs were nevertheless included in the analyses (see Data analysis).
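The 3° validation criterion used during calibration (see above) can be illustrated with a simple geometric check; this sketch is ours, not the EyeLink software's internal routine, and gaze coordinates are assumed to be in mm on the screen:

```r
# Angular deviation between two gaze samples of the same calibration point,
# approximated from their offset on the screen (mm) at viewing distance d (mm).
angular_deviation <- function(p1, p2, d = 2000) {
  offset <- sqrt(sum((p1 - p2)^2))   # Euclidean offset on the screen
  atan(offset / d) * 180 / pi        # deviation in degrees of visual angle
}

# Calibration is accepted if the mean deviation over the three points <= 3 deg.
devs <- c(angular_deviation(c(0, 0),    c(60, 40)),
          angular_deviation(c(450, 0),  c(430, 20)),
          angular_deviation(c(-450, 0), c(-460, -30)))
mean(devs) <= 3   # TRUE -> accept; otherwise recalibrate
```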

Data analysis

To be included in the analyses, data had to fulfil the following criteria: trials were only included if the dog fixated the face at least once; a session was only retained if the dog had fixated at least once on a face of both emotional conditions (positive and negative); and the data of a dog were discarded completely if they did not include at least two valid sessions (for further details see Table 1).
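These three criteria translate into a straightforward filtering step. A sketch in base R, assuming a trial-level data frame with the hypothetical columns dog, session, emotion_type and in_face_fixations:

```r
# d: one row per trial, with columns dog, session (numeric),
# emotion_type ("pos"/"neg") and in_face_fixations (count). All hypothetical.

# 1) keep only trials with at least one in-face fixation
d <- d[d$in_face_fixations > 0, ]

# 2) keep only sessions that retain trials of both emotion types
n_types <- ave(as.integer(factor(d$emotion_type)), d$dog, d$session,
               FUN = function(x) length(unique(x)))
d <- d[n_types == 2, ]

# 3) keep only dogs with at least two valid sessions
n_sessions <- ave(d$session, d$dog, FUN = function(x) length(unique(x)))
d <- d[n_sessions >= 2, ]
```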

Each face was divided into five areas of interest (AoI): eyes, forehead, mouth, face rest (nose, cheeks and chin) and picture rest (hair and background; see Fig 3). Size and positioning of the AoIs were based on the following rules (a classification sketch follows the list):

  1. AoI forehead: a rectangular area ranging upwards from the top of the eyebrows to the highest point of the hairline, bounded by the face contours to the left and the right.
  2. AoI eyes: ranged from the middle of the pupil to the eyebrow; this area was mirrored downwards and bounded by the face contours to the left and the right.
  3. AoI mouth: ranged from the middle of the mouth to the base of the nose; this area was mirrored downwards and bounded by the face contours to the left and the right.
  4. AoI face rest: all parts of the face not belonging to AoIs (1)-(3).
  5. AoI picture rest: all remaining parts of the picture not belonging to AoIs (1)-(4) (e.g. blank parts, hair, shoulders).
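To make the assignment of fixations to these regions concrete, here is a sketch of a point-in-rectangle classifier in R; the rectangle coordinates are illustrative placeholders, since the real AoIs were drawn per picture following rules (1)-(5):

```r
# Classify a fixation (x, y in px) against AoI rectangles, checked from the
# most specific region outwards; "face_rest" acts as the in-face fallback.
aois <- list(                      # placeholder coordinates, not the real AoIs
  eyes      = c(x1 = 380, y1 = 300, x2 = 640, y2 = 380),
  forehead  = c(x1 = 380, y1 = 180, x2 = 640, y2 = 300),
  mouth     = c(x1 = 420, y1 = 440, x2 = 600, y2 = 520),
  face_rest = c(x1 = 360, y1 = 160, x2 = 660, y2 = 560)
)

classify_fixation <- function(x, y, aois) {
  for (name in names(aois)) {
    r <- aois[[name]]
    if (x >= r["x1"] && x <= r["x2"] && y >= r["y1"] && y <= r["y2"])
      return(name)
  }
  "picture_rest"                   # anything outside the face rectangle
}

classify_fixation(500, 350, aois)  # "eyes"
classify_fixation(100, 100, aois)  # "picture_rest"
```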
Fig 3. Scheme showing the areas of interest (AoI) for forehead, eyes, face rest, mouth and rest of the picture.

https://doi.org/10.1371/journal.pone.0152393.g003

Fixations into AoIs (1)-(4) are summarized as "in-face" fixations. Raw eye-movement data were analysed using Data Viewer 2.1.1 (SR Research, Ontario, Canada). Eye-movement events were identified by the EyeLink tracker's online parser. As there is to date no validated literature on the definition of eye-movement events in dogs, we worked with raw data without any thresholds for fixation duration. From the raw data we extracted fixation durations, numbers of fixations and latencies to the first fixation (fixation start). Note that fixation durations were not trimmed, so durations could exceed the presentation time of a trial if a dog held a fixation longer than the picture presentation. On the basis of these data we calculated mean values for fixation durations and numbers of fixations over each trial. As we were interested in face processing (rather than picture processing), only fixations into the face region were included in the analyses unless stated otherwise. For this purpose, we extracted the duration of the first fixation directed into one of the four in-face AoIs, the number of fixations directed into in-face AoIs, the identity of the AoI of the first in-face fixation, the vector (angle) between the first and second fixation (gaze direction) and the latency of the first fixation into one of the in-face AoIs.
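From a fixation-level table, the per-trial measures listed above reduce to simple aggregations. A sketch in base R, assuming hypothetical columns trial, start_ms, duration_ms and aoi:

```r
# fix: one row per fixation, with columns trial, start_ms, duration_ms and
# aoi (one of eyes/forehead/mouth/face_rest/picture_rest). All hypothetical.
in_face <- c("eyes", "forehead", "mouth", "face_rest")

per_trial <- do.call(rbind, lapply(split(fix, fix$trial), function(t) {
  f <- t[t$aoi %in% in_face, ]             # in-face fixations only
  f <- f[order(f$start_ms), ]              # chronological order
  data.frame(trial         = t$trial[1],
             n_fix_total   = nrow(t),      # all fixations on the picture
             n_fix_in_face = nrow(f),
             mean_dur      = mean(t$duration_ms),  # untrimmed, as in the text
             first_dur     = f$duration_ms[1],     # first in-face fixation
             first_aoi     = f$aoi[1],
             first_latency = f$start_ms[1])        # latency to first in-face fixation
}))
```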

Statistical analyses were conducted with SPSS 22 (IBM Corp., Armonk, New York, United States) and R 3.1.1 [70]. Linear mixed effect models (LMMs) were used to analyse mean fixation duration, the duration of the first fixation, the overall fixation count per trial and the latency to the first fixation (time). As fixation counts into the four different AoIs did not fulfil the assumptions of a Poisson distribution (in particular, zeros were strongly overrepresented), we used zero-inflated negative binomial models to analyse these data (number of fixations into the different AoIs), using the R package glmmADMB 0.8.0 [71]. Likewise, the ordinal number of the fixation that first landed in one of the four in-face AoIs was analysed (with the first fixation of a trial labelled as zero). Generalized linear mixed models (GLMMs) were used to analyse whether the first fixation was directed into the face region (SPSS, binomial GLMM) and which AoI was fixated first (first AoI; SPSS, multinomial GLMM). Using the R package lme4 1.1.7 [72], we analysed whether the first fixation was directed to the left or the right half of the face (binomial GLMM). Additionally, we calculated the proportion of the fixation time dwelled on the left half of the face ((average duration left × 100)/(average duration left + average duration right); negative binomial GLMM).
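For concreteness, the model families described above map onto formulas of roughly the following shape; these are simplified sketches of plausible calls (variable names are ours), not the exact models fitted in SPSS and R:

```r
library(lme4)      # LMMs and binomial GLMMs
library(glmmADMB)  # zero-inflated negative binomial GLMMs

# Random intercepts for dog identity and for session nested within dog;
# dog_session below is a precomputed dog-by-session identifier.
d$dog_session     <- interaction(d$dog, d$session)
d_aoi$dog_session <- interaction(d_aoi$dog, d_aoi$session)

# Mean fixation duration: LMM with all two-way interactions of the fixed effects.
m_dur <- lmer(mean_dur ~ (emotion_type + familiarity + dog_type)^2 +
                (1 | dog) + (1 | dog_session), data = d)

# Fixation counts per AoI: zero-inflated negative binomial GLMM.
m_cnt <- glmmadmb(aoi_count ~ (emotion_type + dog_type + aoi)^2 +
                    (1 | dog) + (1 | dog_session),
                  family = "nbinom", zeroInflation = TRUE, data = d_aoi)

# First fixation into the left vs. right face half: binomial GLMM.
m_lr <- glmer(first_left ~ emotion_type + familiarity + dog_type +
                (1 | dog) + (1 | dog_session), family = binomial, data = d)
```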

We found no differences between the emotions angry and sad, nor between happy and neutral. For this reason, and to increase statistical power, the "positive" emotional expressions (happy and neutral) and the "negative" emotional expressions (angry and sad) were pooled. All linear models included emotion type (positive vs. negative), familiarity (familiar vs. unfamiliar), dog type (pet vs. lab dog) and, where applicable, AoI (eyes, forehead, mouth, face rest), as well as all two-way interactions between them, as fixed effects. Where two-way interactions turned out significant, the dataset was split to explore the nature of the interaction. Dog identity and session number nested within dog were included as random factors in all models. Additionally, we tested whether the trial number (nested within session) and the age of the dog could explain a significant proportion of the variation in the models by adding these variables as random factors. Neither variable had a significant influence on the data (decision on the basis of Akaike's information criterion (AIC)). We also tested for an effect of breed. For this purpose, we combined data of dogs known to belong to the group of herding (P1-5, P7-9, P14, P19), hunting (P11, P12, P18, P15, L1-8) or protection dogs (P5, P10, P13; categorization on the basis of their use as working dogs) and compared the group of laboratory beagles (hunting dogs) to the three breed groups of the pet dogs. We found no significant effect of breed group, so this factor was discarded from further analysis. Model reduction was done backwards on the basis of AIC. Predictors were considered significant at α ≤ 0.05.

Results

The final dataset included data from 17 pet dogs (11 male, 6 female) with a total of 66 sessions and 847 trials (48.4% of trials with positive emotions; mean 50 trials/dog, range 14–76) and 8 lab dogs with a total of 38 sessions and 517 trials (50.7% of trials with positive emotions; mean 65 trials/dog, range 51–74).

Fixation Duration

Total fixation time per trial.

There was a significant difference between pet and lab dogs in the total time they spent fixating the picture (LMM: F1,19.6 = 7.27, p = 0.014). The lab dogs spent more time fixating the picture (5545 +/- 98 ms) than the pet dogs (4701 +/- 72 ms, see Fig 4). There was also a significant interaction between dog type and emotion type (F1,1271.2 = 6.15, p = 0.013): the pet dogs looked somewhat longer at the negative than at the positive emotions, although not significantly so (F1,795.1 = 1.16, p = 0.28), whereas the lab dogs spent significantly more time looking at the positive than at the negative emotions (F1,473.6 = 4.35, p = 0.04, see Fig 4).

Fig 4. Total fixation time (+/- 95% CI) per trial for pet and lab dogs, subdivided by emotion type.

https://doi.org/10.1371/journal.pone.0152393.g004

Average fixation duration per fixation.

The dogs fixated the picture on average for 827 +/- 16 ms (SE). Average values did not differ between the pet dogs (807 +/- 19 ms) and lab dogs (858 +/- 28 ms; LMM: F1,21.4 = 0.29, p = 0.59, see Table 1). The average fixation duration was influenced neither by the emotion type (F1,1535.3 = 0.36, p = 0.55) nor by the familiarity of the face (F1,1529.5 = 1.55, p = 0.21). However, analysis of the duration of the first fixation directed into the face revealed a significant difference between the two dog types: the first fixations of the lab dogs (822 +/- 40 ms) were significantly longer than those of the pet dogs (582 +/- 27 ms; LMM: F1,19.7 = 4.55, p = 0.046, see Fig 5). We found no influence of emotion type (F1,668.5 = 0.98, p = 0.32), familiarity (F1,651.6 = 0.83, p = 0.36) or area of interest fixated (F3,685.1 = 0.91, p = 0.44) on the duration of the first fixation.

Fig 5. Average fixation duration (+/- 95% CI) of the first fixation directed into the face region for pet and lab dogs.

https://doi.org/10.1371/journal.pone.0152393.g005

Fixation Count

The mean number of fixations made during a trial (including fixations not directed into the face) was 8.07 +/- 0.14 (SE). This number did not differ between pet dogs (7.86 +/- 0.19) and lab dogs (8.43 +/- 0.23; LMM: F1,21.2 = 0.65, p = 0.43). There was also no influence of emotion type (F1,324.9 = 0.16, p = 0.69) or familiarity (F1,1075.4 = 0.42, p = 0.52) on the mean number of fixations.

Less than half of all fixations directed to the picture were made into the face region (total = ~8, in-face = ~3, see Table 2). Still, there was a significant difference in the number of fixations made into the areas of interest during a trial (GLMM likelihood ratio test (LRT): Chi²3 = 141.8, p < 0.001). The face rest region was fixated on average once per trial, whereas the forehead region was fixated on average just half as often (cf. Table 2). The eyes and mouth regions received on average the same number of fixations per trial (cf. Table 2).

Table 2. Mean values of the fixation count into the areas of interest per trial.

https://doi.org/10.1371/journal.pone.0152393.t002

There was a significant interaction between AoI and dog type (LRT: Chi²3 = 100.2, p < 0.001). The mean fixation count of pet and lab dogs differed significantly for the mouth region (estimate = -0.8 +/- 0.32, z = -2.53, p = 0.011) but not for the eyes (estimate = -0.08 +/- 0.22, z = -0.36, p = 0.72), the forehead (estimate = 0.42 +/- 0.26, z = 1.61, p = 0.11) or the face rest (estimate = -0.02 +/- 0.14, z = -0.17, p = 0.86). On average, the pet dogs looked at the mouth region about once per trial (mean +/- SE: 0.82 +/- 0.04), whereas the lab dogs fixated the mouth region less than half as often (0.4 +/- 0.03; see Fig 6). There was also a significant interaction between AoI and emotion type (LRT: Chi²3 = 23.1, p < 0.001). The forehead was fixated significantly more often when a positive expression was shown (estimate = 0.25 +/- 0.08, z = 3.0, p = 0.003), whereas the mouth was fixated significantly more often when a negative expression was shown (estimate = -0.18 +/- 0.08, z = -2.29, p = 0.02, see Fig 6). The fixation counts for the other two AoIs (eyes, face rest) did not differ significantly between the two types of emotion.

Fig 6. Fixation count (mean +/- SE) into an area of interest during a trial, subdivided by dog type (a) and emotion type (b).

https://doi.org/10.1371/journal.pone.0152393.g006

Area of interest of the first fixation

Lab dogs directed their first fixation into the in-face area more frequently (66% of all first fixations) than pet dogs (39%) (GLMM LRT: Chi²1 = 114.6, p < 0.001; see also Table 3). The two dog types differed in which part of the face they looked at first (multinomial GLMM: F3,779 = 3.54, p = 0.014), and this measure was also influenced by the emotion type (F3,779 = 2.84, p = 0.037) and the familiarity (F3,779 = 6.54, p < 0.001) of the stimulus. In comparison to the pet dogs, the lab dogs were more likely to look first at the eyes or the forehead and less likely to look first at the mouth region (see Fig 7, left half). In addition, the eyes and mouth were more likely to be fixated first if the face expressed a negative emotion, whereas the forehead and face rest were more likely to be fixated first if the face expressed a positive emotion (see Fig 7). Further, there was a significant interaction between the emotion type and the familiarity of the face (F3,779 = 6.54, p < 0.001). For unfamiliar faces the probability of looking first at the mouth was higher for negative than for positive expressions, whereas the opposite pattern appeared for familiar faces. Conversely, the probability of first fixating the forehead was higher for positive than for negative expressions when unfamiliar faces were shown, but not when familiar faces were shown (see Fig 7, right half).

Fig 7. Frequencies [%] of the first fixations into the areas of interest, subdivided by emotion type with the subcategories dog type and familiarity.

https://doi.org/10.1371/journal.pone.0152393.g007

Table 3. Fixation count and probability of the first fixations into the face area (Face) and the rest of the picture (Picture), as well as the classification of in-face first fixations into the areas of interest.

https://doi.org/10.1371/journal.pone.0152393.t003

Latency of the first fixation into the face

On average, the lab dogs reached the in-face area with their first fixation (mean fixation number 1.05 +/- 0.02), whereas the pet dogs needed on average 1.5 +/- 0.09 fixations to do so. This difference was highly significant (GLMM: F1,398 = 15.42, p < 0.001), but there was no influence of emotion type (F1,397.3 = 3.23, p = 0.07), familiarity (F1,397.3 = 1.03, p = 0.31) or area of interest (F1,397.3 = 3.68, p = 0.3). Even though the pet dogs needed on average 1.5 fixations to reach the in-face area, they still looked into it sooner (on average after 8049 +/- 623 ms) than the lab dogs (on average after 11635 +/- 1368 ms). This difference was significant (LMM: F1,103.5 = 5.10, p = 0.026), but there was no influence of emotion type (F1,756.3 = 1.492, p = 0.222) or familiarity of the face (F1,746.8 = 0.901, p = 0.343). If the first fixation into one of the four in-face AoIs went to the eyes or the mouth region, the latency to the first fixation was significantly shorter than if it went to the forehead or the face rest region (eyes: 8203 +/- 1007 ms, mouth: 6875.7 +/- 896 ms vs. forehead: 11652.4 +/- 1960 ms, face rest: 11330.9 +/- 1516 ms; F3,779.003 = 3.892, p = 0.009, see Fig 8).

Fig 8. Mean fixation start (+/- 95% CI) of the first fixation directed into the areas of interest.

https://doi.org/10.1371/journal.pone.0152393.g008

Gaze Bias

The dogs showed a significant left gaze bias for the first fixation into the face (GLMM: estimate = 2.69 +/- 0.19, z = 14.12, p < 0.001), with 92.5% of the first fixations being directed towards the right side of the face (i.e., in the dog's left visual field; see Fig 9). There was no significant difference between the dog types (estimate = -0.48 +/- 0.37, z = -1.29, p = 0.2). The gaze bias was influenced neither by the familiarity (estimate = -0.14 +/- 0.21, z = -0.66, p = 0.5) nor by the emotion type (estimate = -0.04 +/- 0.2, z = -0.18, p = 0.9). The analysis of the durations of all fixations directed into the face region also showed a significant preference for the left visual field (estimate = 4.62 +/- 0.45, z = 10.27, p < 0.001): on average, 75% of the fixation dwell time fell on the right side of the face. For this variable too, there was no significant effect of emotion type (estimate = 0.16 +/- 0.45, z = 0.36, p = 0.7), familiarity (estimate = -0.37 +/- 0.45, z = -0.82, p = 0.4) or dog type (estimate = -0.49 +/- 0.68, z = -0.72, p = 0.4), but there was a tendency for an interaction between dog type and familiarity (estimate = -1.99 +/- 1, z = -1.95, p = 0.05): lab dogs (estimate = -1.5 +/- 0.8, z = -1.88, p = 0.06), but not pet dogs (estimate = 0.48 +/- 0.64, z = 0.75, p = 0.5), tended to show a more pronounced left gaze bias towards familiar faces than towards unfamiliar faces.

Fig 9. Coordinates of the first fixation made into the face region subdivided for the dog type.

The origin corresponds to the middle of the vertical and horizontal dimension of the face.

https://doi.org/10.1371/journal.pone.0152393.g009

Discussion

In a nutshell, the findings of this study confirm all three major expectations about the processing of human faces by dogs: (1) an influence of the amount of exposure to humans and/or the quality of the human-dog relationship, as exemplified by the difference between pet and lab dogs; (2) (subtle) influences of both the familiarity and the emotional expression of the face; (3) a strong left gaze bias. Although in general these findings are in line with earlier studies on dogs' looking patterns towards human faces, there are some interesting deviations and a few unexpected results.

Concerning the looking differences between pet and lab dogs, the former looked much earlier into the face region than the latter. Possibly because (our) lab dogs live in packs and are therefore surrounded by other dogs rather than humans, human faces may not be salient enough to elicit a fast response. However, their very first fixation into the in-face region and their total looking time were longer than those of the pet dogs. As human faces, and especially human emotional expressions, are not part of their daily visual environment, it is plausible that lab dogs need longer to process the facial expression. This is in line with the face-expertise hypothesis, according to which face processing is facilitated by experience or familiarity with the stimulus [22,73–75]; a shorter viewing time might indicate faster and therefore more experienced processing. An alternative explanation for the increased looking behavior of lab dogs is a perceptual preference for faces in general, and for some face regions, such as the forehead or eye region, in particular. Further tests with familiar and unfamiliar as well as preferred and non-preferred stimuli would be necessary to distinguish between these explanations. Another interesting difference between the dog types was uncovered in the analysis of the location of the first fixation. While lab dogs fixated first on the eye and forehead regions and only later on the mouth region, especially if the face showed a negative emotion, pet dogs allocated their first fixation equally to the eyes, the forehead and the mouth region. This result indicates that, besides the importance of the eye region, the mouth region is of very high importance to pet dogs, especially if the face shows a negative emotion. It seems plausible that pet dogs are tuned to pay more attention to the mouth region, as they expect verbal commands from their owners on a daily basis. However, we cannot exclude the possibility that these differences emerge from the asymmetric distribution of breeds across the lab and pet dogs. It has been postulated that pet dogs from working breeds are tuned to human communicative signals, and that they outperform dogs of non-working breeds, which were not bred to cooperate, and ancient breeds, which additionally had limited access to humans [76]. Hence, it is plausible that dogs with extended access to humans underwent additional selection for cooperative behavior. However, when adding breed group as a factor to the statistical models, we found no significant difference between the groups of herding, hunting and protection dogs in comparison to the lab dogs, which are originally hunting dogs as well. Still, because of the small sample sizes of the different breed groups we cannot entirely exclude an effect of breed. It would therefore be desirable for future studies to compare only homogeneous groups of dogs to control for possible influences of breed characteristics.

A slightly different picture emerges if we consider the latency to fixate the different face regions. Here both dog types showed a similar pattern, namely a much shorter latency for the eyes and mouth regions than for the forehead or the remaining parts of the face. If we take this measure as indicative of the dogs' primary interest in eyes and mouth, it conforms with previous investigations on humans and dogs [63,64,77,78]. The eyes and the mouth of the human face have been assumed to be its most informative parts, as they provide the primary communicative cues for human-dog communication.

Concerning the influence of the emotional expression of the human face on the dogs' looking patterns, we obtained mixed results. On the one hand, the data confirm an influence, as the fixation count was significantly higher for the forehead when a positive expression was displayed, but higher for the mouth and eye regions when a negative expression was displayed. On the other hand, the data are difficult to interpret, as they deviate from what is known from the literature. In humans, the mouth receives most attention in positive emotions and the eyes in negative emotions [79], and these regions display the most characteristic changes during positive and negative emotional expressions, respectively (pulled-up lip corners or lowered eyelids [80]). In the current study, the mouth received little attention when dogs looked at faces with positive emotions, at least in the first-fixation data. When dogs looked at faces displaying negative emotions, the eyes received much more attention than the mouth, especially in lab dogs, which corresponds to the data from humans. Given our scientific questions and the resulting setup, a possible weakness is that we were not able to use human emotional expressions from validated databases. Instead we used pictures of individuals who were not trained to express emotions in the face. This resulted in pictures that may have been ambiguous to some observers, and possibly also in ambiguous findings on the influence of the emotion on face processing. Further investigations, particularly of prototypical faces with stimulus manipulations of the valence of the expressed emotions or a mixture of facial parts of different emotional expressions (e.g. a "happy forehead" with an "angry mouth"), would help clarify these ambiguous findings.

A somewhat similar ambiguity emerged when interpreting the data indicating an influence of familiarity. Here again, the influence was visible only in the first fixations. When unfamiliar faces with a negative expression were shown, but not familiar ones, dogs directed their first fixation preferentially to the lower face region. This may reflect a gaze-aversion effect known from human studies; adult humans exhibit gaze aversion when confronted with threatening stimuli [81]. The fact that the dogs in our study showed this effect only when confronted with unfamiliar faces may be explained by their inability to retrieve positive associations with these faces. They may therefore perceive unfamiliar faces with negative expressions as not "trustworthy" or even threatening [64] and avoid eye contact. Overall, unlike the findings of other studies [63,64,66] suggest, we could not find a clear preference for familiar faces, as they received the same number of fixations and were scanned for the same duration as unfamiliar faces. This result is surprising with respect to the theory of face expertise, which postulates that familiarity with a stimulus results in fewer and shorter fixations [18,19]. It is possible that the differences between the familiar and unfamiliar faces were too marginal, or that the effect of familiarity was overridden by repeated exposure to the same stimuli. Further investigations of the processing of familiar and unfamiliar stimuli will be necessary to disentangle this result.

The strongest support for our expectations comes from the analysis of lateral eye movements, i.e., looking into the left and right halves of the face. The data revealed a strong left gaze bias, i.e., preferential looking into the right side of the face, which falls in the dog's left visual field. Such a preference for the left visual field is associated with the engagement of the opposite, here the right, brain hemisphere. This finding is not only very robust but also corroborates, in general, the results of previous studies [43,44]. In both studies dogs displayed a left gaze bias when viewing human faces with neutral expressions. In addition, a left gaze bias was also observed towards negative facial expressions, but not towards positive expressions [43]. The latter result is somewhat at odds with our data: we found no variation in gaze bias across the four emotional expressions, and the dogs in this study showed the same strong bias towards faces with a positive emotional expression. However, the absence of a left gaze bias towards positive expressions in Racca et al. [43] may be due to their measuring gaze over an extended time period rather than at its initial onset, and to their methodological reliance on the analysis of video-based gaze directions. Our measurements are in line with studies on humans showing that the gaze bias towards emotive stimuli has a very early onset [44,82,83], and they are based on the much more precise and objective eye-tracking method.

In general, our data are more in line with the right hemisphere model [46], which suggests the regulation of emotional processes by the right hemisphere regardless of their valence, than with the valence model [48], which predicts a left gaze bias only towards negative emotions. This result corresponds to findings in a variety of species that show lateralization towards emotive stimuli regardless of valence (e.g. [51–55,59]). As 'neutral' human faces, i.e., faces showing no facial muscle contraction, may be perceived as negative (cold or threatening) by humans [84], the strong left gaze bias towards 'neutral' faces does not weaken our conclusion that the dog subjects perceived the human faces as emotional. Lesion studies on human subjects have shown that if processing in the right hemisphere is disturbed, subjects are unable to recognize emotions [85,86]. Given the significant left gaze bias in this study and dogs' ability to discriminate human facial expressions [69], it is therefore likely that dogs are indeed able to recognize emotional expressions in humans. However, further studies are necessary to clarify the nature of this recognition, i.e., which associations dogs have with different human emotions and which consequences they anticipate following their perception.

Acknowledgments

We are grateful to Ester Müller, Simone Grohmann and Denise Ocampo for their help with the experiments, Peter Füreder and Wolfgang Berger for technical help, and Karin Bayer for administrative support. We also thank the dog owners and dogs for participating in this study, and the Clinical Units of Internal Medicine Small Mammals and Obstetrics, Gynaecology and Andrology for permission to use their dogs.

Author Contributions

Conceived and designed the experiments: ALAB LH. Performed the experiments: ALAB DR. Analyzed the data: ALAB CAM. Contributed reagents/materials/analysis tools: LH. Wrote the paper: ALAB CAM LH.

References

  1. 1. Huber L. Categories and Concepts: Language-Related Competences Non-Linguistic Species. Encyclopedia of Animal Behaviour. Elsevier; 2010. pp. 261–266. https://doi.org/10.1016/B978-0-08-045337-8.00096-6
  2. 2. Leopold DA, Rhodes G. A comparative view of face perception. J Comp Psychol. 2010;124: 233–251. pmid:20695655
  3. 3. Bruce V, Young A. In the eye of the beholder: The science of face perception. 1st ed. Oxford University Press; 1998.
  4. 4. Peterson MA, Rhodes G. Perception of faces, objects, and scenes: Analytic and holistic processes [Internet]. Advances in visual cognition. 2003.
  5. 5. Tsao DY, Livingstone MS. Mechanisms of face perception. Annu Rev Neurosci. 2008;31: 411–437. pmid:18558862
  6. 6. Tomkins SS, McCarter R. What and where are the primary affects? Some evidence for a theory. Perceptual and motor skills. 1964. pp. 119–58.
  7. 7. Ekman P, Friesen WV, Ellsworth P. Emotion in the human face [Internet]. Elsevier; 1972. https://doi.org/10.1016/B978-0-08-016643-8.50001-X
  8. 8. Parr LA, de Waal FB. Visual kin recognition in chimpanzees. Nature. 1999;399: 647–8. pmid:10385114
  9. 9. Parr LA, Winslow JT, Hopkins WD, de Waal FB. Recognizing facial cues: individual discrimination by chimpanzees (Pan troglodytes) and rhesus monkeys (Macaca mulatta). J Comp Psychol. 2000;114: 47–60. pmid:10739311
  10. 10. Marechal L, Genty E, Roeder JJ. Recognition of faces of known individuals in two lemur species (Eulemur fulvus and E. macaco). Anim Behav. 2010;79: 1157–1163.
  11. 11. Kanwisher N, Yovel G. The fusiform face area: a cortical region specialized for the perception of faces. Philos Trans R Soc Lond B Biol Sci. 2006;361: 2109–28. pmid:17118927
  12. 12. Pascalis O, de Haan M, Nelson CA. Is face processing species-specific during the first year of life? Science. 2002;296: 1321–1323. pmid:12016317
  13. 13. Gothard KM, Brooks KN, Peterson MA. Multiple perceptual strategies used by macaque monkeys for face recognition. Anim Cogn. 2009;12: 155–167. pmid:18787848
  14. 14. Kanazawa S. Recognition of facial expressions in a Japanese monkey (Macaca fuscata) and humans (Homo sapiens). Primates. 1996;37: 25–38.
  15. 15. Tate AJ, Fischer H, Leigh AE, Kendrick KM. Behavioural and neurophysiological evidence for face identity and face emotion processing in animals. Philos Trans R Soc Lond B Biol Sci. 2006;361: 2155–2172. pmid:17118930
  16. 16. Proops L, McComb K, Reby D. Cross-modal individual recognition in domestic horses (Equus caballus). Proc Natl Acad Sci U S A. 2009;106: 947–951. pmid:19075246
  17. 17. Kendrick KM, Atkins K, Hinton MR, Heavens P, Keverne B. Are faces special for sheep? Evidence from facial and object discrimination learning tests showing effects of inversion and social familiarity. Behav Processes. 1996;38: 19–35. pmid:24897627
  18. 18. Althoff RR, Cohen NJ. Eye-movement-based memory effect: a reprocessing effect in face perception. J Exp Psychol Learn Mem Cogn. 1999;25: 997–1010. pmid:10439505
  19. 19. Barton JJS, Radcliffe N, Cherkasova MV, Edelman J, Intriligator JM. Information processing during face recognition: The effects of familiarity, inversion, and morphing on scanning fixations. Perception. Pion Ltd; 2006;35: 1089–1105.
  20. 20. Ryan CM, Lea SE. Images of conspecifics as categories to be discriminated by pigeons and chickens: Slides, video tapes, stuffed birds and live birds. Behav Processes. 1994;33: 155–75. pmid:24925244
  21. 21. Dufour V, Pascalis O, Petit O. Face processing limitation to own species in primates: A comparative study in brown capuchins, Tonkean macaques and humans. Behav Processes. 2006;73: 107–113. pmid:16690230
  22. 22. Diamond R, Carey S. Why faces are and are not special: an effect of expertise. J Exp Psychol Gen. 1986;115: 107–117. pmid:2940312
  23. 23. Lee WY, Lee S, Choe JC, Jablonski PG. Wild birds recognize individual humans: experiments on magpies, Pica pica. Anim Cogn. 2011;14: 817–25. pmid:21614521
  24. 24. Stephan C, Wilkinson A, Huber L. Have we met before? Pigeons recognise familiar human faces. Avian Biol Res. 2012;5: 75–80.
  25. 25. Rosa Salva O, Mayer U, Vallortigara G. Roots of a social brain: Developmental models of emerging animacy-detection mechanisms. Neuroscience and Biobehavioral Reviews. 2015. pp. 150–168. pmid:25544151
  26. 26. Rosa Salva O, Farroni T, Regolin L, Vallortigara G, Johnson MH. The evolution of social orienting: Evidence from chicks (gallus gallus) and human newborns. PLoS One. 2011;6.
  27. Parr LA, Dove T, Hopkins WD. Why Faces May Be Special: Evidence of the Inversion Effect in Chimpanzees. J Cogn Neurosci. 1998;10: 615–622. pmid:9802994
  28. Parr LA, Winslow JT, Hopkins WD. Is the inversion effect in rhesus monkeys face-specific? Anim Cogn. 1999;2: 123–129.
  29. Bradshaw J. Dog Sense: How the New Science of Dog Behavior Can Make You A Better Friend to Your Pet. New York: Basic Books; 2014. Available: https://books.google.com/books?id=FKAVBQAAQBAJ&pgis=1
  30. Miklósi Á. Dog Behaviour, Evolution, and Cognition. Oxford University Press; 2014. Available: https://books.google.com/books?id=VT-WBQAAQBAJ&pgis=1
  31. Horowitz A. Domestic Dog Cognition and Behavior: The Scientific Study of Canis familiaris. Berlin, Heidelberg: Springer Science & Business Media; 2014. https://doi.org/10.1007/978-3-642-53994-7
  32. Kaminski J, Marshall-Pescini S. The Social Dog: Behavior and Cognition. Elsevier; 2014. Available: https://books.google.com/books?id=THRAAwAAQBAJ&pgis=1
  33. Virányi Z, Gácsi M, Kubinyi E, Topál J, Belényi B, Ujfalussy D, et al. Comprehension of human pointing gestures in young human-reared wolves (Canis lupus) and dogs (Canis familiaris). Anim Cogn. 2008;11: 373–87. pmid:18183437
  34. Kirchhofer KC, Zimmermann F, Kaminski J, Tomasello M. Dogs (Canis familiaris), but not chimpanzees (Pan troglodytes), understand imperative pointing. PLoS One. 2012;7: e30913. pmid:22347411
  35. Topál J, Kis A, Oláh K. Dogs’ Sensitivity to Human Ostensive Cues: A Unique Adaptation? The Social Dog. 2014. pp. 1–28.
  36. Gácsi M, Miklósi Á, Varga O, Topál J, Csányi V. Are readers of our face readers of our minds? Dogs (Canis familiaris) show situation-dependent recognition of human’s attention. Anim Cogn. 2004;7: 144–153. pmid:14669075
  37. Gácsi M, Gyori B, Miklósi Á, Virányi Z, Kubinyi E, Topál J, et al. Species-specific differences and similarities in the behavior of hand-raised dog and wolf pups in social situations with humans. Dev Psychobiol. 2005;47: 111–122. pmid:16136572
  38. Miklósi Á, Kubinyi E, Topál J, Gácsi M, Virányi Z, Csányi V. A simple reason for a big difference: Wolves do not look back at humans, but dogs do. Curr Biol. 2003;13: 763–766. pmid:12725735
  39. Udell MAR, Wynne CDL. Reevaluating canine perspective-taking behavior. Learn Behav. 2011;39: 318–323. pmid:21870213
  40. Nitzschner M, Melis AP, Kaminski J, Tomasello M. Dogs (Canis familiaris) evaluate humans on the basis of direct experiences only. PLoS One. 2012;7: e46880. pmid:23056507
  41. Bräuer J. What dogs understand about humans. The Social Dog. Elsevier; 2014.
  42. Dilks DD, Cook P, Weiller SK, Berns HP, Spivak M, Berns GS. Awake fMRI reveals a specialized region in dog temporal cortex for face processing. PeerJ. 2015;3: e1115. pmid:26290784
  43. Racca A, Guo K, Meints K, Mills DS. Reading faces: Differential lateral gaze bias in processing canine and human facial expressions in dogs and 4-year-old children. PLoS One. 2012;7: 1–10.
  44. Guo K, Meints K, Hall C, Hall S, Mills DS. Left gaze bias in humans, rhesus monkeys and domestic dogs. Anim Cogn. 2009;12: 409–418. pmid:18925420
  45. Suberi M, McKeever WF. Differential right hemispheric memory storage of emotional and non-emotional faces. Neuropsychologia. 1977;15: 757–768. pmid:600371
  46. Borod JC, Koff E, Caron HS. Cognitive Processing in the Right Hemisphere. Elsevier; 1983.
  47. Alves NT, Fukusima SS, Aznar-Casanova JA. Models of brain asymmetry in emotional processing. Psychol Neurosci. 2008;1: 63–66.
  48. Ehrlichman H. Hemispheric Asymmetry and Positive-Negative Affect. Duality and Unity of the Brain: Unified Functioning and Specialisation of the Hemispheres. Springer US; 1987. pp. 194–206. https://doi.org/10.1007/978-1-4613-1949-8_13
  49. Watling D, Workman L, Bourne VJ. Emotion lateralisation: Developments throughout the lifespan. Laterality. 2012;17: 389–411.
  50. Bourne VJ. Chimeric faces, visual field bias, and reaction time bias: have we been missing a trick? Laterality. 2008;13: 92–103. pmid:18050003
  51. Parr LA, Hopkins WD. Brain temperature asymmetries and emotional perception in chimpanzees, Pan troglodytes. Physiol Behav. 2000;71: 363–371. pmid:11150569
  52. Kalin NH, Larson C, Shelton SE, Davidson RJ. Asymmetric frontal brain activity, cortisol, and behavior associated with fearful temperament in rhesus monkeys. Behav Neurosci. 1998;112: 286–292. pmid:9588478
  53. Kendrick KM. Brain asymmetries for face recognition and emotion control in sheep. Cortex. 2006;42: 96–98. pmid:16509115
  54. Thieltges H, Lemasson A, Kuczaj S, Böye M, Blois-Heulin C. Visual laterality in dolphins when looking at (un)familiar humans. Anim Cogn. 2011;14: 303–308. pmid:21140186
  55. Siniscalchi M, Sasso R, Pepe AM, Vallortigara G, Quaranta A. Dogs turn left to emotional stimuli. Behav Brain Res. 2010;208: 516–521.
  56. Siniscalchi M, Quaranta A, Rogers LJ. Hemispheric specialization in dogs for processing different acoustic stimuli. PLoS One. 2008;3: 1–7.
  57. Quaranta A, Siniscalchi M, Vallortigara G. Asymmetric tail-wagging responses by dogs to different emotive stimuli. Curr Biol. 2007. pp. 199–201.
  58. Adolphs R, Jansari A, Tranel D. Hemispheric perception of emotional valence from facial expressions. Neuropsychology. 2001;15: 516–524. pmid:11761041
  59. Rosa Salva O, Regolin L, Mascalzoni E, Vallortigara G. Cerebral and Behavioural Asymmetries in Animal Social Recognition. Comp Cogn Behav Rev. 2012;7: 110–138.
  60. Holmqvist K, Nyström M, Andersson R, Dewhurst R, Jarodzka H, van de Weijer J. Eye Tracking: A comprehensive guide to methods and measures. OUP Oxford; 2011. Available: https://books.google.com/books?id=5rIDPV1EoLUC&pgis=1
  61. Liversedge S, Gilchrist I, Everling S. The Oxford Handbook of Eye Movements. OUP Oxford; 2011. Available: https://books.google.com/books?id=GctzmDb3xmMC&pgis=1
  62. Téglás E, Gergely A, Kupán K, Miklósi Á, Topál J. Dogs’ gaze following is tuned to human communicative signals. Curr Biol. 2012;22: 209–212. pmid:22226744
  63. Somppi S, Törnqvist H, Hänninen L, Krause C, Vainio O. Dogs do look at images: Eye tracking in canine cognition research. Anim Cogn. 2012;15: 163–174. pmid:21861109
  64. Somppi S, Törnqvist H, Hänninen L, Krause CM, Vainio O. How dogs scan familiar and inverted faces: an eye movement study. Anim Cogn. 2013;17: 793–803. pmid:24305996
  65. Adachi I, Kuwahata H, Fujita K. Dogs recall their owner’s face upon hearing the owner’s voice. Anim Cogn. 2007;10: 17–21. pmid:16802145
  66. Racca A, Amadei E, Ligout S, Guo K, Meints K, Mills DS. Discrimination of human and dog faces and inversion responses in domestic dogs (Canis familiaris). Anim Cogn. 2010;13: 525–533. pmid:20020168
  67. Huber L, Racca A, Yoon J, Viranyi Z, Range F. The perception of human faces by dogs: Perceptual and cognitive adaptations. J Vet Behav Clin Appl Res. 2014;9: e7–e8.
  68. Nagasawa M, Murai K, Mogi K, Kikusui T. Dogs can discriminate human smiling faces from blank expressions. Anim Cogn. 2011;14: 525–533. pmid:21359654
  69. Müller CA, Schmitt K, Barber ALA, Huber L. Dogs Can Discriminate Emotional Expressions of Human Faces. Curr Biol. 2015.
  70. R Development Core Team. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing; 2014.
  71. Fournier DA, Skaug HJ, Ancheta J, Ianelli J, Magnusson A, Maunder MN, et al. AD Model Builder: using automatic differentiation for statistical inference of highly parameterized complex nonlinear models. Optim Methods Softw. 2012;27: 233–249.
  72. Bates D, Maechler M, Bolker BM, Walker S. lme4: Linear mixed-effects models using Eigen and S4. Journal of Statistical Software. 2014. p. xx. http://lme4.r-forge.r-project.org/
  73. Le Grand R, Mondloch CJ, Maurer D, Brent HP. Neuroperception: Early visual experience and face processing. Nature. 2001. pp. 890–890. pmid:11309606
  74. Gliga T, Csibra G. Seeing the face through the eyes: a developmental perspective on face expertise. Prog Brain Res. 2007;164: 323–339. pmid:17920440
  75. Le Grand R, Mondloch CJ, Maurer D. Early Visual Experience Is Necessary For The Development Of Some -But Not All- Aspects Of Face Processing. Dev Face Process Infancy Early Child. 2003; 99–117.
  76. Wobber V, Hare B, Koler-Matznick J, Wrangham R, Tomasello M. Breed differences in domestic dogs’ (Canis familiaris) comprehension of human communicative signals. Interact Stud. 2009;10: 206–224.
  77. Peterson MF, Eckstein MP. Looking just below the eyes is optimal across face recognition tasks. Proc Natl Acad Sci U S A. 2012;109: E3314–23. pmid:23150543
  78. Kano F, Tomonaga M. Face scanning in chimpanzees and humans: continuity and discontinuity. Anim Behav. 2010;79: 227–235.
  79. de Wit TCJ, Falck-Ytter T, von Hofsten C. Young children with Autism Spectrum Disorder look differently at positive versus negative emotional faces. Res Autism Spectr Disord. 2008;2: 651–659.
  80. Ekman P, Friesen WV. Unmasking the face: A guide to recognizing emotions from facial clues. Cambridge: Malor Books; 1975. Available: http://psycnet.apa.org/psycinfo/1975-31746-000
  81. Hunnius S, de Wit TCJ, Vrins S, von Hofsten C. Facing threat: infants’ and adults’ visual scanning of faces with neutral, happy, sad, angry, and fearful emotional expressions. Cogn Emot. 2011;25: 193–205. pmid:21432667
  82. Butler S, Gilchrist ID, Burt DM, Perrett DI, Jones E, Harvey M. Are the perceptual biases found in chimeric face processing reflected in eye-movement patterns? Neuropsychologia. 2005;43: 52–9. pmid:15488905
  83. Phillips ML, David AS. Viewing strategies for simple and chimeric faces: an investigation of perceptual bias in normals and schizophrenic patients using visual scan paths. Brain Cogn. 1997;35: 225–38. pmid:9356163
  84. Lee E, Kang JI, Park IH, Kim J-J, An SK. Is a neutral face really evaluated as being emotionally neutral? Psychiatry Res. 2008;157: 77–85. pmid:17804083
  85. DeKosky ST, Heilman KM, Bowers D, Valenstein E. Recognition and discrimination of emotional faces and pictures. Brain Lang. 1980;9: 206–214. pmid:7363065
  86. Fried I, MacDonald KA, Wilson CL. Single neuron activity in human hippocampus and amygdala during recognition of faces and objects. Neuron. 1997;18: 753–765. pmid:9182800