
Brain Response to a Humanoid Robot in Areas Implicated in the Perception of Human Emotional Gestures

  • Thierry Chaminade ,

    tchamina@gmail.com

    Affiliations Wellcome Trust Centre for Neuroimaging, University College London, London, United Kingdom, Mediterranean Institute for Cognitive Neuroscience (INCM), Aix-Marseille University – CNRS, Marseille, France

  • Massimiliano Zecca,

    Affiliations Institute for Biomedical Engineering, Consolidated Research Institute for Advanced Science and Medical Care (ASMeW), Waseda University, Tokyo, Japan, Humanoid Robotics Institute (HRI), Waseda University, Tokyo, Japan, Italy-Japan Joint Laboratory on Humanoid and Personal Robotics “RoboCasa”, Tokyo, Japan

  • Sarah-Jayne Blakemore,

    Affiliation University College London Institute of Cognitive Neuroscience, University College London, London, United Kingdom

  • Atsuo Takanishi,

    Affiliations Institute for Biomedical Engineering, Consolidated Research Institute for Advanced Science and Medical Care (ASMeW), Waseda University, Tokyo, Japan, Humanoid Robotics Institute (HRI), Waseda University, Tokyo, Japan, Italy-Japan Joint Laboratory on Humanoid and Personal Robotics “RoboCasa”, Tokyo, Japan, Department of Modern Mechanical Engineering, Waseda University, Tokyo, Japan

  • Chris D. Frith,

    Affiliations Wellcome Trust Centre for Neuroimaging, University College London, London, United Kingdom, Center of Functionally Integrative Neuroscience (CFIN), Aarhus University Hospital, Århus, Denmark

  • Silvestro Micera,

    Affiliations Italy-Japan Joint Laboratory on Humanoid and Personal Robotics “RoboCasa”, Tokyo, Japan, Advanced Robotics Technology and Systems Laboratory (ARTS Lab), Scuola Superiore Sant'Anna, Pisa, Italy, Neuroprosthesis Control Group, Institute for Automation, Swiss Federal Institute of Technology Zurich (ETHZ), Zurich, Switzerland

  • Paolo Dario,

    Affiliations Italy-Japan Joint Laboratory on Humanoid and Personal Robotics “RoboCasa”, Tokyo, Japan, Advanced Robotics Technology and Systems Laboratory (ARTS Lab), Scuola Superiore Sant'Anna, Pisa, Italy

  • Giacomo Rizzolatti,

    Affiliations Dipartimento di Neuroscienze, Sezione di Fisiologia, Università di Parma, Parma, Italy, Italian Institute of Technology (IIT), Brain Center for Social and Motor Cognition, Parma, Italy

  • Vittorio Gallese,

    Affiliations Dipartimento di Neuroscienze, Sezione di Fisiologia, Università di Parma, Parma, Italy, Italian Institute of Technology (IIT), Brain Center for Social and Motor Cognition, Parma, Italy

  • Maria Alessandra Umiltà

    Affiliations Dipartimento di Neuroscienze, Sezione di Fisiologia, Università di Parma, Parma, Italy, Italian Institute of Technology (IIT), Brain Center for Social and Motor Cognition, Parma, Italy

Abstract

Background

The humanoid robot WE-4RII was designed to express human emotions in order to improve human-robot interaction. We can read the emotions depicted in its gestures, yet we might rely on different neural processes than those used when reading emotions expressed by human agents.

Methodology

Here, fMRI was used to assess how brain areas activated by the perception of basic human emotions (facial expressions of Anger, Joy and Disgust) and silent speech respond to a humanoid robot impersonating the same gestures, while participants were instructed to attend either to the emotion or to the motion depicted.

Principal Findings

Increased responses to robot compared to human stimuli in the occipital and posterior temporal cortices suggest additional visual processing when perceiving a mechanical anthropomorphic agent. In contrast, activity in cortical areas endowed with mirror properties, like the left Broca's area for the perception of speech, and in areas involved in the processing of emotions, like the left anterior insula for the perception of disgust and the orbitofrontal cortex for the perception of anger, is reduced for robot stimuli, suggesting weaker resonance with the mechanical agent. Finally, instructions to explicitly attend to the emotion significantly increased the response to robot, but not human, facial expressions in the anterior part of the left inferior frontal gyrus, a neural marker of motor resonance.

Conclusions

Motor resonance towards a humanoid robot's, but not a human's, display of facial emotion is increased when attention is directed towards judging emotions.

Significance

Artificial agents can be used to assess how factors like anthropomorphism affect neural response to the perception of human actions.

Introduction

Most industrialized countries are aging fast due to increased life expectancy and declining birth rates [1]. In this aging society, a growing need for home, medical and nursing care services is expected [2]. For this purpose, robots, and in particular robots whose appearance is based on the human body, are expected to perform human tasks such as providing personal assistance, social care for the elderly or cognitive therapy [3], and to be used in entertainment and education. Just as the computer business has become an integral part of our daily life over the last 30 years, robotic technology is expected to follow a similar development in the near future [4].

These prospects raise issues related to natural social interactions with these artificial agents. To become part of our everyday environment, personal robots need to be capable of smooth and natural interactions with humans. It has been proposed [5] that consumer-product humanoids should be designed to balance human-ness (to facilitate social interaction) and robot-ness (to avoid false expectations about the robots' abilities). Several robots have already been developed to investigate the socio-emotional aspects of human-robot interactions: animaloid robots like the therapeutic robot PARO [6] and SONY AIBO [7] elicit emotional attachment; humanoid robots like Honda ASIMO [8] and Kawada HRP-2 [9] cooperate with humans; android robots like Actroid [10] and Geminoid [11] explore face-to-face interactions.

The humanoid robot WE-4RII (Waseda Eye No.4 Refined II) was designed to express human-like emotions [12] in order to improve the social competence of human-robot interactions [13]. The current study was designed to assess how the neural substrates involved in the perception of human emotions respond to the same gestures impersonated by this anthropomorphic yet clearly mechanical robot, in an endeavour to describe how the agent's appearance modulates brain responses to the perception of emotional facial actions. This research is theoretically grounded in the hypothesis that resonance is pivotal in natural human social interactions [14], [15], [16]. Resonance describes the mechanism by which the neural substrates involved in the internal representation of actions, as well as emotions and sensations, are also recruited when perceiving another individual experiencing the same action, emotion or sensation. While this hypothesis can be traced back as far as William James [17], interest in it has been renewed by the discovery of ‘mirror neurons’ in the ventral premotor cortex of the macaque monkey [18], [19]. Mirror neurons fire both when monkeys perform a goal-directed action and when they perceive (see or hear) or infer the same action performed by an experimenter [18], [20]. Neuroimaging studies have identified brain regions, in premotor and parietal cortices [21], [22], [23], in which action execution and observation overlap in the human brain (for review see [24]). The ventral premotor cortex, in particular, constitutes a major locus of motor resonance in humans [24]. Furthermore, the somatosensory cortex responds to the observation and feeling of touch [25], [26], [27], and the insula responds to the observation and feeling of disgust [28]. These examples support a generalization of resonance to multiple domains of cognition including emotions [29], [30].

Artificial agents such as the humanoid robot used in this experiment can contribute to a better understanding of the factors affecting this resonance, and in particular of the role of anthropomorphism. Neuroimaging experiments comparing the observation of humans to artificial agents have yielded mixed results in the inferior premotor and posterior parietal regions of the human motor resonance mechanism. In a PET study, the left ventral premotor activity found in previous experiments of action observation responded to human, but not robot, actions [31]. However, a more recent fMRI study indicated that motor resonance is elicited by a robotic arm and hand [32]. While activity in a neural marker of motor resonance was not significantly related to the anthropomorphism of computer-animated avatars, it decreased with the bias to perceive their actions as biological [33], raising questions about the interaction between perceptual processes related to anthropomorphism and the subjective perception of artificial agents' actions as natural. To address this question, we investigated whether facial emotions expressed by a humanoid robot activate brain regions involved in the perception of human emotions, in particular those engaged in motor and emotional resonance. We used the humanoid robot WE-4RII (Waseda Eye No.4 Refined II), developed by the Takanishi Laboratory at Waseda University, to express emotions using facial expressions and movements of the upper half of the body, including the neck, shoulders, trunk, waist, arms and hands [12], [34]. Short videos of the humanoid robot and human actors expressing three emotions (Joy, Anger, Disgust) and silent speech were presented to participants, who were asked to rate either the emotional content or the motion, in order to orient their attention either explicitly to the mental state conveyed by the gesture, or to a purely visual feature, thus privileging an implicit processing of the intentional gesture. On the basis of the mechanical appearance of the anthropomorphic robot, we hypothesized reduced activity in brain regions involved in motor (ventral premotor cortex and inferior frontal gyrus) and emotional (in particular amygdala and insula) resonance during the observation of the robotic agent compared with the observation of a human agent.

Methods

Participants

Thirteen right-handed participants (4 males; aged 29.4±7 years) with no history of neurological disorder and normal or corrected-to-normal vision gave written informed consent to take part in this experiment. The study was approved by the National Hospital for Neurology and Neurosurgery and UCL Institute of Neurology joint Ethics Committee.

Stimuli

The humanoid robot used in this experiment, WE-4RII, has 59 degrees of freedom (DOFs), 26 of which are specifically used for controlling facial expression (eyebrows: 8; eyelids: 6; eyes: 3; lips: 4; jaw: 1; neck: 4). A subset of the facial Action Units (AUs, described in [35]) was chosen for a simplified but realistic impersonation of the facial gestures used in the experiment (eyebrows: AU 1, 2, 4; eyelids: AU 7, 42, 43; eyes: AU 5, 6, 43, 44, 45, 46; mouth: AU 15, 17, 20, 25, 27; lips: AU 12, 15, 16, 20, 23) [12]. The shoulders have 3 DOFs, plus 2 additional DOFs used for squaring or shrugging gestures. Both the posture and the motion velocity are controlled to realize an effective execution of each gesture.
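
For illustration only, the face configuration listed above can be summarized as a small lookup table. This is a hypothetical sketch in Python, not the robot's actual control software; the grouping simply follows the facial regions given in the text:

    # Hypothetical summary of the WE-4RII face configuration described above;
    # purely illustrative, not the robot's actual control code.
    FACE_DOFS = {"eyebrows": 8, "eyelids": 6, "eyes": 3, "lips": 4, "jaw": 1, "neck": 4}

    ACTION_UNITS = {
        "eyebrows": [1, 2, 4],
        "eyelids": [7, 42, 43],
        "eyes": [5, 6, 43, 44, 45, 46],
        "mouth": [15, 17, 20, 25, 27],
        "lips": [12, 15, 16, 20, 23],
    }

    # 26 of the robot's 59 DOFs are dedicated to facial expression
    assert sum(FACE_DOFS.values()) == 26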

Stimuli consisted of 1.5-second greyscale video clips (38 frames at 25 frames per second) showing the agent's face and upper body, starting from a neutral pose and depicting one of the following gestures: expression of Joy, of Anger, of Disgust, and silent Speech. Two different actors were recorded for the human stimuli, while two versions of the humanoid robot were obtained by the addition of a wig; four different clips of each type of stimulus were prepared, leading to a total of 64 different stimuli (4 gestures, 2 agents, 2 versions of each agent, 4 clips of each type of stimulus). The greyscale was digitally modified to match the background luminosity and the overall contrast between the human and robot stimuli (see Figure 1, top). Great care was taken to match the dynamics of the human and robot stimuli pairwise (see Video S1).

Figure 1. Experimental paradigm.

Top: single frame from a Human (left) and Robot (right) Joy stimulus. Middle: organization of an fMRI recording session, showing, first, the randomization of the order of the rating blocks (Emotion and Movement) within an acquisition run, then the organization of a block starting with a reminder of the instruction (Instr.), and finally the presentation of one stimulus followed by the response screen. Bottom: response screen used in the emotion task (wording for the motion task in parentheses).

https://doi.org/10.1371/journal.pone.0011577.g001

Experimental paradigm

There was a total of 16 experimental conditions: for each of the eight types of stimuli defined by four gestures (Joy, Anger, Disgust and Speech) impersonated by two agents (Human, Robot), participants were asked, after each stimulus, to rate either the emotional content (“How much EMOTION did the face show?”) or the amount of motion in the stimulus (“How much MOVEMENT did the face show?”).

Participants underwent four sessions of fMRI scanning. Each session contained eight blocks, four in which emotion was rated and four in which motion was rated, presented in a fully randomized order. Participants were informed of the property they had to rate by a one-word description presented for 1.5 seconds at the onset of each block (“EMOTION” or “MOVEMENT”, see Figure 1). Eight stimuli were presented in each block in a pseudorandomized order, so that each stimulus was seen once in each session and twice for each rating over the course of the experiment. Inter-stimulus onsets were jittered based on a normal distribution with a mean of 4.5 seconds (SD 0.5). After each stimulus, the participant's rating was recorded using an analogue scale that ranged from “None” to the target emotion (e.g. “Anger”) when rating emotion, and from “None” to “A lot” when rating motion. The direction of the scale was assigned randomly, and at the onset the response bar was located close to the centre of the scale; participants pressed a left or right key on their keypad to move the response bar towards the left or the right respectively, and released the key when the response bar reached the desired rating. These characteristics were selected to avoid motor preparation of the response prior to the appearance of the response screen. The duration of the response screen was 1.5 seconds. Prior to scanning, participants were trained with a limited subset of stimuli (3 blocks of 3 stimuli) outside the scanner to become acquainted with the response procedure. Presentation of stimuli and recording of participants' responses were carried out using Cogent (http://www.vislab.ucl.ac.uk/CogentGraphics/index.html) running in Matlab 6.5 (MathWorks™), and ratings were analyzed using the statistical program SPSS (SPSS Inc.).
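
The block randomization and onset jitter described above can be sketched as follows. This is an illustrative reconstruction in Python (the experiment itself used Cogent under Matlab); the function and variable names are hypothetical, and the sketch omits the constraint balancing stimuli across sessions and ratings:

    import numpy as np

    rng = np.random.default_rng(0)

    N_BLOCKS_PER_TASK = 4        # per session and per rating task
    STIMULI_PER_BLOCK = 8
    ISI_MEAN, ISI_SD = 4.5, 0.5  # inter-stimulus onset jitter, in seconds

    def make_session_schedule():
        # Fully randomized order of the eight rating blocks in one session
        tasks = ["EMOTION"] * N_BLOCKS_PER_TASK + ["MOVEMENT"] * N_BLOCKS_PER_TASK
        rng.shuffle(tasks)
        schedule = []
        for task in tasks:
            # Jittered inter-stimulus onsets drawn from N(4.5 s, 0.5 s)
            isis = rng.normal(ISI_MEAN, ISI_SD, size=STIMULI_PER_BLOCK)
            schedule.append((task, np.cumsum(isis)))
        return schedule

    print(make_session_schedule()[0])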

fMRI data acquisition

Scanning was performed using a 1.5T Siemens Sonata MRI scanner. High-resolution anatomical images were acquired using a T1-weighted 3D MPRAGE sequence. In each of the four experimental sessions, a T2*-weighted gradient-echo echo-planar imaging (EPI) sequence was used to acquire 116 volumes containing 48 slices (2 mm thickness with a 1 mm gap) covering the whole brain and cerebellum, with an in-plane resolution of 3×3 mm (64×64 matrix, field of view 192×192×144 mm). The sequence was optimized for blood-oxygen-level-dependent signal sensitivity in the ventral cortical areas (orbitofrontal, inferotemporal and amygdala regions) by the use of a tilt angle of −30 degrees and negative phase encoding [36]. The first 4 volumes of each time series were discarded prior to the analysis to allow for T1 equilibration. Field maps were also acquired to correct for geometric distortions in EPI images caused by magnetic field inhomogeneities [37].

fMRI data analysis

fMRI data were analyzed using SPM5 (http://www.fil.ion.ucl.ac.uk/spm), running in Matlab 6.5 (MathWorks™). Slice-timing correction was applied to correct for offsets in slice acquisition. EPI volumes were realigned to the first volume for each subject to correct for interscan movement, and unwarped for static magnetic field inhomogeneities using field maps [37] and for movement-induced inhomogeneities using the realignment parameters [38]. The high-resolution structural image was co-registered with the mean image of the EPI series, and stereotactically normalised to the Montreal Neurological Institute (MNI) template using sinc interpolation. The normalisation parameters were applied to the EPI time series, achieving an anatomically informed normalisation. EPI volumes were finally smoothed using an 8 mm isotropic Gaussian kernel to account for residual inter-subject differences in functional anatomy [39].
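
The preprocessing itself was performed in SPM5. As a standalone illustration of the final step only, the sketch below (Python, using numpy and scipy, with assumed voxel dimensions) shows how an 8 mm FWHM Gaussian kernel translates into the standard-deviation parameter of a smoothing filter:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def smooth_volume(volume, fwhm_mm=8.0, voxel_size_mm=(3.0, 3.0, 3.0)):
        # Convert FWHM (mm) to sigma (voxels): FWHM = sigma * sqrt(8 * ln 2)
        sigma_mm = fwhm_mm / np.sqrt(8.0 * np.log(2.0))
        sigma_vox = [sigma_mm / s for s in voxel_size_mm]
        return gaussian_filter(volume, sigma=sigma_vox)

    # Toy example on a random volume matching the 64x64x48 EPI matrix
    smoothed = smooth_volume(np.random.rand(64, 64, 48))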

The analysis of the functional imaging data entailed the creation of statistical parametric maps representing a statistical assessment of hypothesized condition-specific effects [40]. A random effects procedure was adopted for data analysis. The 1.5-second response periods, and, separately for each of the 16 experimental conditions, the 1.5-second stimulus periods, were modelled at the subject level. These condition-specific effects were estimated with the General Linear Model, with each condition being defined with a boxcar function convolved with the canonical hemodynamic response function. Low-frequency sine and cosine waves modelled and removed subject-specific low-frequency drifts in signal, and global changes in activity were removed by proportional scaling. Each component of the model served as a regressor in a multiple regression analysis.
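
The General Linear Model estimation was done in SPM5. For readers unfamiliar with the approach, the sketch below illustrates, in Python, how a single condition regressor is formed: a boxcar over the 1.5-second stimulus periods convolved with a canonical (double-gamma) haemodynamic response function and sampled at each volume. The TR, run length and HRF parameters used here are assumptions for illustration, not the study's actual values:

    import numpy as np
    from scipy.stats import gamma

    def canonical_hrf(dt=0.1, duration=32.0):
        # Double-gamma approximation of the canonical HRF (early peak, late undershoot)
        t = np.arange(0.0, duration, dt)
        hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0
        return hrf / hrf.sum()

    def condition_regressor(onsets, stim_dur=1.5, tr=3.0, n_vols=112, dt=0.1):
        # Boxcar over the stimulus periods, convolved with the HRF, sampled at each TR
        t = np.arange(0.0, n_vols * tr, dt)
        boxcar = np.zeros_like(t)
        for onset in onsets:
            boxcar[(t >= onset) & (t < onset + stim_dur)] = 1.0
        convolved = np.convolve(boxcar, canonical_hrf(dt))[: len(t)]
        return convolved[(np.arange(n_vols) * tr / dt).astype(int)]

    regressor = condition_regressor(onsets=[10.0, 14.2, 18.9, 23.5])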

The brain response to the human stimuli irrespective of the gesture was investigated by contrasting human stimuli presentation, across the four gestures and the two ratings, against the global mean. The resulting statistical maps were entered in a second-level one-sample t-test. Similarly, brain response to the human stimuli for each gesture was investigated by contrasting human stimuli presentation, for each gesture and across the two ratings, against the global mean, and entering these contrasts in four second-level one-sample t-tests. All contrasts were thresholded at p<0.05 FDR-corrected with an extent threshold of 20 voxels. Anatomical localization was performed using a brain atlas [41] and, when possible, statistical localization relied on probabilistic cytoarchitectonic maps [42]. Other functional attributions relied on comparisons with the literature.

To specifically address the scientific hypothesis, regions responding to the perception of human gestures were further explored to assess their response to robot gestures using a Region Of Interest (ROI) approach. The SPM extension toolbox MarsBar (http://marsbar.sourceforge.net/) was used to extract the percentage signal change in 5-mm radius spherical ROIs centred on the maximum of each cluster under investigation. Percent signal changes were further analyzed using ANOVAs and t-tests implemented in the statistical program SPSS (SPSS Inc.), with a significance threshold of 0.05. Regressions (reported at p<0.05) between percent signal change and emotional ratings of robot and human stimuli were assessed in brain areas responding specifically to single gestures.
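
Signal extraction was done with MarsBar; the following rough Python sketch shows the principle of a 5-mm spherical ROI and a percent-signal-change measure, assuming isotropic voxels and a simple within-ROI mean baseline rather than MarsBar's exact definitions:

    import numpy as np

    def sphere_roi_mask(shape, center_vox, radius_mm=5.0, voxel_size_mm=3.0):
        # Boolean mask of voxels within radius_mm of a centre given in voxel coordinates
        grid = np.indices(shape).astype(float)
        dist_mm = np.sqrt(sum((grid[i] - center_vox[i]) ** 2 for i in range(3))) * voxel_size_mm
        return dist_mm <= radius_mm

    def percent_signal_change(data_4d, mask):
        # Average the time series over the ROI and express it as % change from its mean
        roi_ts = data_4d[mask].mean(axis=0)
        return 100.0 * (roi_ts - roi_ts.mean()) / roi_ts.mean()

    # Toy example: random 4D data and an ROI centred at voxel (30, 20, 25)
    data = np.random.rand(64, 64, 48, 112)
    psc = percent_signal_change(data, sphere_roi_mask((64, 64, 48), (30, 20, 25)))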

Results

Behavioural data

It was shown in a separate experiment [12], and confirmed in preliminary tests with the stimuli used in the present experiment [43], that the robot depictions of the three emotions used in this experiment (Anger, Joy and Disgust) were correctly recognized above chance levels (all >75% correct recognition).

Repeated-measures ANOVA indicated a significant effect of Agent (F(1,12) = 16.1; p = 0.002) and Gesture (F(3,36) = 57.0; p<0.001) on the emotional ratings recorded during the fMRI experiment, as well as a significant interaction between the two factors (F(3,36) = 12.2; p<0.001). As expected given the lack of emotion in the gesture Speech, contrasts revealed significantly increased ratings for Joy, Disgust and Anger compared to Speech (p<0.001) irrespective of the agent (see Figure 2). Repeated-measures ANOVAs assessed the effect of Agent on participants' emotional ratings for each gesture separately. Their results indicated significantly higher ratings for human than for robot videos for Anger (F(1,12) = 31.0, p<0.001) and Disgust (F(1,12) = 7.8, p = 0.02, see Figure 2). Speech was rated as significantly more emotional (i.e. less neutral) for the Robot than for the Human videos (F(1,12) = 14.7, p = 0.003). Differences between ratings of Joy expressed by the Human and the Robot were not significant (F(1,12) = 1.4, p = 0.262).
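
These analyses were run in SPSS; an equivalent 2 (Agent) by 4 (Gesture) repeated-measures ANOVA could be set up in Python with statsmodels as below, here on simulated ratings purely to show the structure of the design:

    import numpy as np
    import pandas as pd
    from statsmodels.stats.anova import AnovaRM

    rng = np.random.default_rng(1)
    rows = [{"subject": s, "agent": a, "gesture": g, "rating": rng.uniform(0, 100)}
            for s in range(13)                     # 13 participants
            for a in ("Human", "Robot")
            for g in ("Joy", "Anger", "Disgust", "Speech")]
    df = pd.DataFrame(rows)

    # One rating per subject and condition; prints F and p values per effect
    print(AnovaRM(df, depvar="rating", subject="subject", within=["agent", "gesture"]).fit())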

Figure 2. Emotional ratings.

Mean (error bar: standard error of the mean SEM) of the percentage ratings of emotional intensity for the four types of gestures depicted by Human (plain color) and Robot (stripes) agents. Emotional ratings are significantly higher for the human in the case of Anger (***: p<0.001) and of Disgust (**: p<0.05) and for the robot in the Speech condition (**: p<0.05).

https://doi.org/10.1371/journal.pone.0011577.g002

fMRI data

Main effect of human stimulus presentation.

The main effect of watching human visual stimuli against the global mean, irrespective of the gesture and independent of the rating, yielded bilateral activity in occipital, temporal, parietal and frontal cortices (Table 1). A large cluster (#1, k = 4001 voxels) extended from extrastriate cortices to ventral and lateral temporal cortices bilaterally, and to the inferior parietal lobule in the right hemisphere. Extrastriate maxima were attributed to Brodmann areas 17 and 18 bilaterally, as well as to the right-hemisphere functional areas V3v, V4 [44] and V5 [45]. In the right temporal cortices, maxima were reported at the junction between the occipital and temporal lobes, a region responding to the perception of faces (MNI coordinates 42, −68, −6, compared to 43, −67, −9 in [46]) referred to as the lateral face area (LFA) hereafter (see also [47]), in the fusiform gyrus in the vicinity of the fusiform face area, or FFA (MNI coordinates 42, −62, −20 compared to 40, −56, −15 in [48]), and in the posterior superior temporal gyrus (MNI coordinates 58, −36, 10 compared to 50, −34, 4 in [49]). In the left hemisphere, clusters were found in V3v (#2), V4 [44] and V5 [50] (#3), as well as in the left-hemisphere FFA (MNI coordinates −34, −62, −18 compared to −35, −64, −16 in [48]), but not in the lateral temporal cortex.

Table 1. Main effect of the human stimuli presentation (p<0.05 FDR-corrected, extent k>20; clusters are ordered by cortical lobe, then by decreasing z coordinate).

https://doi.org/10.1371/journal.pone.0011577.t001

Signal changes extracted in 5-mm radius spheres centred on the maxima localized in V3v, V4, V5 and the FFA bilaterally, as well as in the LFA and STS in the right hemisphere, and collapsed across the 4 gestures, were submitted to 2 (Agent) by 2 (Rating) repeated-measures ANOVAs. As illustrated in Figure 3, there was a significant effect of Agent in all ROIs but the STS, corresponding to an increased response to Robot compared to Human agents (V3v and V4 bilaterally p<0.001; V5 and FFA bilaterally and right LFA p<0.05), with no significant effect of the object of Rating (all p>0.05) and no significant interaction between Agent and Rating. There were no significant effects of Agent or Rating, nor an interaction between Agent and Rating (all p>0.1), in the right STS.

Figure 3. Occipital cortices.

Top: Main effect of human stimuli presentation (FDR-corrected p<0.05, extent k>20) overlaid on a standard brain, seen from the back (middle), back-left (left) and back-right (right). Bottom: Bar graphs on the left give percent signal change (error bar: SEM) in response to the presentation of Human (plain colour) and Robot (stripes) stimuli, irrespective of the task and action depicted. Coloured arrows indicate the position of the maxima (see also Table 1) used to represent the functional areas (see text for details). Brackets indicate whether signal change significantly differs between human and robot stimuli (*** p<0.001, ** p<0.05, * p<0.1).

https://doi.org/10.1371/journal.pone.0011577.g003

Main effect of human stimulus presentation: frontal cortices.

Because of our a priori hypothesis on the role of the inferior frontal cortices in motor resonance, percent signal change was extracted in 5-mm radius spheres centred on the maxima of the activated clusters in the inferior frontal gyri, localized in three Brodmann areas (BA) according to the cytoarchitectonic probabilistic maps [51]: BA 6 in the right hemisphere, and bilateral BAs 44 and 45, located in the vicinity of clusters reported during the perception of a human face performing intransitive mouth gestures [21]. Signal extracted in these ROIs, collapsed across the 4 gestures, was submitted to 2 (Agent) by 2 (Rating) repeated-measures ANOVAs (Figure 4). There was no significant main effect or interaction (all p>0.5) affecting signal in the right BA 6. In the left BA 44, there was a significant interaction between Agent and Rating (p = 0.02), with no main effect of Agent (p = 0.4) or Rating (p = 0.8). Paired t-tests revealed that the response to the robot was not significantly affected by the Rating, while the response to human stimuli was significantly increased for the Movement compared to the Emotion rating (p = 0.04). A similar profile in the right BA 44 did not reach significance (all p>0.1).

Figure 4. Inferior frontal cortices.

Top: Main effect of human stimuli presentation (FDR-corrected p<0.05, extent k>20) overlaid on a standard brain, seen from front-left (left) and front-right (right), with cut-outs showing the bilateral inferior frontal gyri clusters investigated. Bottom: Bar graphs on the left give percent signal change (error bar: SEM) in response to the presentation of Human (plain colour) and Robot (stripes) stimuli during explicit (E) and implicit (I) tasks, irrespective of the action depicted. Coloured arrows indicate the position of the maxima used to represent the functional areas (see text for details). Brackets indicate significant effects revealed by ANOVAs and paired t-tests (** p<0.05, * p<0.1).

https://doi.org/10.1371/journal.pone.0011577.g004

In the left BA 45, there was a significant effect of Rating (p = 0.05), and a trend towards an interaction between Rating and Agent (p = 0.06), with no main effect of Agent (p = 0.8). As with BA 44, a similar profile in the right-hemisphere BA 45 did not reach significance (all p>0.1). The only significant t-test showed that signal change for robot stimuli was significantly increased during rating of the emotional content of the stimulus compared to its motion (left p = 0.01; note that on the right p = 0.1). The same contrast did not reach significance for human stimuli.

Action-specific brain responses.

Brain response to human stimuli was investigated for the four gestures independently at the second level to isolate brain areas responding to individual gestures (Table S1). Areas responding specifically to each of the four types of facial action against the global mean are provided in Table 2 and illustrated in Figure 5. The left inferior frontal gyrus activity associated with the perception of Speech gestures was localized in the Pars Triangularis and attributed to Brodmann area 44 [51]. Its location falls into a subdivision of Broca's region putatively involved in syntactic aspects of speech execution and perception (reviewed in [52]). A similar region was reported for the auditory perception of language (coordinates −46, 12, 24 compared to −40, 14, 28 in [53]). In the present experiment, this area responded to the perception of human speech gestures and was not found for the other types of action, supporting the specificity of its response to language-related actions. Signal change for Speech stimuli extracted in a 5-mm sphere centred at −46, 12, 24 was submitted to a 2 (Agent) by 2 (Rating) ANOVA. There was a significant effect of Agent (p = 0.05) corresponding to an increased signal to human compared to robot stimuli. There was a trend (p = 0.09) towards an increased response when rating emotion compared to movement.

Figure 5. Action-specific responses.

Top: Cut-outs showing clusters responding to each type of action (FDR-corrected p<0.05, extent k>20) overlaid on a standard brain. Bottom: Bar graphs on the left give percent signal change (error bar: SEM) in response to the presentation of Human (plain colour) and Robot (stripes) stimuli for the corresponding action, irrespective of the task. Arrows indicate the position of the maxima used to represent the functional areas (see Table 2 and text for details). Brackets indicate significant effects revealed by ANOVAs and paired t-tests (** p<0.05, * p<0.1).

https://doi.org/10.1371/journal.pone.0011577.g005

Table 2. Main effect of the human stimuli for one type of action only (p<0.05 FDR-corrected, extent k>20), used in subsequent investigations.

https://doi.org/10.1371/journal.pone.0011577.t002

The left anterior insula, a mirror region for this emotion (−30, 22, 4 compared to −34, 28, 6 in [28]) was associated with the perception of Disgust gestures. In the ROI associated with this activity, only the main effect of agent showed a trend (p = 0.1), corresponding to an increased response to human expressions of disgust compared to robot's expression of the same emotion (paired t-test p = 0.1). There was no significant effect of the object of Rating, or correlation between emotional rating and activity in this ROI.

The right orbitofrontal cortex was associated with the perception of the human expression of Anger. A repeated-measures ANOVA indicated a significant main effect of Agent (p = 0.01) on the signal extracted in this region, corresponding to an increased response to human compared to robot stimuli. In addition, a one-sample t-test revealed that the response to the robot's expression of anger in this region was not significantly different from the global mean (p = 0.3).

Finally, the right putamen, part of the ventral striatum associated with the perception of human gestures of Joy, was the only non-cortical region reported in this section. There was no significant main effect of Agent or Rating on the signal extracted in the putamen, but a trend (p = 0.1) towards an increase of response to human compared to robot stimuli. There was a significant correlation between extracted percent signal change during perception of human stimuli of joy and the emotional rating (R2 = 0.461, p = 0.04), but not for robot stimuli of joy (R2 = 0.174, p = 0.16). No other correlations between action-specific brain regions and emotional ratings were significant for the human or the robot stimuli.

Discussion

In the current fMRI study, participants observed short videos depicting emotional (Anger, Joy and Disgust) or emotionally neutral (Speech) facial gestures expressed by real humans or by the robotic humanoid platform WE-4RII, designed to resemble a human face. WE-4RII can reproduce a subset of the facial Action Units [35], by movements of its eyebrows, eyes, eyelids, lips, mouth, neck, shoulder and upper torso, so as to express in a recognizable manner the four gestures used in this experiment [12] while at the same time being perceived as an artificial, i.e. non-human and non-intentional, embodied agent.

Analysis of the ratings of the emotional content by the participants of the current experiment (see Figure 2) indicated that emotional gestures were perceived as more emotional (and the emotionally neutral speech gestures, less emotional) when expressed by the humans than by the robot. The use of stimuli derived from this robotic platform in an fMRI experiment provided a unique opportunity to test whether the reduction of perceived emotionality of the artificial agent is associated with reduced activity in brain areas involved in the feeling or the perception of the same emotions depicted by human agents. Note that because the robot is clearly mechanical compared to human actors, it is not possible to dissociate, in the present experiment, differences in activity related to the appearance and to the artificial nature of the robot. In addition, stimuli were grouped into fMRI blocks during which participants were asked to rate either the emotional content or the movement depicted, as a proxy to orient their attention either towards the intention underlying the gestures (the emotion) or toward a purely visual feature of the stimuli (the amount of movement) so that processing of the mental state causing the action (the emotion being displayed in Joy, Anger and Disgust, the will to communicate in Speech) is implicit [54]. This manipulation was chosen to disentangle bottom-up processes, influenced by the nature of the stimuli, and top-down processes, influenced by the instruction to attend the emotion or the motion of the stimulus [54].

fMRI analysis consisted of, first, isolating regions of interest on the basis of their response to human stimuli, and second, assessing the modulation of their activity by the agent depicting the gestures and by the object of attention. Discussion of the data focuses on regions of the visual association areas in the occipital and temporal cortices involved in the perception of faces and objects; regions found to be specifically associated with the perception of the different types of basic emotions, insula for disgust, putamen for joy and orbitofrontal cortex for anger, and silent speech in the left inferior frontal cortex; and the inferior frontal cortices, which were predicted on the basis of their contribution to motor resonance.

Visual cortices

Responses to human stimuli are reported in visual areas V3, V4 [44] and V5 [50], and in temporal areas responding to the perception of faces (fusiform face area, FFA [48]; lateral face area, LFA [46]) and actions (superior temporal gyrus [49]). Activation in these occipital and posterior temporal cortices when perceiving human gestures was predicted on the basis of their essential role in the visual perception of biological motion and body parts.

In terms of the effect of robotic stimuli on activity in occipital and posterior temporal visual cortices, the main finding was that all regions, with the notable exception of the superior temporal gyrus cluster, showed an increased response to robot compared with human stimuli. This increase appears at odds with the proposed human-face specificity [55] of the bilateral FFA and right LFA. However, bilateral fusiform gyrus activity has already been reported in response to animal faces depicting actions [56], and another fMRI study found similar responses in the same fusiform region when perceiving human faces and animals with or without faces [57], suggesting that the perception of animals relies on the same substrates as the perception of human faces.

Explaining this increased response to the robot's face entails discussing the mechanisms underlying the domain-specificity of perception in the FFA. Face perception is holistic [58], and the deficits of prosopagnosic patients support the view that the FFA is crucial for this holistic perception [59]. According to Pinker [60], a perceptual process must be characterized by the type of geometry it pays attention to, and the geometry the human face recognition system is sensitive to can be demonstrated in newborns [61]. Pinker argues that any object that shares these geometric features, as the robotic face used here does, will be automatically processed by the “face module”. This automatic processing might explain why the bilateral FFA and the right LFA, normally activated by human gestures, also responded to robot stimuli.

It has been proposed that in the FFA, features of the presented face are compared to an average “face template” [55], [62]. Because the robot face was clearly distinguishable from a human face, this comparison could lead to a reduction of signal, as was the case for the perception of animals [57] or of cartoon faces [63]. Alternatively, this comparison could require additional processing of the visual input in order to recognize the robot as a face. This interpretation is supported by the significant increase of response in extrastriate areas V3, V4 and V5, implicated in the processing of low-level aspects of visual stimuli such as form, colour and motion. Furthermore, a similar increase of response has been reported in the visual word form area of the ventral occipital cortex when the visual appearance of a written word is degraded [64]. Altogether, the increased response to robot compared to human gestures in visual areas implicated in the perception of faces and actions is likely to reflect additional processing of the unfamiliar stimulus.

There was no significant difference in responses to robot and human stimuli in the right superior temporal gyrus. The posterior temporal cortex responds to a large range of stimuli; it is particularly responsive to visual depictions of actions across a variety of presentations (full-body or body-part actions [65], point-light displays [66], as well as animal actions [56] and scripted movements of geometrical shapes [67]). The finding of a similar response to robot and human stimuli in this region argues in favour of a fully integrated representation of gestures, as both types of stimuli are similar in most respects except the appearance of the agent depicting the gesture.

Regions responding to only one type of human gesture

Aside from the occipital and temporal regions involved in processing all gestures, some brain areas responded only to one of the human gestures used in this experiment. We are particularly interested in regions known to be involved in the processing (either in execution or in perception) of the specific gesture they were found to be associated with, namely the insula for disgust and Broca's region for speech.

Activity in the left insula was predicted on the basis of its participation in emotional resonance during the perception of disgust gestures [28]. The short insular gyrus cluster associated with the perception of disgust gestures (−30, 22, 4) was in the vicinity of a left anterior insula cluster in which overlap between observation and feeling of disgust has been reported [28]. This region was activated in response to the humanoid robot's expression of disgust in comparison to baseline, and the trend showing a reduction of its response in comparison to human stimuli did not reach significance (p = 0.1). This finding demonstrates emotional resonance towards an anthropomorphic robot in the case of disgust gestures.

Perception of human joy was associated with activity in the right putamen, a brain area repeatedly associated with the induction of happy mood (see meta-analysis in [68]). This can be attributed to its role in reward processing [69], following the suggestion that dopaminergic signalling in these regions is important to elicit an internal rewarding response [70]. Such an interpretation supports its involvement in the emotional resonance for Joy. As was the case for the insular cluster associated with Disgust, results indicated a trend towards a decreased response to robot compared to human stimuli. In addition, the correlation between emotional ratings and brain activity, significant for human stimuli, was not significant in the case of robot stimuli. Altogether, our data support a reduced emotional resonance towards robotic expressions of Joy in this striatal structure, extending the results from Disgust to a non-cortical area.

The involvement of the orbitofrontal cortex in emotions has been demonstrated by lesion studies in humans [71]. The right orbitofrontal region found here has already been shown to respond to angry faces [72]. Activity in the OFC was significantly larger for human than for robot angry gestures, and the response to robot stimuli was not significantly different from baseline, suggesting that the response of this region was limited to human stimuli. An explanation based on the large difference in the perceived emotion of the two agents depicting anger (see Figure 2) can be excluded given the absence of a significant correlation between orbitofrontal activity and emotional ratings for either agent. An alternative explanation, according to which the orbitofrontal cortex is involved in top-down aspects of emotional evaluation [73], is contradicted by the absence of an effect of the attentional manipulation through the rating instructions. The absence of a significant response to robot stimuli might instead result from the role of the orbitofrontal cortex in social cognition. Orbitofrontal lesions have been associated with disinhibited social behaviours, putatively through a lack of anticipation of their negative outcomes [74]. We suggest that, because of its clearly artificial nature, the robot did not elicit a desire for social contact [75] sufficient to be reflected in orbitofrontal activity. Further investigations including socially rewarding interactions with artificial agents, for example interactions with androids [11], will be necessary to confirm this interpretation.

A cluster associated with the perception of human speech only was attributed to Brodmann area 44 [51], a part of Broca's region associated with speech. This activation was similar to clusters reported for the auditory [53], visual [76] and visuo-auditory [77] processing of speech. More generally, Broca's region's involvement in language production and comprehension [52] supports a role of motor resonance in the domain of speech perception, a role hypothesized prior to the discovery of mirror neurons as the “motor theory of speech perception” [78]. Activity in this region was reduced when speech was impersonated by the humanoid robot compared with human agents, but remained significantly activated compared to baseline, suggesting that robot stimuli elicited reduced motor resonance compared to human stimuli. In contrast to the inferior frontal activities described in the next section, the absence of a significant interaction between Agent and Rating suggests that this reduced activity was caused by the unrealistic appearance of the humanoid robot.

Inferior frontal cortices

The inferior frontal gyri and ventral premotor cortices were scrutinized because of their involvement in motor resonance, important for the perception of actions, and by extension of emotions, expressed by facial [79] and body [80] gestures. Five clusters were isolated, in the left lateral premotor cortex (BA 6), and bilaterally in the posterior (BA 44) and anterior (BA 45) pars triangularis of the inferior frontal gyrus. This region of the cortex, which has been implicated in the perception of human actions [56] and in imitation [22], [81], is likely homologous to frontal regions responding to action observation in macaque monkeys [24], [82].

The agent displaying the emotion had no effect on activity in these regions of interest, in keeping with the responses to the observation of both human and robot hand actions that have been reported in this region [32], [83]. In both of these previous studies and in the present experiment, mechanical robot effectors, respectively a “hand” and a “face”, were clearly associated with a bilateral increase of activity in the inferior frontal cortex, with no significant difference in activity between the robotic and human agents. This supports the view that motor resonance is recruited irrespective of the agent executing the action. Even point-light displays of human body motion evoke motor resonance within Broca's region [84]. Mere resemblance of the body shape is thus sufficient to elicit motor resonance: while mirror neurons in monkeys have been reported anecdotally to respond to conspecifics' actions, most of their recordings have been made when monkeys observed human actions; while there is a generic correspondence between the body shapes and degrees of freedom of the two species, the match is not perfect, implying that mirror neurons can generalize across species. Human neuroimaging experiments presenting human, monkey and dog facial movements suggest that even for the least anthropomorphic agent, the dog, motor resonance can be observed provided the action is part of the observer's motor repertoire [biting in contrast to barking; 56]. Recent results using robots, including the present data, support the conclusion that motor resonance generalizes to anthropomorphic artefacts [32], [83].

This conclusion is consistent with behavioural experiments investigating motor resonance, which demonstrated that the observation of humanoid, but not industrial non-anthropomorphic, robotic gestures [85] causes a motor interference effect [86]. In another line of research using hand action imitation, both real and robotic hands had an action priming effect [87].

In both BA 44 and BA 45 of the left hemisphere, an interaction between the effects of Agent and Rating was identified, with a main effect of Rating in BA 45 corresponding to an increased response when attention was explicitly directed towards the emotion. The BA 44 response to the robot was not influenced by the object of attention, while the response to human stimuli increased when attention was directed towards the gesture's movement compared to its emotion. In contrast, the response of the anterior BA 45 to human stimuli was not influenced by the direction of attention, but its response to robot stimuli was increased when participants were required to rate the emotion of the stimuli, compared to their movements.

Altogether, these results suggest a modulatory influence of the task on the activity of both left inferior frontal areas. One interpretation of our results rests on the preference for the representation of actions' intentions in BA 45 [24], similar to the response to abstract actions in the more rostral region of the macaque monkey's arcuate sulcus [82]. The main effect of Rating in the current experiment corroborated BA 45's preference for the representation of the intentions underlying the depicted gestures when attention is explicitly directed towards emotion. The pattern of activity in BA 45 could thus be explained by the interaction between bottom-up and top-down processes. Bottom-up processes of intention understanding could be automatic for human stimuli, and therefore not sensitive to modulation by attention. In contrast, because the system has no prior representation of robots' actions, robot stimuli would not be processed automatically. The response to robot stimuli would instead be modulated by the object of attention: stimuli would be processed as intentional actions when the task required assessing the emotion, but as artefact movements when the task did not require processing the emotion. The interaction between Rating and Agent in BA 45 could thus derive from an interaction between bottom-up processes, influenced by the nature of the agent, and top-down processes, depending on the object of attention.

Conclusion

Using fMRI, we investigated whether regions responding to basic human facial emotions and silent speech were also activated when a humanoid robot impersonated the same gestures. While robot stimuli elicited larger responses in occipital and posterior temporal areas, a reverse pattern was observed in regions responding specifically to one type of human gesture only, namely the left inferior frontal cortex for motor resonance in speech perception and the insula for emotional resonance in disgust. We suggest that the clearly artificial appearance of the humanoid robot used in this experiment, WE-4RII, together with its limited number of degrees of freedom in comparison to a real human, precluded high levels of resonance towards this agent's gestures. While none of the participants had previous experience with an emotional robot, it is possible that experience leading to the establishment of real relationships with a robot could create a sense of social bonding. Further work should investigate the relation between familiarity with robots and the activity of neural markers of motor and emotional resonance. This first study paves the way for further exploration of the perception of robotic actions.

Supporting Information

Table S1.

Main effect of the human stimuli presentation (p<0.05 FDR-corrected, extent k>20; clusters are ordered by cortical lobe, then by decreasing z coordinate), provided across the four types of actions and for each action independently. When available, functional localization is based on the anatomy toolbox (Eickhoff et al., 2005), with percentages indicating the probability of the maximum belonging to the designated area. Underlining highlights regions described in Table 2.

https://doi.org/10.1371/journal.pone.0011577.s001

(0.13 MB DOC)

Video S1.

Experimental paradigm for participants in the fMRI experiment (details in main text).

https://doi.org/10.1371/journal.pone.0011577.s002

(0.43 MB MP4)

Acknowledgments

MZ and AT would like to express their gratitude to Okino Industries LTD, STMicroelectronics, Japan ROBOTECH LTD, SolidWorks Corp and the Advanced Research Institute for Science and Engineering of Waseda University. MZ and AT would also like to thank Dr. K. Itoh, Mr. M. Saito, and Mr. Y. Mizoguchi for their help in the preparation of the emotional patterns of the robot, and Prof. H. Kimura for his support to the research.

Author Contributions

Conceived and designed the experiments: TC SJB AT CDF SM GR VG MAU. Performed the experiments: TC. Analyzed the data: TC SJB CDF. Contributed reagents/materials/analysis tools: MZ AT VG MAU. Wrote the paper: TC SJB CDF VG MAU. Initiated the project: AT SM PD GR VG MAU.

References

  1. 1. OECD (2004) Population statistics. Organisation for Economic Co-operation and Development. Available: http://www.oecd.org.
  2. 2. Goldstein RZ, Volkow ND, Wang GJ, Fowler JS, Rajaram S (2001) Addiction changes orbitofrontal gyrus function: involvement in response inhibition. Neuroreport 12: 2595–2599.
  3. 3. Kozima H, Nakagawa C, Yasuda Y (2007) Children-robot interaction: a pilot study in autism therapy. Prog Brain Res 164: 385–400.
  4. 4. Gates B (2007) A Robot in Every Home. Scientific American 1: 58–65.
  5. 5. DiSalvo C, Gemperle F, Forlizzi J, Kiesler S (2002) All robots are not created equal: the design and perception of humanoid robot heads. Proceedings of the conference on Designing interactive systems: processes, practices, methods, and techniques. Available: http://doi.acm.org/10.1145/778712.778756.
  6. 6. Wada K, Shibata T, Saito T, Sakamoto K, Tanie K (2005) pp. 2785–2790. Psychological and Social Effects of One Year Robot Assisted Activity on Elderly People at a Health Service Facility for the Aged.
  7. 7. Fujita M (2004) On activating human communications with pet-type robot AIBO. Proceedings of the IEEE 92: 1804–1813.
  8. 8. Hirai K, Hirose M, Haikawa Y, Takenaka T (1998) The development of Honda humanoid robot. Leuven, Belgium: pp. 1321–1326.
  9. 9. Kaneko K, Kanehiro F, Kajita S, Hirukawa H, Kawasaki T, et al. (2004) pp. 1083–1090. Humanoid robot HRP-2; 2004.
  10. 10. Kokoro Company Ltd (2004) Actroid. Tokyo, Japan. Available: http://www.kokoro-dreams.co.jp/english/index.html.
  11. 11. Ishiguro H, Nishio S (2007) Building artificial humans for understanding humans. Journal of Artificial Organs 10: 133–142.
  12. 12. Itoh K, Miwa H, Matsumoto M, Zecca M, Takanobu H, et al. (2004) pp. 35–36. Various emotional expressions with emotion expression humanoid robot WE-4RII.
  13. 13. Arbib MA, Fellous JM (2004) Emotions: from brain to robot. Trends Cogn Sci 8: 554–561.
  14. 14. Chaminade T (2006) Acquiring and probing self-other equivalencies - using artificial agents to study social cognition. Reading, UK.
  15. 15. Gallese V, Rochat M, Cossu G, Sinigaglia C (2009) Motor cognition and its role in the phylogeny and ontogeny of action understanding. Dev Psychol 45: 103–113.
  16. 16. Gallese V (2003) The manifold nature of interpersonal relations: the quest for a common mechanism. Philos Trans R Soc Lond B Biol Sci 358: 517–528.
  17. 17. James w (1890) Principles of Psychology. New York: Holt.
  18. 18. Gallese V, Fadiga L, Fogassi L, Rizzolatti G (1996) Action recognition in the premotor cortex. Brain 119(Pt 2): 593–609.
  19. 19. Rizzolatti G, Fadiga L, Gallese V, Fogassi L (1996) Premotor cortex and the recognition of motor actions. Cognitive Brain Research 3: 131–141.
  20. 20. Kohler E, Keysers C, Umiltà MA, Fogassi L, Gallese V, et al. (2002) Hearing sounds, understanding actions: action representation in mirror neurons. Science 297: 846–848.
  21. 21. Buccino G, Binkofski F, Fink GR, Fadiga L, Fogassi L, et al. (2001) Action observation activates premotor and parietal areas in a somatotopic manner: an fMRI study. European Journal of Neuroscience 13: 400–404.
  22. 22. Iacoboni M, Woods RP, Brass M, Bekkering H, Mazziotta JC, et al. (1999) Cortical mechanisms of human imitation. Science 286: 2526–2528.
  23. 23. Chaminade T, Decety J (2001) A common framework for perception and action: neuroimaging evidence. Behav Brain Sci 24: 879–882.
  24. 24. Rizzolatti G, Craighero L (2004) The mirror-neuron system. Annu Rev Neurosci 27: 169–192.
  25. 25. Blakemore SJ, Bristow D, Bird G, Frith C, Ward J (2005) Somatosensory activations during the observation of touch and a case of vision-touch synaesthesia. Brain 128: 1571–1583.
  26. 26. Keysers C, Wicker B, Gazzola V, Anton JL, Fogassi L, et al. (2004) A touching sight: SII/PV activation during the observation and experience of touch. Neuron 42: 335–346.
  27. 27. Ebisch SJ, Perrucci MG, Ferretti A, Del Gratta C, Romani GL, et al. (2008) The sense of touch: embodied simulation in a visuotactile mirroring mechanism for observed animate or inanimate touch. J Cogn Neurosci 20: 1611–1623.
  28. 28. Wicker B, Keysers C, Plailly J, Royet JP, Gallese V, et al. (2003) Both of us disgusted in My insula: the common neural basis of seeing and feeling disgust. Neuron 40: 655–664.
  29. 29. Rizzolatti G, Fogassi L, Gallese V (2006) Mirrors of the mind. Sci Am 295: 54–61.
  30. 30. Carr L, Iacoboni M, Dubeau M-C, Mazziotta JC, Lenzi GL (2003) Neural mechanisms of empathy in humans: A relay from neural systems for imitation to limbic areas. Proceedings of the National Academy of Sciences of the United States of America 100: 5497–5502.
  31. 31. Tai YF, Scherfler C, Brooks DJ, Sawamoto N, Castiello U (2004) The human premotor cortex is ‘mirror’ only for biological actions. Curr Biol 14: 117–120.
  32. 32. Gazzola V, Rizzolatti G, Wicker B, Keysers C (2007) The anthropomorphic brain: the mirror neuron system responds to human and robotic actions. Neuroimage 35: 1674–1684.
  33. 33. Chaminade T, Hodgins J, Kawato M (2007) Anthropomorphism influences perception of computer-animated characters' actions. Soc Cogn Affect Neurosci 2: 206–216.
  34. 34. Itoh K, Miwa H, Matsumoto M, Zecca M, Takanobu H, et al. (2005) pp. 220–225. Behavior model of humanoid robots based on operant conditioning.
  35. 35. Ekman P, Friesen W (1978) Facial Action Coding System: A Technique for the Measurement of Facial Movement. Palo Alto, CA, USA: Consulting Psychologists Press, Inc.
  36. 36. Weiskopf N, Hutton C, Josephs O, Deichmann R (2006) Optimal EPI parameters for reduction of susceptibility-induced BOLD sensitivity losses: a whole-brain analysis at 3 T and 1.5 T. Neuroimage 33: 493–504.
  37. 37. Hutton C, Bork A, Josephs O, Deichmann R, Ashburner J, et al. (2002) Image distortion correction in fMRI: A quantitative evaluation. Neuroimage 16: 217–240.
  38. 38. Andersson JL, Hutton C, Ashburner J, Turner R, Friston K (2001) Modeling geometric deformations in EPI time series. Neuroimage 13: 903–919.
  39. 39. Friston KJ, Ashburner JT, Kiebel S, Nichols TE, Penny WD, editors. (2007) Statistical Parametric Mapping: The Analysis of Functional Brain Images. London, UK: Elsevier.
  40. 40. Friston KJ, Holmes AP, Worsley KJ, Poline JP, Frith CD, et al. (1994) Statistical parametric maps in functional imaging: a general linear approach. Hum Brain Mapp 2: 189–210.
  41. 41. Duvernoy HM (1999) The human brain: surface, blood supply, and three-dimensional anatomy, 2nd edn, completely revised. Wien New York: Springer-Verlag.
  42. 42. Eickhoff SB, Stephan KE, Mohlberg H, Grefkes C, Fink GR, et al. (2005) A new SPM toolbox for combining probabilistic cytoarchitectonic maps and functional imaging data. Neuroimage 25: 1325–1335.
  43. 43. Zecca M, Chaminade T, Umiltà MA, Itoh K, Saito M, et al. (2007) Emotional Expression Humanoid Robot WE-4RII - Evaluation of the perception of facial emotional expressions by using fMRI. pp. 2A1–O10. May 10/12, 2007; Akita, Japan.
  44. Caspers S, Eickhoff SB, Geyer S, Scheperjans F, Mohlberg H, et al. (2008) The human inferior parietal lobule in stereotaxic space. Brain Struct Funct 212: 481–495.
  45. Malikovic A, Amunts K, Schleicher A, Mohlberg H, Eickhoff SB, et al. (2007) Cytoarchitectonic analysis of the human extrastriate cortex in the region of V5/MT+: a probabilistic, stereotaxic map of area hOc5. Cereb Cortex 17: 562–574.
  46. Puce A, Allison T, Asgari M, Gore JC, McCarthy G (1996) Differential sensitivity of human visual cortex to faces, letterstrings, and textures: a functional magnetic resonance imaging study. J Neurosci 16: 5205–5215.
  47. Steeves JK, Culham JC, Duchaine BC, Pratesi CC, Valyear KF, et al. (2006) The fusiform face area is not sufficient for face recognition: evidence from a patient with dense prosopagnosia and no occipital face area. Neuropsychologia 44: 594–609.
  48. Kanwisher N, McDermott J, Chun MM (1997) The Fusiform Face Area: A Module in Human Extrastriate Cortex Specialized for Face Perception. J Neurosci 17: 4302–4311.
  49. Grossman ED, Blake R (2002) Brain Areas Active during Visual Perception of Biological Motion. Neuron 35: 1167–1175.
  50. Barnikol UB, Amunts K, Dammers J, Mohlberg H, Fieseler T, et al. (2006) Pattern reversal visual evoked responses of V1/V2 and V5/MT as revealed by MEG combined with probabilistic cytoarchitectonic maps. Neuroimage 31: 86–108.
  51. Amunts K, Schleicher A, Burgel U, Mohlberg H, Uylings HB, et al. (1999) Broca's region revisited: cytoarchitecture and intersubject variability. J Comp Neurol 412: 319–341.
  52. Hagoort P (2005) On Broca, brain, and binding: a new framework. Trends Cogn Sci 9: 416–423.
  53. Higuchi S, Chaminade T, Imamizu H, Kawato M (2009) Shared neural correlates for language and tool use in Broca's area. Neuroreport.
  54. Frith CD, Frith U (2008) Implicit and explicit processes in social cognition. Neuron 60: 503–510.
  55. Kanwisher N (2000) Domain specificity in face perception. Nat Neurosci 3: 759–763.
  56. Buccino G, Lui F, Canessa N, Patteri I, Lagravinese G, et al. (2004) Neural circuits involved in the recognition of actions performed by nonconspecifics: an FMRI study. J Cogn Neurosci 16: 114–126.
  57. Chao LL, Martin A, Haxby JV (1999) Are face-responsive regions selective only for faces? Neuroreport 10: 2945–2950.
  58. Farah MJ, Wilson KD, Drain M, Tanaka JN (1998) What is “special” about face perception? Psychol Rev 105: 482–498.
  59. Duchaine B, Yovel G (2008) Face Recognition. In: Basbaum AI, Kaneko A, Shepherd GM, Westheimer G, editors. The Senses: A Comprehensive Reference. San Diego: Academic Press. pp. 329–358.
  60. Pinker S (1997) How the mind works. New York: Norton.
  61. Morton J, Johnson MH (1991) CONSPEC and CONLERN: a two-process theory of infant face recognition. Psychol Rev 98: 164–181.
  62. Rhodes G, McLean IG (1990) Distinctiveness and expertise effects with homogeneous stimuli: towards a model of configural coding. Perception 19: 773–794.
  63. Jovicich J, Peters R, Koch K, Chang L, Ernst T (2000) Human perception of faces and face cartoons: an fMRI study. Denver, CO.
  64. Cohen L, Dehaene S, Vinckier F, Jobert A, Montavont A (2008) Reading normal and degraded words: contribution of the dorsal and ventral visual pathways. Neuroimage 40: 353–366.
  65. Allison T, Puce A, McCarthy G (2000) Social perception from visual cues: Role of the STS region. Trends Cogn Sci 4: 267–278.
  66. Grossman E, Donnelly M, Price R, Pickens D, Morgan V, et al. (2000) Brain areas involved in perception of biological motion. J Cogn Neurosci 12: 711–720.
  67. Castelli F, Happe F, Frith U, Frith C (2000) Movement and mind: a functional imaging study of perception and interpretation of complex intentional movement patterns. Neuroimage 12: 314–325.
  68. Phan KL, Wager T, Taylor SF, Liberzon I (2002) Functional Neuroanatomy of Emotion: A Meta-Analysis of Emotion Activation Studies in PET and fMRI. Neuroimage 16: 331–348.
  69. Phillips ML, Drevets WC, Rauch SL, Lane R (2003) Neurobiology of emotion perception I: The neural basis of normal emotion perception. Biol Psychiatry 54: 504–514.
  70. Drevets WC, Gautier C, Price JC, Kupfer DJ, Kinahan PE, et al. (2001) Amphetamine-induced dopamine release in human ventral striatum correlates with euphoria. Biol Psychiatry 49: 81–96.
  71. Blair RJ, Cipolotti L (2000) Impaired social response reversal. A case of ‘acquired sociopathy’. Brain 123(Pt 6): 1122–1141.
  72. Blair RJ, Morris JS, Frith CD, Perrett DI, Dolan RJ (1999) Dissociable neural responses to facial expressions of sadness and anger. Brain 122(Pt 5): 883–893.
  73. Wright P, Albarracin D, Brown RD, Li H, He G, et al. (2008) Dissociated responses in the amygdala and orbitofrontal cortex to bottom-up and top-down components of emotional evaluation. Neuroimage 39: 894–902.
  74. Anderson SW, Bechara A, Damasio H, Tranel D, Damasio AR (1999) Impairment of social and moral behavior related to early damage in human prefrontal cortex. Nat Neurosci 2: 1032–1037.
  75. Epley N, Waytz A, Cacioppo JT (2007) On seeing human: a three-factor theory of anthropomorphism. Psychol Rev 114: 864–886.
  76. Santi A, Servos P, Vatikiotis-Bateson E, Kuratate T, Munhall K (2003) Perceiving biological motion: dissociating visible speech from walking. J Cogn Neurosci 15: 800–809.
  77. Skipper JI, Nusbaum HC, Small SL (2005) Listening to talking faces: motor cortical activation during speech perception. Neuroimage 25: 76–89.
  78. Liberman AM, Mattingly IG (1985) The motor theory of speech perception revised. Cognition 21: 1–36.
  79. Schulte-Ruther M, Markowitsch HJ, Fink GR, Piefke M (2007) Mirror Neuron and Theory of Mind Mechanisms Involved in Face-to-Face Interactions: A Functional Magnetic Resonance Imaging Approach to Empathy. J Cogn Neurosci 19: 1354–1372.
  80. de Gelder B, Snyder J, Greve D, Gerard G, Hadjikhani N (2004) Fear fosters flight: a mechanism for fear contagion when perceiving emotion expressed by a whole body. Proc Natl Acad Sci U S A 101: 16701–16706.
  81. Koski L, Wohlschlager A, Bekkering H, Woods RP, Dubeau MC, et al. (2002) Modulation of Motor and Premotor Activity during Imitation of Target-directed Actions. Cereb Cortex 12: 847–855.
  82. Nelissen K, Luppino G, Vanduffel W, Rizzolatti G, Orban GA (2005) Observing Others: Multiple Action Representation in the Frontal Lobe. Science 310: 332–336.
  83. Peeters R, Simone L, Nelissen K, Fabbri-Destro M, Vanduffel W, et al. (2009) The Representation of Tool Use in Humans and Monkeys: Common and Uniquely Human Features. J Neurosci 29: 11523–11539.
  84. Saygin AP, Wilson SM, Hagler DJ Jr, Bates E, Sereno MI (2004) Point-light biological motion perception activates human premotor cortex. J Neurosci 24: 6181–6188.
  85. Kilner JM, Paulignan Y, Blakemore SJ (2003) An interference effect of observed biological movement on action. Curr Biol 13: 522–525.
  86. Oztop E, Franklin D, Chaminade T, Cheng G (2005) Human-humanoid interaction: is a humanoid robot perceived as a human? Int J Humanoid Robot 2: 537–559.
  87. Press C, Bird G, Flach R, Heyes C (2005) Robotic movement elicits automatic imitation. Brain Res Cogn Brain Res 25: 632–640.