
MEG Evidence for Dynamic Amygdala Modulations by Gaze and Facial Emotions

  • Thibaud Dumas ,

    thibaud.dumas@upmc.fr

Affiliations CNRS, UMR 7225, CRICM, Paris, France, Inserm, U 975, Paris, France, Université Pierre et Marie Curie-Paris 6, Centre de Recherche de l'Institut du Cerveau et de la Moelle Epinière (CRICM), UMR_S 975, and Centre MEG-CENIR, Paris, France, CNRS, USR 3246, Centre Emotion, Hôpital Pitié-Salpêtrière, Paris, France

  • Stéphanie Dubal,

    Affiliation CNRS, USR 3246, Centre Emotion, Hôpital Pitié-Salpêtrière, Paris, France

  • Yohan Attal,

Affiliations CNRS, UMR 7225, CRICM, Paris, France, Inserm, U 975, Paris, France, Université Pierre et Marie Curie-Paris 6, Centre de Recherche de l'Institut du Cerveau et de la Moelle Epinière (CRICM), UMR_S 975, and Centre MEG-CENIR, Paris, France

  • Marie Chupin,

Affiliations CNRS, UMR 7225, CRICM, Paris, France, Inserm, U 975, Paris, France, Université Pierre et Marie Curie-Paris 6, Centre de Recherche de l'Institut du Cerveau et de la Moelle Epinière (CRICM), UMR_S 975, and Centre MEG-CENIR, Paris, France

  • Roland Jouvent,

    Affiliation CNRS, USR 3246, Centre Emotion, Hôpital Pitié-Salpêtrière, Paris, France

  • Shasha Morel,

    Affiliation CNRS, UMR 7295, CeRCA, Université François-Rabelais, Tours, France

  • Nathalie George

Affiliations CNRS, UMR 7225, CRICM, Paris, France, Inserm, U 975, Paris, France, Université Pierre et Marie Curie-Paris 6, Centre de Recherche de l'Institut du Cerveau et de la Moelle Epinière (CRICM), UMR_S 975, and Centre MEG-CENIR, Paris, France

Correction

1 Oct 2013: Dumas T, Dubal S, Attal Y, Chupin M, Jouvent R, et al. (2013) Correction: MEG Evidence for Dynamic Amygdala Modulations by Gaze and Facial Emotions. PLOS ONE 8(10): 10.1371/annotation/0613c203-5f8a-4aec-b15d-0324bc5788f8. https://doi.org/10.1371/annotation/0613c203-5f8a-4aec-b15d-0324bc5788f8

Abstract

Background

The amygdala is a key brain region for face perception. While the role of the amygdala in the perception of facial emotion and gaze has been extensively highlighted with fMRI, little is known about how amygdala responses to emotional versus neutral faces with different gaze directions unfold over time.

Methodology/Principal Findings

Here we addressed this question in healthy subjects using MEG combined with an original source imaging method based on individual amygdala volume segmentation and the localization of sources within the amygdala volume. We found an early peak of amygdala activity that was enhanced for fearful relative to neutral faces between 130 and 170 ms. The effect of emotion was again significant in a later time range (310–350 ms). Moreover, the amygdala response was greater for direct relative to averted gaze between 190 and 350 ms, and this effect was specific to fearful faces in the right amygdala.

Conclusion

Altogether, our results show that the amygdala is involved in the processing and integration of emotion and gaze cues from faces in different time ranges, thus underlining its role in multiple stages of face perception.

Introduction

Over the past decades, the amygdala has been highlighted as a key structure in the perception of emotional and social stimuli, such as faces [1], [2]. The involvement of the amygdala in socio-emotional processes was first illustrated by the description of the Klüver-Bucy syndrome in monkeys, characterized by profoundly abnormal emotional and social behavior following bilateral resection of the temporal lobe [3]. The description of patients with bilateral amygdala lesions then led to the hypothesis of a selective involvement of the amygdala in the perception of fear [4], [5]. This view has evolved over the years to support the proposal of the amygdala as a key structure in the appraisal of stimulus relevance [6]–[8]. Accordingly, the human amygdala is particularly sensitive to faces, which are highly relevant stimuli (even when neutral; e.g., [9], [10]), and it has been shown to be sensitive to various facial signals including facial expressions and eye gaze. More precisely, the amygdala was found to be activated by faces conveying positive as well as negative emotions [11], [12]. It was also demonstrated to be responsive to seen gaze direction, with enhanced activation for neutral faces with direct relative to averted gaze in humans [13], [14], and to be involved in the attention orienting induced by gaze [15]. Moreover, the amygdala has been implicated in the integration of gaze and emotional expression cues from faces [16]–[20] (see also [21] for a recent discussion of the combined influence of gaze direction and emotion). However, while amygdala involvement in the processing of emotional expression and gaze cues from faces is well established, little is known about the dynamics of the neuronal responses in the human amygdala.

Much of what is known about the involvement of the amygdala in the perception of faces comes from fMRI and PET studies, which do not allow unraveling the temporal dynamics of neuronal responses (see [22] for review). Direct intracerebral recordings of electroencephalographic (EEG) signals within the amygdala of epileptic patients have provided some insight into the time course of amygdala responses to faces. Krolak-Salmon and coll. [23] found an increase of amygdala activity specific to fearful relative to neutral, disgusted, and happy faces between 200 and 800 ms. This effect was observed only when the task involved explicit processing of the emotional expression. Along the same lines, Meletti and coll. [24] showed amygdala responses selective to the eye region of seen faces between 200 and 400 ms; these responses were increased for fearful relative to neutral and happy expressions in a task where subjects were requested to pay attention to the seen emotion. By contrast, Pourtois and coll. [25] found differentiated amygdala responses to fearful and neutral faces between 140 and 290 ms that were independent of the attention paid to the faces. Furthermore, Sato and coll. [26], [27] revealed an early increase (∼135 ms) of amygdala oscillatory activity in the gamma range (30–60 Hz) for fearful relative to neutral faces, as well as a later (∼200 ms) response to eye gaze in the same frequency range.

The development of electromagnetic brain imaging methods that allow source localization from noninvasive electro- and magneto-encephalographic (EEG/MEG) scalp recordings has offered a new tool for studying the temporal dynamics of brain responses in healthy subjects [28]. In recent years, an increasing number of studies have used these methods to investigate amygdala responses from MEG recordings in the context of emotion processing. For example, Cornwell and coll. [29] showed amygdala responses to emotional face matching vs. shape matching culminating between 136 and 188 ms after face onset. In a series of studies using facial emotion recognition and face-versus-object categorization tasks, Streit and coll. [30], [31] and Liu and coll. [32] reported amygdala responses associated with facial emotion recognition between 100 and 220 ms. Another group reported responses to fearful versus neutral faces presented laterally between 80 and 160 ms [33], [34]. In addition, a few studies reported very early amygdala activation to threat-related faces, with activity starting from about 20 ms [35]–[37].

Although more than ten published studies in peer-reviewed, high-quality journals have brought evidence for the localization of amygdala activity from MEG signals, there is still some debate about the possibility of observing amygdala responses with electromagnetic brain imaging methods. In this respect, it is worth recalling that the cortical grey matter is hypothesized to be the principal origin of the magnetic signal because of its organization in macrocolumns of pyramidal cells and its relatively small distance from the MEG sensors [38]. Nevertheless, the activity of the amygdala may also be detectable owing to its functional and histological properties. The amygdala is a heterogeneous structure composed of distinct nuclei that differ from each other in their connectivity patterns and functional roles as well as in their histological properties. Among these nuclei, the basolateral nucleus, which is the major communication node between the neocortex and the amygdala, contains half of the total neurons in the human amygdaloid complex [39]. In the rat amygdala, ninety-three percent of these neurons have been identified as pyramidal cells – the main sources of the MEG signal from the brain [40]. Although the pyramidal cells in the basolateral nucleus do not show a laminar organization, it is quite possible that the activation of subpopulations of these cells in response to relevant stimuli produces a non-zero net current dipole source. In support of this view, electrophysiological recordings in the amygdala of non-human primates have isolated different groups of selectively responsive cells, including face-specific neurons as well as neurons selectively activated by threat-related stimuli [41]–[45] (see also [46]). Moreover, with an average volume of 44.5 mm3 and a composition of 12.2 million neurons [39], the human amygdala has a mean density of 272 million neurons/cm3, whereas the density of neurons in the neocortex is estimated at 44 million neurons/cm3 [47]. Thus, the amygdala is about six times as dense as the neocortex. Accordingly, we have shown in a simulation study using a growing patch of activation in the amygdala that an activated volume of 0.2–0.3 cm3 was sufficient to generate a magnetic signal above the noise level of MEG sensors [48], [49]. Altogether, this may explain why, although the amygdala qualifies as a deep brain structure, it may still contribute significantly to the magnetic signals recorded at the scalp surface and be detectable with MEG, as demonstrated in the above-mentioned papers (see also [50], [51]).
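
As a side note, the density argument above can be verified with simple arithmetic. The following minimal Python check merely reproduces the figures quoted from [39] and [47]; the variable names are ours, and small rounding differences in the quoted values are expected.

```python
# Back-of-the-envelope check of the neuron-density argument,
# using the figures quoted in the text (from refs [39] and [47]).

amygdala_volume_cm3 = 44.5 / 1000.0   # 44.5 mm^3 expressed in cm^3
amygdala_neurons = 12.2e6             # total neurons in the amygdaloid complex

amygdala_density = amygdala_neurons / amygdala_volume_cm3   # ~274e6/cm^3 (text: 272e6)
neocortex_density = 44e6                                    # neurons/cm^3, from [47]

print(f"amygdala density: {amygdala_density / 1e6:.0f} million neurons/cm^3")
print(f"density ratio   : {amygdala_density / neocortex_density:.1f}x the neocortex")
```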

Here, we aimed to further investigate the time course of amygdala responses to faces using MEG. We combined a distributed minimum norm estimate method for source reconstruction with an anatomical segmentation method developed by Chupin and coll. [52]. This segmentation method allowed us to localize the amygdala in each individual subject and to take into account a volumetric grid of dipoles placed within this structure, in addition to the sources distributed over the cortical surface. We examined amygdala responses to fearful and neutral faces with direct and averted gaze. Adolphs [53] has proposed that the same brain structure may participate in different components of processing at different points in time. We thus investigated whether the amygdala may be sensitive to emotional expression and eye gaze cues in different time intervals. Our hypothesis was that there may be early amygdala responses to the faces, which should differentiate fearful from neutral faces [25], [26], [32], [34]. Following the findings of Sato and coll. [27], the effect of seen gaze direction was expected to emerge later. An interesting question was that of the integration of gaze and expression cues: current anatomo-functional models of face processing postulate relatively late stages of integration of these facial cues (e.g., [54], [55]), but this remains an open empirical question. In addition, we measured the anxiety level of the participants in order to investigate whether anxiety might modulate amygdala responsiveness to emotion and gaze cues, as previously found in fMRI studies [56]–[60] (for reviews, see [61], [62]). A recent fMRI study has shown a correlation between state anxiety and activity in the amygdala and extended amygdala regions related to the integration of gaze and facial expression [63]. Thus, we wanted to test whether amygdala activity in response to fearful and neutral faces with direct and averted gaze depended on the participants' anxiety level, and whether anxiety might modulate amygdala activity from the early stages of stimulus processing and/or over sustained time intervals.

Materials and Methods

Participants

Fifteen healthy volunteers took part in this study (11 female; mean age 26.1±3.3 years). All were right-handed, had normal or corrected-to-normal vision, and reported no history of neurological or psychiatric disorders. They provided written informed consent and were paid for their participation. All procedures were approved by the local ethics committee (“Comité de protection des personnes Ile-de-France VI”, CPP Idf VI).

Participants completed the Spielberger State-Trait Anxiety Inventory (STAI) [64]. Participants’ state anxiety scores ranged from 20 to 38 (mean = 26, SD = 5.20). These scores are similar to published norms for this age group.

Stimuli

Faces of 16 different individuals were selected from the Karolinska Directed Emotional Faces database [65] in their fearful and neutral frontal-view versions. An averted-gaze version of each stimulus was obtained by manually modifying the eye positions of the faces using Adobe Photoshop 7.0.1. This resulted in four experimental conditions in a 2×2 factorial design with emotion (fearful or neutral) and gaze direction (direct or averted) as orthogonal factors. All pictures were converted to grayscale, resized, and cropped to an oval shape. The global luminance and contrast of each stimulus (measured inside the oval shape) were adjusted using Adobe Photoshop 7.0.1 to ensure that there was no significant difference between the experimental conditions (mean grey levels = 60.7±10.8/60.8±10.8/60.3±10.8/60.4±10.7 for neutral faces with direct and averted gaze and fearful faces with direct and averted gaze, respectively; global contrast = 30.8±2.6/30.7±2.6/30.1±2.5/30.1±2.5 for the same conditions). Faces subtended a visual angle of about 6 degrees (vertically) and 3.5 degrees (horizontally).
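
For illustration, a minimal sketch of this kind of luminance and contrast equalization is given below (Python/NumPy). It assumes an elliptical mask and takes the standard deviation of grey levels as the contrast measure; the authors' exact Photoshop procedure is not specified, so the target values and function names here are illustrative assumptions.

```python
import numpy as np

def oval_mask(h, w):
    """Boolean mask of the inscribed ellipse (approximates the oval crop)."""
    y, x = np.ogrid[:h, :w]
    cy, cx = (h - 1) / 2, (w - 1) / 2
    return ((y - cy) / (h / 2)) ** 2 + ((x - cx) / (w / 2)) ** 2 <= 1.0

def measure(img):
    """Mean grey level and contrast (std of grey levels) inside the oval."""
    vals = img[oval_mask(*img.shape)].astype(float)
    return vals.mean(), vals.std()

def equalize(img, target_mean=60.5, target_contrast=30.4):
    """Linearly rescale one grayscale face toward target statistics
    (targets are illustrative, close to the condition means reported)."""
    mean, contrast = measure(img)
    out = img.astype(float)
    m = oval_mask(*img.shape)
    out[m] = (out[m] - mean) / contrast * target_contrast + target_mean
    return np.clip(out, 0, 255)
```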

Procedure

Participants were comfortably seated inside an electromagnetically shielded MEG room, in front of a translucent screen placed at a viewing distance of 82 cm. Stimuli were back-projected onto the screen through a video projector placed outside of the room and two mirrors inside the MEG room. Stimulus presentation was controlled by a computer running OmniStim, an in-house software program, connected to the MEG data acquisition computer through the parallel port.

Each trial started with a central fixation point displayed for 700 to 900 ms. Then a face stimulus was displayed for 500 ms, followed by a blank screen presented for 1 to 2 seconds before the next trial started (Figure 1). There were six blocks of 64 trials (total = 384 trials). In each block, the 16 different faces were each seen once under each of the 4 experimental conditions of gaze (direct/averted) and emotion (fearful/neutral). Eight to twelve target stimuli, consisting of a centrally presented blue dot, were added to each block for the purpose of the task. The subjects were instructed to press a button whenever they detected these target stimuli. All subjects performed the task at ceiling (mean number of targets detected across subjects = 57±1 out of 60 targets).
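
For illustration, the trial structure described above could be generated as in the following sketch. The jitter bounds, block counts, and target counts are taken from the text; the names and the randomization scheme are our assumptions, not the authors' presentation code.

```python
import random

FACES = range(16)                       # 16 face identities
CONDITIONS = [(emo, gaze) for emo in ("fearful", "neutral")
              for gaze in ("direct", "averted")]

def make_block(rng):
    """One block: each of the 16 faces once per condition (64 face trials),
    plus 8-12 blue-dot target trials, in random order."""
    trials = [("face", face, emo, gaze)
              for face in FACES for emo, gaze in CONDITIONS]
    trials += [("target",)] * rng.randint(8, 12)
    rng.shuffle(trials)
    return trials

def trial_timing(rng):
    """Fixation 700-900 ms, face 500 ms, blank ISI 1000-2000 ms (Figure 1)."""
    return rng.uniform(700, 900), 500, rng.uniform(1000, 2000)

rng = random.Random(0)
blocks = [make_block(rng) for _ in range(6)]   # 6 blocks = 384 face trials
```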

Figure 1. Experimental conditions and trial timeline.

On the left: Example stimuli for the four categories of faces used (fearful and neutral faces with direct and averted gaze). On the right: Illustration of the trial timeline. Each trial started with a fixation point (700–900 ms) followed by the presentation of a face (500 ms), and a blank screen (ISI: 1000–2000 ms) before the next trial started. The participant’s task was to press a button on the occurrence of occasional blue circle targets.

https://doi.org/10.1371/journal.pone.0074145.g001

MEG Recordings

The study took place at the MEG Center of the Centre de Neuro-Imagerie de Recherche (CENIR, CRICM – UPMC/Inserm/CNRS), Paris, France. Magnetoencephalographic signals were collected continuously on a whole-head MEG system with 151 axial gradiometers (Omega 151, CTF Systems, Port Coquitlam, British Columbia, Canada) at a sampling rate of 1250 Hz with a 200 Hz low-pass filter. Seventeen external reference gradiometers and magnetometers were used to apply a synthetic third-order gradient to all MEG signals for ambient field correction. Three small coils were attached to reference landmarks on the participant (left and right pre-auricular points, plus nasion) in order to monitor head position and to provide co-registration with the anatomical MRI. Head position was recorded and checked before each stimulus block. The recording also included the signal of a photodiode that detected the actual appearance of the stimuli on the screen inside the MEG room. This allowed us to correct for the delay introduced by the video projector (20 ms) and to average event-related magnetic fields (ERFs) precisely time-locked to the actual onset of the face stimulus. Vertical and horizontal eye movements were monitored through bipolar Ag/AgCl leads placed above and below the subject's dominant eye and at the outer canthi of both eyes, respectively.
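
A minimal sketch of the photodiode-based realignment described above is given below (Python/NumPy). The threshold-crossing detection and function names are our assumptions, not the authors' acquisition code; only the 20 ms projector delay and the 1250 Hz sampling rate come from the text.

```python
import numpy as np

FS = 1250                     # sampling rate (Hz), from the text
PROJECTOR_DELAY_S = 0.020     # 20 ms video-projector delay, from the text

def photodiode_onsets(photodiode, threshold):
    """Sample indices where the photodiode signal crosses threshold upward."""
    above = photodiode > threshold
    return np.flatnonzero(~above[:-1] & above[1:]) + 1

def realign_events(trigger_samples, photodiode, threshold):
    """Replace each parallel-port trigger by the nearest photodiode onset,
    so epochs are time-locked to the actual on-screen stimulus."""
    onsets = photodiode_onsets(photodiode, threshold)
    return np.array([onsets[np.argmin(np.abs(onsets - t))]
                     for t in trigger_samples])

def realign_fixed(trigger_samples):
    """Fallback: shift triggers by the fixed 20 ms projector delay."""
    return trigger_samples + int(PROJECTOR_DELAY_S * FS)
```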

MRI Acquisition

A structural MRI scan was obtained for each participant on a Siemens 3T Trio TIM scanner at the CENIR, CRICM – UPMC/Inserm/CNRS, Paris, France (MPRAGE sequence, TR: 2300 ms, TE: 4.18 ms, FA: 9°, voxel size: 1×1×1 mm3, sagittal scans, 248×256 voxels/slice, 176 slices).

Event-related Magnetic Fields (ERFs)

Trials contaminated by eye movements, blinks, or muscular activity were rejected manually upon visual inspection of the MEG and EOG signals. The mean number of trials included in the ERF averages did not differ across conditions (mean number of trials averaged ±SEM: 81.4±2.8, 80.8±3.1, 82.1±2.9, and 80.9±3.0 for fearful faces with direct gaze, fearful faces with averted gaze, neutral faces with direct gaze, and neutral faces with averted gaze, respectively; all F(1,14)<1.5, all p>.2). Event-related magnetic fields (ERFs) were then averaged for each condition between −200 ms and +600 ms (0 ms = face onset). Finally, the data were baseline-corrected using the 200 ms preceding face onset and digitally low-pass filtered at 40 Hz.
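
The ERF pipeline described above (epoching from −200 to +600 ms, averaging, baseline correction, 40 Hz low-pass) can be sketched as follows (Python with NumPy/SciPy). A zero-phase Butterworth filter is assumed here, since the exact digital filter is not specified in the text.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1250                       # sampling rate (Hz)
PRE, POST = 0.2, 0.6            # epochs from -200 ms to +600 ms

def epoch(data, onsets):
    """data: (n_sensors, n_samples); onsets: face-onset sample indices
    (assumed far enough from the recording edges)."""
    n0, n1 = int(PRE * FS), int(POST * FS)
    return np.stack([data[:, t - n0 : t + n1] for t in onsets])

def erf(epochs):
    """Average clean trials, baseline-correct on the 200 ms before onset,
    then zero-phase low-pass filter the average at 40 Hz."""
    avg = epochs.mean(axis=0)                              # (sensors, time)
    baseline = avg[:, : int(PRE * FS)].mean(axis=1, keepdims=True)
    avg = avg - baseline
    b, a = butter(4, 40 / (FS / 2), btype="low")           # 4th order assumed
    return filtfilt(b, a, avg, axis=1)
```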

Source Localization

Forward problem.

For each subject, a tessellated envelope of the neocortex was obtained with the free Anatomist/BrainVISA software (http://brainvisa.info). Moreover, the amygdala was segmented from each individual MRI with the recently developed method of Chupin and coll. [52]. This method is based on a competitive region-growing approach and was run in the BrainVISA environment. It provides tessellated surfaces of the amygdala and hippocampus. We thus obtained three tessellated surfaces that were used to distribute the elementary equivalent current dipoles (ECDs) of our source imaging model (Figure 2). ECDs were evenly distributed perpendicular to the surfaces of the neocortex and of the hippocampus, in order to model the macrocolumns of pyramidal cells in these structures. Considering the heterogeneous composition of the amygdala in terms of nuclei and histology, we chose to transform the segmented amygdala surface into a volumetric grid. Orthogonal trihedral ECDs were then placed at each node of this volumetric grid for each subject [50]. The average number of sources distributed in the amygdala model is summarized in Table 1. The overlapping-spheres method was then used to compute the head model for each subject with the Brainstorm toolbox [66], using the individual head mesh and sensor locations [67]. The Brainstorm toolbox is documented and freely available for download online under the GNU general public license (http://neuroimage.usc.edu/brainstorm). The gain matrices were then obtained for each structure (neocortex, amygdala, and hippocampus) and concatenated.
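
A minimal sketch of how such a volumetric source grid might be built from the individual amygdala segmentation is given below (Python/NumPy). The grid spacing and the voxel-to-mm bookkeeping are our assumptions, not the authors' implementation.

```python
import numpy as np

def amygdala_source_grid(mask, affine, spacing_mm=2.0):
    """Regular grid of source nodes inside a binary amygdala mask.

    mask   : 3-D boolean array from the individual segmentation [52]
    affine : 4x4 voxel-to-scanner-mm transform of the MRI
    Returns node positions (n, 3) in mm; the 2 mm spacing is an assumption."""
    ijk = np.argwhere(mask)
    xyz = (affine @ np.c_[ijk, np.ones(len(ijk))].T).T[:, :3]
    # Snap voxel centers to a coarse grid and keep one node per grid cell.
    keys = np.round(xyz / spacing_mm).astype(int)
    _, keep = np.unique(keys, axis=0, return_index=True)
    return xyz[np.sort(keep)]

def trihedral_orientations(n_nodes):
    """Three orthogonal unit dipoles (x, y, z) at every node, as in [50]."""
    return np.tile(np.eye(3), (n_nodes, 1))   # shape (3 * n_nodes, 3)
```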

Figure 2. Illustration of the anatomical segmentation of the amygdala and the hippocampus from the individual T1 MRI scan of a typical subject.

On the left: Amygdala (in green) and hippocampus (in red) segmentation masks obtained with the method of Chupin and coll. (2007) are visualized on a horizontal view of the participant’s anatomical MRI. On the right: Top view of the tessellated surfaces of the amygdala (in green) and the hippocampus (in red) merged with the tessellated cortical surface (obtained with BrainVisa) from the same individual’s MRI scan.

https://doi.org/10.1371/journal.pone.0074145.g002

Table 1. Mean number of sources (±SEM) distributed in the amygdala volume and over the cortical surface across subjects.

https://doi.org/10.1371/journal.pone.0074145.t001

Inverse problem.

For each subject and each condition, the activation amplitude of each ECD of the neocortex, amygdala, and hippocampus was estimated at every time point with the 'Deep Brain Activity' (DBA) distributed source imaging model [50], which is based on weighted minimum norm estimation [68] computed with the default values of Brainstorm (weighting factor = 0.4; Tikhonov parameter = 10% of the maximum singular value of the lead field).
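
For readers unfamiliar with depth-weighted minimum norm estimation, the following compact sketch illustrates the estimator with the regularization described above (Tikhonov parameter set to 10% of the largest singular value of the lead field). The depth-weighting scheme and the exact placement of the regularization term follow one common convention and may differ in detail from Brainstorm's internals.

```python
import numpy as np

def wmne_kernel(G, depth_weight=0.4):
    """Depth-weighted minimum-norm inverse kernel.

    G : lead-field / gain matrix, shape (n_sensors, n_sources).
    Source weights w_i = ||g_i||^(-2 * depth_weight) form a diagonal source
    covariance (one common depth-weighting convention; the 0.4 factor is
    from the text). Returns a kernel K of shape (n_sources, n_sensors)."""
    w = np.sum(G ** 2, axis=0) ** (-depth_weight)
    # Tikhonov parameter: 10% of the largest singular value of G (text);
    # whether it enters as lambda or lambda^2 is convention-dependent.
    lam = 0.1 * np.linalg.svd(G, compute_uv=False)[0]
    gram = (G * w) @ G.T + (lam ** 2) * np.eye(G.shape[0])
    return (w[:, None] * G.T) @ np.linalg.inv(gram)

# Source time series are then J = K @ M, with M the (n_sensors, n_times) data.
```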

For the amygdala, the norm of the vector resulting from the trihedral sources was computed at each node of the volumetric grid. The mean time course of amygdala activity was then obtained by averaging all vector norms in the amygdala volume, separately for the left and right amygdala, for each subject and each condition.
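
This vector-norm reduction is straightforward; a sketch is given below (Python/NumPy). The source ordering, three consecutive rows per grid node, is our assumption.

```python
import numpy as np

def amygdala_time_course(J):
    """J : (3 * n_nodes, n_times) estimated currents, three orthogonal
    components per grid node. Returns the mean vector norm over the
    amygdala volume at each time point."""
    n_times = J.shape[1]
    Jxyz = J.reshape(-1, 3, n_times)        # (n_nodes, 3, n_times)
    norms = np.linalg.norm(Jxyz, axis=1)    # vector norm per node and time
    return norms.mean(axis=0)               # average over the volume
```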

For the neocortex, the time-resolved individual cortical maps were projected onto the default anatomy of the Brainstorm toolbox, which consists of the segmented cortical surface (15,000 vertices) of the MNI/Colin27 brain [69]. These data were transformed into z-scores with respect to the mean and standard deviation of dipole current amplitude during the baseline period, and grand-averaged across subjects for the purpose of region-of-interest definition.
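
The baseline z-scoring step can be sketched as follows (Python/NumPy).

```python
import numpy as np

def baseline_zscore(J, n_baseline):
    """z-score each source's time series against its own baseline
    (the n_baseline samples preceding face onset)."""
    base = J[:, :n_baseline]
    mu = base.mean(axis=1, keepdims=True)
    sd = base.std(axis=1, keepdims=True)
    return (J - mu) / sd
```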

Data Measurements and Statistical Analyses

We measured the mean amplitude of the amygdala activity in two time ranges selected on the basis of the peaks of activity observed across subjects: i) between 130 and 170 ms, to encompass the early prominent peak of the amygdala response across subjects, and ii) in four consecutive 40-ms time windows between 190 and 350 ms, to encompass the secondary peaks of amygdala activity, which resulted in a sustained response from about 200 ms onwards across subjects. These data were analyzed using analyses of covariance (ANCOVAs) with emotional expression (fearful/neutral), gaze (direct/averted), and hemisphere (left/right) as within-subject factors, and the participant's anxiety score as a covariate. The window of measurement (190–230 ms/230–270 ms/270–310 ms/310–350 ms) was introduced as an additional within-subject factor for the analysis of the mean amplitude of amygdala activity between 190 and 350 ms. Greenhouse-Geisser correction was applied for comparisons with more than one degree of freedom; the Greenhouse-Geisser epsilon (εGG) value used for the adjustment of the degrees of freedom is then reported.
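
The windowed mean-amplitude measurements themselves are simple; the sketch below illustrates them with the sampling parameters reported above (Python/NumPy; the index arithmetic assumes epochs starting 200 ms before face onset).

```python
import numpy as np

FS = 1250      # sampling rate (Hz)
T0 = 0.2       # epoch starts 200 ms before face onset

def window_mean(tc, t_start, t_end):
    """Mean amplitude of a source time course in [t_start, t_end] seconds
    relative to face onset."""
    i0 = int((T0 + t_start) * FS)
    i1 = int((T0 + t_end) * FS)
    return tc[i0:i1].mean()

# Early peak window and the four consecutive late windows used in the text:
early = (0.130, 0.170)
late = [(0.190, 0.230), (0.230, 0.270), (0.270, 0.310), (0.310, 0.350)]
```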

For the neocortex, two regions of interest (ROIs) – where prominent peak activity was observed in the same time range as in the amygdala – were defined in the ventral and lateral occipito-temporal regions. These ROIs were centered on the maximum of activation observed on the grand-averaged, z-score-normalized cortical maps between 130 and 170 ms. The raw (non-normalized) time series of current dipoles were then extracted from these ROIs (77 and 82 vertices, respectively) for each subject and each condition, and the mean current amplitude was computed in each ROI in the same time windows as for the amygdala. These data were analyzed using ANCOVAs in the same way as the amygdala activity.

Results

As can be seen in Figure 3, the grand mean of event-related magnetic fields (ERFs) at the sensor level showed the classical succession of visual ERF components, with a first peak at 80 ms post-stimulus onset, followed by a prominent peak of magnetic signal at 106 ms (M100), and then the typical M170 pattern to faces – with a main inward-flowing field over the right hemisphere and a main outward-flowing field over the left hemisphere – which peaked here at 144 ms. We localized the sources of magnetic activity in each subject, using individual amygdala volumes in addition to the cortical surface as our source imaging model, in order to investigate amygdala responses to fearful and neutral faces with direct and averted gaze.

Figure 3. Event-related magnetic fields (ERFs) in response to faces.

On the top: Maps (top-view of the head) of the ERFs at 80, 106 and 144 ms, averaged across all subjects and conditions. Below: Superimposed time courses of the ERFs over the 151 sensors, averaged across all subjects and conditions.

https://doi.org/10.1371/journal.pone.0074145.g003

Amygdala

The activity extracted from the amygdala sources showed a sharp increase from about 70 to 80 ms post-stimulus onset, reaching an early maximum between 100 and 170 ms in every subject, which resulted in a prominent average peak of activity at ∼140 ms (Figure 4A). This was followed by secondary peaks of activity, reflected in a sustained response from about 200 ms in the across-subjects average. We performed mean amplitude measurements of these amygdala responses to the faces.

Figure 4. Amygdala responses to the faces.

A) Time course of the right and left amygdala responses to fearful (in red) and neutral (in black) faces with direct (solid line) and averted (dashed line) gaze. The amygdala activity averaged across the 15 subjects is presented. The time windows where the mean amplitude of amygdala activity was measured are shaded in grey. B) Plots of the effects of emotion and gaze on amygdala activity between 130 and 170 ms and between 190 and 350 ms. On the left: a main effect of the emotion conveyed by the face was observed between 130 and 170 ms. In the middle and on the right: a main effect of gaze direction, qualified by an interaction with emotion and hemisphere, was observed between 190 and 350 ms. This reflected a significantly greater response to fearful faces with direct gaze than to fearful faces with averted gaze and to neutral faces with direct gaze in the right amygdala. On every plot, the error bars represent the standard errors of the means across subjects (SEM). C) Correlation between amygdala activity and the participants' anxiety scores (STAI). This correlation was observed in both time ranges of amygdala activity measurement (130–170 ms and 190–350 ms).

https://doi.org/10.1371/journal.pone.0074145.g004

First, we analyzed the mean amplitude of amygdala activity between 130 and 170 ms in order to encompass the major part of the early prominent peak response across subjects. The ANCOVA with emotional expression (fearful vs. neutral), gaze direction (direct vs. averted), and hemisphere (left vs. right) as within-subject factors, and anxiety level as a continuous covariate, showed a significant main effect of emotion (F(1, 13) = 29.14, p = .0001), revealing a greater mean amplitude of amygdala activity in response to fearful than to neutral faces between 130 and 170 ms (Figure 4B). There was also a main effect of the anxiety covariate (F(1, 13) = 6.08, p<.05), showing that amygdala activity increased with the participants' anxiety level (Figure 4C). The ANCOVA did not reveal any significant interaction between emotion and anxiety level (F<1), suggesting independent effects of these variables. In other words, anxiety level modulated the mean amplitude of amygdala activity but had no significant influence on the effect of emotion between 130 and 170 ms. There were no other significant main effects or interactions.

Second, in order to gain insight into the temporal unfolding of the subsequent sustained activity, we measured the mean amplitude of amygdala activity in four consecutive 40-ms time windows between 190 and 350 ms (190–230 ms/230–270 ms/270–310 ms/310–350 ms). The ANCOVA with emotional expression, gaze direction, hemisphere, and time window as within-subject factors, and anxiety level as a continuous covariate, did not show any significant main effect of emotion. However, there was an interaction between emotion and time window (F(3,39) = 4.23, εGG = 0.64, p<.05), reflecting an increase in the mean amplitude of amygdala activity in response to fearful relative to neutral faces between 310 and 350 ms only (F(1, 13) = 10.83, p<.01). Moreover, there was a significant main effect of gaze direction (F(1, 13) = 7.59, p<.02): the mean amplitude of amygdala activity between 190 and 350 ms was greater for faces with direct gaze than for faces with averted gaze. In addition, the three-way interaction between gaze direction, emotion, and hemisphere was significant (F(1, 13) = 5.11, p<.05). Planned comparisons revealed an increased amygdala response to fearful faces with direct relative to averted gaze in the right amygdala (F(1, 13) = 5.44, p<.05). No such effect of gaze direction was observed for neutral faces (F<1). The three-way interaction also reflected that right amygdala activity between 190 and 350 ms was greater for fearful than for neutral faces only when the faces were seen under direct gaze (F(1, 13) = 7.88, p<.02; F<1 for the effect of emotion under averted gaze). By contrast, in the left amygdala, there was only a trend toward a main effect of gaze (F(1, 13) = 4.11, p = .06), with globally greater amplitude of amygdala response for faces with direct gaze compared to faces with averted gaze. Finally, the effect of anxiety level was significant (F(1, 13) = 26.72, p<.0005), indicating that the mean amplitude of amygdala activity between 190 and 350 ms covaried with the participants' anxiety level. There was no other significant effect or interaction.

Cortical Sources

We examined the cortical sources of magnetic activity concurrent with the amygdala peak responses. Prominent cortical activity was observed between 130 and 170 ms in the bilateral fusiform regions, predominantly in the right hemisphere, extending into the lateral occipital regions (Figure 5A). No additional cortical sources of magnetic activity emerged between 190 and 350 ms. We defined four source clusters encompassing the early peak of activity observed in the right and left fusiform and lateral occipital regions, respectively (Figure 5B and C). We performed mean amplitude analyses of the activity from each of these clusters in the same time ranges as for the amygdala.

Figure 5. Cortical sources of activity.

A) Mean cortical current maps between 130 and 170 ms. The colour-coded activity of cortical dipole sources (in z-score units), averaged across all subjects and conditions, is superimposed on the ventral, back, right and left lateral views of an inflated template brain. Only sources with amplitude above 60% of the scale maximum activity are displayed. B) Time course of cortical source activity in fusiform regions under each experimental condition. The cortical source activity averaged across all subjects over the right and left fusiform clusters respectively (displayed in red on a ventral view of the brain, in a small inset) is presented. C) Time course of cortical source activity in lateral occipital regions under each experimental condition. The cortical source activity averaged across all subjects over the right and left lateral occipital clusters respectively (displayed in red on lateral views of the template brain, in small insets) is presented. The time windows where the mean amplitude of cortical source activity was measured are shaded in grey.

https://doi.org/10.1371/journal.pone.0074145.g005

For the fusiform source clusters, the ANCOVA performed on the mean amplitude of fusiform responses between 130 and 170 ms showed a main effect of emotion (F(1, 13) = 22.72, p<.0005) and a main effect of anxiety level (F(1, 13) = 9.39, p<.01). The mean amplitude of fusiform responses was greater for fearful than for neutral faces and increased with the participants' anxiety level. There was also a significant effect of hemisphere (F(1, 13) = 5.87, p<.05), reflecting greater activity in the right than in the left fusiform region. The ANCOVA did not reveal any other significant main effect or interaction.

Furthermore, the analysis of the mean amplitude of fusiform activity in four consecutive 40-ms time windows between 190 and 350 ms showed a significant interaction between emotion and time window (F(3,39) = 5.68, εGG = 0.76, p<.005), indicative of enhanced fusiform activity for fearful compared to neutral faces between 310 and 350 ms only (F(1, 13) = 10.83, p<.01). There was no significant effect of gaze (F<1). The interaction between emotion and gaze was significant (F(1, 13) = 4.99, p<.05), revealing an effect of emotion on fusiform activity (in the form of an enhanced response to fearful faces) under direct gaze only (F(1, 13) = 6.44, p<.05). No other effect or interaction reached significance.

For the lateral occipital source clusters, the ANCOVA performed on the mean amplitude of lateral occipital responses between 130 and 170 ms showed a main effect of emotion (F(1, 13) = 7.89, p<.05) and a main effect of anxiety level (F(1, 13) = 7.52, p<.05). The mean amplitude of lateral occipital responses was greater for fearful than for neutral faces and increased with the participants' anxiety level. No other effect or interaction reached significance.

The ANCOVA performed on the mean amplitude of lateral occipital activity between 190 and 350 ms showed a significant interaction between emotion and time window, reflecting greater lateral occipital activity for fearful than for neutral faces in the 310–350 ms time window only (F(1, 13) = 18.80, p<.001). There was also a main effect of anxiety level (F(1, 13) = 4.70, p<.05), showing that the mean amplitude of lateral occipital activity between 190 and 350 ms covaried with anxiety level. No other significant effect or interaction was found.

Discussion

The present study aimed at investigating the temporal dynamics of amygdala activity in response to faces with different emotional expressions (fearful/neutral) and gaze directions (direct/averted). Our hypothesis was that the amygdala may participate in multiple stages of face processing, with effects of emotion and gaze in different time ranges. In addition, the participants' anxiety level was taken into account as a modulatory variable. We used MEG together with an original source estimation technique, based on the anatomical segmentation of medial temporal lobe structures, to localize the amygdala volume in single subjects. We found amygdala activation starting from about 80 ms after face onset and reaching a prominent peak of activity at about 140 ms. Emotion modulated amygdala activity in two time ranges: between 130 and 170 ms, and later between 310 and 350 ms. Gaze direction influenced amygdala activity in a different time range (190–350 ms). In addition, there was a sustained modulation of amygdala activity by anxiety level.

The first effect of emotion was observed between 130 and 170 ms, with a larger amplitude of amygdala activity for fearful compared to neutral faces. This is consistent with recent papers that proposed other methods to estimate amygdala sources from MEG signals and showed emotional modulation of amygdala activity in similar time ranges [29], [31], [32], [35], [37], [70], [71]. Furthermore, the timing of our first peak of activation in the amygdala coincides with the early peak of amygdala activity modulated by emotional expression reported in recent intracranial studies [25], [26]. This early peak of amygdala activity may reflect an initial, rapid stage of face appraisal and emotion detection. Indeed, the subjects of our study were engaged in a simple task of detecting an occasional blue circle target. Thus, amygdala activity in response to the faces may be considered to reflect the automatic processes elicited when viewing faces with neutral and fearful expressions. This view is in line with that of Pourtois and coll. [25] and Sato and coll. [26]. It extends these previous studies by bringing converging evidence in healthy subjects using a noninvasive source localization method based on MEG signals. Our finding of early emotional modulation of amygdala activity during passive viewing of faces is also consistent with the appraisal theory of amygdala function: emotions conveyed by faces constitute particularly relevant stimuli that may be automatically and rapidly appraised, and the amygdala has been proposed as a key relevance appraisal structure [72].

Most interestingly, we also found emotional modulation of amygdala activity in a late time window (310–350 ms). This late modulation is likely to reflect a different stage of face and/or emotion processing. Scalp event-related potential studies have consistently observed effects of emotional facial expression in similar late time-windows, corresponding to the P300 or Late Positive Potential components (LPP or LPC). These late effects have been generally interpreted as reflecting “cognitive” stages of emotional processing, related in particular to accessing the emotional significance of the facial cue [23], [73] (for reviews see [22], [74]). Altogether, our results thus suggest that the amygdala is involved in multiple stages of emotional face processing.

Gaze is another highly relevant facial cue, particularly in relation to facial emotion perception. The gaze direction of the seen faces modulated amygdala activity between 190 and 350 ms, with greater activity for direct than for averted gaze. This effect was more marked in the right amygdala, where gaze and emotion interacted, revealing increased activity in response to direct vs. averted gaze for fearful faces only. Several brain imaging studies have provided evidence for amygdala activation by gaze cues [13], [14] and for its involvement in the integration of gaze direction and emotion cues [17]–[19], [63], [75]–[77]. Our data are in line with these prior studies and provide information about the timing of this process. They suggest that the amygdala initially codes the emotional expression and then codes information related to gaze direction; the coding of this information involved sustained activity, emphasizing the importance of gaze and of the integration of gaze and emotional expression cues. Two previous EEG studies reported an interaction between gaze and emotion at 200–300 ms [78], [79]; our results point to the amygdala as a core structure involved in this effect. Our findings also nicely complement and extend recent intracranial data [26], [27].

It is important to mention that the temporal dynamics of amygdala responses to gaze and emotion cues from faces may depend on the paradigm used. Indeed, in a recent MEG study, we found an early interaction between gaze and emotional expression cues over a right anterior temporo-frontal sensor set [80]. Although there was no source localization in that study, the topographical distribution of the effect suggested that the amygdala was involved, thus supporting an early integration of emotion and gaze cues. However, that study used a very different paradigm, depicting successive and dynamic changes in the gaze direction and emotional expressions of pairs of faces, which may have favored the early interaction observed. Furthermore, the sustained interaction between gaze direction and facial expression found here reflected enhanced amygdala activity to fearful faces with direct gaze (relative to fearful faces with averted gaze and to neutral faces with direct gaze). This is in line with previous fMRI studies that reported greater amygdala activation in response to direct compared to averted gaze fearful faces [63], [75]. However, several other fMRI studies have shown greater amygdala activation in response to averted compared to direct gaze fearful faces [17], [18]. All these studies used different tasks and paradigms, requiring an emotion intensity judgment [18], gender categorization [63], or passive fixation [17], and using either only emotional faces [17], [75] or a combination of neutral faces and different types of emotional faces [18], [63]. Altogether, the discrepant results regarding the direction of the interaction between gaze and emotion in the amygdala suggest that task and stimulus parameters may have a great impact on stimulus relevance, and hence on the pattern of amygdala responses [7], [72].

We also took anxiety into account as a potential modulatory variable of amygdala activity [61]–[63]. Amygdala activity was positively correlated with the participants' anxiety level, and this was observed in all time windows of measurement. This is, to our knowledge, the first evidence for an anxiety effect on amygdala activity using MEG. Our results are consistent with those of numerous fMRI studies that have demonstrated increased amygdala activation in response to fear-related stimuli in association with anxiety level, both in non-clinical populations [56], [58], [59], [63] and in several anxiety-related disorders including generalized anxiety disorder (GAD), post-traumatic stress disorder (PTSD), social anxiety, and phobia [62], [81] (see [62] and [57] for reviews). This enhanced amygdala activation has been related to a general hypervigilance for emotional – particularly threat-related – stimuli. Indeed, behavioral studies have revealed that anxious individuals show greater attention towards emotional stimuli [82]. Thus, our findings bring further support to the view that the amygdala is activated both by endogenous fear-related factors, such as anxiety level, and by exogenous relevant stimuli, such as fearful faces.

Moreover, the influence of anxiety level was pervasive: it was observed from the early peak of amygdala activity onwards and was sustained over the whole time range of our analysis. This is in line with the hypervigilance hypothesis, which postulates an impact of anxiety on the early stages of stimulus processing. It is also consistent with previous EEG and MEG studies that have reported an influence of anxiety at several stages of face processing, both in early (∼100 ms) and late (>200 ms) time ranges [83]–[94]. Our study extends these prior reports by providing, for the first time, direct evidence for an early (∼130 ms) influence of anxiety level on the amygdala sources of magnetic activity in response to faces.

It is interesting to note that the effect of anxiety was additive to those of emotion and gaze in the present study. Previous studies have reported that anxiety level modulated amygdala responsiveness to fearful expression and gaze cues [57]–[60], [63]. The reason why such modulation was not observed here is unclear. Some differences may arise from the use of different brain imaging methods. Although speculative, it may also be suggested that the lack of conditional modulation of the amygdala by anxiety is the counterpart of a greater non-specific reactivity of the amygdala in anxious individuals, which could undermine the neural signal-to-noise ratio when processing emotionally relevant environmental cues. In any case, our results are complementary to those of previous studies, suggesting that endogenous and exogenous fear-related factors may impact additively on amygdala activity.

The early peak of amygdala activity was concomitant with prominent cortical sources in the extrastriate visual cortex. These sources extended into the bilateral occipital regions and the fusiform regions (predominantly in the right hemisphere) and also peaked between 130 and 170 ms. This is consistent with the fact that the early peak of amygdala activity was concurrent with the M170 event-related field pattern at the scalp surface. The fusiform and lateral occipital responses were also modulated by emotion, with greater amplitude of activity in response to fearful than to neutral faces. Although the present study focused on source localization and therefore did not include ERF measurements at the scalp surface, these results are in line with previous studies that have reported emotional modulation of the N/M170 [95]–[99] (for a review, see [22]). In addition, in line with a recent study by Conty and coll. [76] combining EEG and fMRI, our findings suggest that amygdala activity contributes to the N/M170 recorded at the scalp surface.

The fusiform and lateral occipital responses were also enhanced for fearful relative to neutral faces between 310 and 350 ms. This agrees, although pointing to slightly later latencies, with ERP studies that have found an early posterior negativity to emotional relative to neutral stimuli, maximal between 200 and 350 ms, with sources in posterior occipito-temporo-parietal regions [100], [101] (for review, see [102]). The finding of concomitant effects of emotion in both the amygdala and extrastriate visual regions is consistent with the view that there is a tight functional coupling between the amygdala and regions of the visual pathway, involving recurrent, dynamic feed-forward and feedback flows of information between these regions [103]–[107]: the visual cortical regions and the amygdala seem to be involved dynamically and in concert in the multiple stages of face processing and emotion perception from faces.

With respect to the effects of gaze and anxiety level, the occipito-temporal regions and the amygdala showed more differentiated patterns. There was no significant effect of gaze direction in the occipito-temporal clusters, but there was an interaction between emotion and gaze in the fusiform regions between 190 and 350 ms, reflecting greater fusiform activity for fearful compared to neutral faces in the direct gaze condition only. The participants' anxiety level modulated activity in a sustained manner in the lateral occipital region only. Altogether, it seems that gaze direction and anxiety had a more limited impact on the posterior visual regions than on the amygdala. This is consistent with the central position attributed to the amygdala as a stimulus relevance appraisal system [72].

Limitations of the Method

Can amygdala activity really be detected and localized from magnetic responses recorded at the scalp surface? The estimation of the sources of electromagnetic signals collected outside the head requires solving an inverse problem, and it is therefore a delicate issue. In essence, the inverse problem is ill-posed because of the non-uniqueness of its solution. In order to constrain its resolution to a limited number of solutions, ideally a unique one, regularization methods have been introduced [108]. Here, we chose to apply a method based on the minimum norm estimate (MNE) [109], which uses a distributed model of the source space with fixed locations. The MNE solution to the inverse problem has the advantage of being unique and insensitive to initialization conditions, which are severe limitations of multiple-dipole models. It is widely used to estimate the source distribution with minimal energy (L2-norm) (see [68] for a review). The MNE is, however, biased towards superficial cortical sources, which is detrimental to the detection of deep sources. This is why we used the depth-weighted version of the classical MNE estimator (wMNE), which corrects for this main bias [110], [111]. In addition, our source model was based on a precise anatomical definition of the source space including the neocortex, amygdala, and hippocampus. In a recent assessment study, Attal and Schwartz [112] quantified the spatial error of subcortical source localization using wMNE and other inverse operators to estimate amygdala and other subcortical sources; their source model was similar to ours (with the addition of the thalamus). According to this simulation study, the spatial localization error in the case of an isolated amygdala activation can be expected to be less than 1 cm from the center of gravity of the actual neural currents. This value does not demonstrate high regional specificity, but it is a promising rating of the quality of our model. Importantly, in the presence of simultaneous cortical and subcortical activations, wMNE was shown to create fewer deep ghost sources than other inverse operators (dSPM and sLORETA). Moreover, to evaluate the regional specificity of the neural currents estimated from the amygdala, we performed complementary analyses of sources in the anterior lateral temporal regions, directly between the amygdala and the MEG sensors closest to it, and in the body of the hippocampus, near the amygdala region. The results of these analyses are provided in Figure S1. They showed that the time course of responses in both the anterior lateral temporal region and the hippocampus was notably different from the time course of amygdala responses, with, in particular, no or limited emotional modulation in the early time range (this emotional modulation was observed in the hippocampus only and could reflect some spreading or cross-talk of amygdala activity). Notably, hippocampal responses were much attenuated in comparison with amygdala responses (see Figure S1). As these structures lie at similar depths, this is a good hint of a focus of activity in the amygdala.

On a secondary note, it is worth recalling that amygdala responses to faces, and to emotional faces and gaze in particular, have been reported using other brain imaging modalities such as fMRI and intracerebral EEG recordings. Taken together with the histological and functional arguments raised in our Introduction, and with the above data from neighboring regions, these form converging lines of evidence supporting the view that we discerned actual amygdala responses.

In sum, we have provided a new method for the estimation of amygdala activity from MEG signals, using an imaging approach to the inverse problem [50] that included the individual cortical surface and the individually segmented amygdala volume [52] in the source space. This approach is complementary to those previously proposed using beamforming or dipole-fitting procedures [29], [31], [32], [35], [37], [70], [71] (see [68], [113] for reviews). It confirms the feasibility of studying amygdala responses with MEG, offering unique insight into the temporal dynamics of brain responses, including those of deep brain structures, as has been reported for the hippocampus [114]–[116] and the thalamus [36] (see [117] for review).

Conclusion

This study aimed at examining the neural response of the amygdala to fearful and neutral faces with direct and averted gaze using MEG. We used an original source imaging approach in which individually segmented amygdala volumes were included in the source space model. We showed that the amygdala is involved in multiple stages of face processing. There was a prominent early peak of amygdala activity between 130 and 170 ms that was enhanced for fearful faces. An effect of emotion was again observed between 310 and 350 ms, suggesting that the amygdala is involved in different stages of fearful face appraisal. Moreover, amygdala activity was modulated by gaze direction between 190 and 350 ms, with a greater response to direct gaze faces and a more marked effect of gaze for fearful faces in the right amygdala. Altogether, our results underline the role of the amygdala in the processing of social cues from faces. They promote MEG source imaging techniques as fruitful tools for the noninvasive study of the temporal dynamics of neural responses from cortical and subcortical brain regions in healthy subjects.

Supporting Information

Figure S1.

Time course of the neural responses to fearful and neutral faces with direct and averted gaze in the lateral anterior temporal region (A) and in the body of the hippocampus (B). A) The time courses of the cortical source activity averaged across all subjects over the right and left lateral anterior temporal clusters, respectively (displayed in red on lateral views of the template brain, in small insets), are presented. These time courses were notably different from the time course of amygdala responses, lacking the prominent peak of activation obtained in the amygdala between 130 and 170 ms; furthermore, the mean amplitude of anterior temporal activity between 130 and 170 ms was not modulated by emotional expression (F<1). In the later time range (190–350 ms), there was only a very localized effect of gaze, between 230 and 270 ms, in the right hemisphere and for fearful faces only (F(1, 13) = 5.90, p<.04; the interactions between gaze, time window, and emotion, and between gaze, time window, emotion, and hemisphere were significant: F(3,39) = 4.59, εGG = 0.88, p<.01 and F(3,39) = 3.95, εGG = 0.69, p<.02, respectively). B) The time courses of hippocampus body sources averaged across all subjects over the right and left hippocampus body clusters, respectively (defined with k-means in each individual, as displayed in red on a typical left hippocampus mesh, in the small inset), are presented. These time courses showed a peak of activity between 130 and 170 ms that was markedly attenuated in amplitude compared with the early amygdala peak, and a weak later sustained response. The mean amplitude of hippocampus activity between 130 and 170 ms yielded a significant effect of emotion (F(1, 13) = 8.32, p<.02) that could reflect some spreading or cross-talk of amygdala activity. In contrast with the results obtained for the amygdala, there was no effect of gaze on the mean amplitude of the hippocampus response between 190 and 350 ms (F(1, 13)<3, p>.1). The areas shaded in grey represent the time windows where mean amplitude measurements were performed (see main text).

https://doi.org/10.1371/journal.pone.0074145.s001

(TIF)

Author Contributions

Conceived and designed the experiments: TD SD RJ NG. Performed the experiments: TD SD NG. Analyzed the data: TD SD YA MC RJ SM NG. Contributed reagents/materials/analysis tools: TD YA MC. Wrote the paper: TD SD SM NG.

References

  1. LeDoux J, Bemporad JR (1997) The emotional brain. J Am Acad Psychoanal 25: 525–528.
  2. Whalen PJ, Phelps EA (2009) The human amygdala. Guilford Press. 465 p.
  3. Klüver H, Bucy PC (1939) Preliminary analysis of functions of the temporal lobes in monkeys. Arch Neurol Psychiatry 42: 979.
  4. Adolphs R, Tranel D, Damasio H, Damasio A (1994) Impaired recognition of emotion in facial expressions following bilateral damage to the human amygdala. Nature 372: 669–672.
  5. Adolphs R, Tranel D, Damasio H, Damasio AR (1995) Fear and the human amygdala. J Neurosci 15: 5879–5891.
  6. Adolphs R, Spezio M (2006) Role of the amygdala in processing visual social stimuli. Prog Brain Res 156: 363–378.
  7. Adolphs R (2010) What does the amygdala contribute to social cognition? Ann N Y Acad Sci 1191: 42–61.
  8. Vuilleumier P (2005) How brains beware: neural mechanisms of emotional attention. Trends Cogn Sci 9: 585–594.
  9. Ishai A, Schmidt CF, Boesiger P (2005) Face perception is mediated by a distributed cortical network. Brain Res Bull 67: 87–93.
  10. Pierce K, Müller RA, Ambrose J, Allen G, Courchesne E (2001) Face processing occurs outside the 'fusiform face area' in autism: evidence from functional MRI. Brain 124: 2059–2073.
  11. Winston JS, O'Doherty J, Dolan RJ (2003) Common and distinct neural responses during direct and incidental processing of multiple facial emotions. Neuroimage 20: 84–97.
  12. Sergerie K, Chochol C, Armony JL (2008) The role of the amygdala in emotional processing: A quantitative meta-analysis of functional neuroimaging studies. Neurosci Biobehav Rev 32: 811–830.
  13. George N, Driver J, Dolan RJ (2001) Seen gaze-direction modulates fusiform activity and its coupling with other brain areas during face processing. Neuroimage 13: 1102–1112.
  14. Kawashima R, Sugiura M, Kato T, Nakamura A, Hatano K, et al. (1999) The human amygdala plays an important role in gaze monitoring: A PET study. Brain 122: 779–783.
  15. Okada T, Sato W, Kubota Y, Usui K, Inoue Y, et al. (2008) Involvement of medial temporal structures in reflexive attentional shift by gaze. Soc Cogn Affect Neurosci 3: 80–88.
  16. Cristinzio C, N'Diaye K, Seeck M, Vuilleumier P, Sander D (2010) Integration of gaze direction and facial expression in patients with unilateral amygdala damage. Brain 133: 248–261.
  17. Hadjikhani N, Hoge R, Snyder J, De Gelder B (2008) Pointing with the eyes: the role of gaze in communicating danger. Brain Cogn 68: 1–8.
  18. N'Diaye K, Sander D, Vuilleumier P (2009) Self-relevance processing in the human amygdala: Gaze direction, facial expression, and emotion intensity. Emotion 9: 798–806.
  19. Sato W, Yoshikawa S, Kochiyama T, Matsumura M (2004) The amygdala processes the emotional significance of facial expressions: an fMRI investigation using the interaction between expression and face direction. Neuroimage 22: 1006–1013.
  20. Sato W, Kochiyama T, Uono S, Yoshikawa S (2010) Amygdala integrates emotional expression and gaze direction in response to dynamic facial expressions. NeuroImage 50: 1658–1665.
  21. Adams RB, Franklin RG, Kveraga K, Ambady N, Kleck RE, et al. (2012) Amygdala responses to averted vs direct gaze fear vary as a function of presentation speed. Soc Cogn Affect Neurosci 7: 568–577.
  22. George N (2013) The facial expression of emotions. In: Vuilleumier P, Armony J, editors. Handbook of Human Affective Neuroscience. Cambridge University Press. 171–196.
  23. Krolak-Salmon P, Hénaff MA, Vighetto A, Bertrand O, Mauguière F (2004) Early amygdala reaction to fear spreading in occipital, temporal, and frontal cortex: a depth electrode ERP study in human. Neuron 42: 665–676.
  24. Meletti S, Cantalupo G, Benuzzi F, Mai R, Tassi L, et al. (2011) Fear and happiness in the eyes: An intra-cerebral event-related potential study from the human amygdala. Neuropsychologia 50: 44–54.
  25. Pourtois G, Spinelli L, Seeck M, Vuilleumier P (2010) Temporal precedence of emotion over attention modulations in the lateral amygdala: Intracranial ERP evidence from a patient with temporal lobe epilepsy. Cogn Affect Behav Neurosci 10: 83–93.
  26. Sato W, Kochiyama T, Uono S, Matsuda K, Usui K, et al. (2011) Rapid amygdala gamma oscillations in response to fearful facial expressions. Neuropsychologia 49: 612–617.
  27. Sato W, Kochiyama T, Uono S, Matsuda K, Usui K, et al. (2011) Rapid amygdala gamma oscillations in response to eye gaze. PLoS One 6: e28188.
  28. Garnero L, Baillet S, Marin G, Renault B, Guérin C, et al. (1999) Introducing priors in the EEG/MEG inverse problem. Electroencephalogr Clin Neurophysiol Suppl 50: 183–189.
  29. Cornwell BR, Carver FW, Coppola R, Johnson L, Alvarez R, et al. (2008) Evoked amygdala responses to negative faces revealed by adaptive MEG beamformers. Brain Res 1244: 103–112.
  30. Streit M, Ioannides AA, Liu L, Wölwer W, Dammers J, et al. (1999) Neurophysiological correlates of the recognition of facial expressions of emotion as revealed by magnetoencephalography. Cogn Brain Res 7: 481–491.
  31. Streit M, Dammers J, Simsek-Kraues S, Brinkmeyer J, Wölwer W, et al. (2003) Time course of regional brain activations during facial emotion recognition in humans. Neurosci Lett 342: 101–104.
  32. Liu L, Ioannides AA, Streit M (1999) Single trial analysis of neurophysiological correlates of the recognition of complex objects and facial expressions of emotion. Brain Topogr 11: 291–303.
  33. Bayle DJ, Henaff MA, Krolak-Salmon P (2009) Unconsciously perceived fear in peripheral vision alerts the limbic system: a MEG study. PLoS One 4: e8207.
  34. Hung Y, Smith ML, Bayle DJ, Mills T, Cheyne D, et al. (2010) Unattended emotional faces elicit early lateralized amygdala-frontal and fusiform activations. Neuroimage 50: 727–733.
  35. Liu L, Ioannides AA (2010) Emotion separation is completed early and it depends on visual field presentation. PLoS ONE 5: e9790.
  36. Luo Q, Holroyd T, Jones M, Hendler T, Blair J (2007) Neural dynamics for facial threat processing as revealed by gamma band synchronization using MEG. Neuroimage 34: 839–847.
  37. Maratos FA, Mogg K, Bradley BP, Rippon G, Senior C (2009) Coarse threat images reveal theta oscillations in the amygdala: A magnetoencephalography study. Cogn Affect Behav Neurosci 9: 133–143.
  38. 38. Hämäläinen M, Hari R, Ilmoniemi RJ, Knuutila J, Lounasmaa OV (1993) Magnetoencephalography–theory, instrumentation, and applications to noninvasive studies of the working human brain. Rev Mod Phys 65: 413–497.
  39. 39. Schumann CM, Amaral DG (2005) Stereological estimation of the number of neurons in the human amygdaloid complex. J Comp Neurol 491: 320–329.
  40. 40. Washburn MS, Moises HC (1992) Electrophysiological and morphological properties of rat basolateral amygdaloid neurons in vitro. J Neurosci 12: 4066–4079.
  41. 41. Kreiman G, Koch C, Fried I (2000) Category-specific visual responses of single neurons in the human medial temporal lobe. Nat Neurosci 3: 946–953.
  42. 42. Kuraoka K, Nakamura K (2007) Responses of single neurons in monkey amygdala to facial and vocal emotions. J Neurophysiol 97: 1379–1387.
  43. 43. Quiroga RQ, Reddy L, Kreiman G, Koch C, Fried I (2005) Invariant visual representation by single neurons in the human brain. Nature 435: 1102–1107.
  44. 44. Rolls ET (2000) Neurophysiology and functions of the primate amygdala, and the neural basis of emotion. In: Aggleton JP, editor. The amygdala: A functional analysis. Oxford: Oxford University Press. 447–478.
  45. 45. Yang J, Bellgowan PSF, Martin A (2012) Threat, domain-specificity and the human amygdala. Neuropsychologia 50: 2566–2572.
  46. 46. Mormann F, Dubois J, Kornblith S, Milosavljevic M, Cerf M, et al. (2011) A category-specific response to animals in the right human amygdala. Nat Neurosci 14: 1247–1249.
  47. 47. Pakkenberg B, Gundersen HJ (1997) Neocortical neuron number in humans: effect of sex and age. J Comp Neurol 384: 312–320.
  48. 48. Dumas T, Attal Y, Chupin M, Jouvent R, Dubal S, et al. (2010) MEG study of amygdala responses during the perception of emotional faces and gaze. 17th International Conference on Biomagnetism Advances in Biomagnetism–Biomag2010: 330–333.
  49. 49. Dumas T, Attal Y, Dubal S, Jouvent R, George N (2011) Detection of activity from the amygdala with magnetoencephalography. IRBM 32: 42–47.
  50. 50. Attal Y, Bhattacharjee M, Yelnik J, Cottereau B, Lefèvre J, et al. (2009) Modelling and detecting deep brain activity with MEG and EEG. IRBM 30: 133–138.
  51. 51. Hari R, Joutsiniemi SL, Sarvas J (1988) Spatial resolution of neuromagnetic records: theoretical calculations in a spherical model. Electroencephalogr Clin Neurophysiol 71: 64–72.
  52. 52. Chupin M, Mukuna-Bantumbakulu AR, Hasboun D, Bardinet E, Baillet S, et al. (2007) Anatomically constrained region deformation for the automated segmentation of the hippocampus and the amygdala: Method and validation on controls and patients with Alzheimer’s disease. NeuroImage 34: 996–1019.
  53. 53. Adolphs R (2002) Neural systems for recognizing emotion. Curr Opin Neurobiol 12: 169–177.
  54. 54. Haxby JV, Hoffman EA, Gobbini MI (2002) Human neural systems for face recognition and social communication. Biol Psychiatry 51: 59–67.
  55. 55. Ishai A (2008) Let’s face it: it’sa cortical network. Neuroimage 40: 415–419.
  56. 56. Bishop S, Duncan J, Brett M, Lawrence AD (2004) Prefrontal cortical function and anxiety: controlling attention to threat-related stimuli. Nat Neurosci 7: 184–188.
  57. 57. Calder AJ, Ewbank M, Passamonti L, Calder AJ, Ewbank M, et al. (2011) Personality influences the neural responses to viewing facial expressions of emotion. Philos Trans R Soc Lond B Biol Sci 366: 1684–1701.
  58. 58. Etkin A, Klemenhagen KC, Dudman JT, Rogan MT, Hen R, et al. (2004) Individual differences in trait anxiety predict the response of the basolateral amygdala to unconsciously processed fearful faces. Neuron 44: 1043–1055.
  59. 59. Ewbank MP, Lawrence AD, Passamonti L, Keane J, Peers PV, et al. (2009) Anxiety predicts a differential neural response to attended and unattended facial signals of anger and fear. Neuroimage 44: 1144.
  60. 60. Stein M, Simmons A, Feinstein J, Paulus M (2007) Increased amygdala and insula activation during emotion processing in anxiety-prone subjects. Am J Psychiatry 164: 318–327.
  61. 61. Bishop SJ (2007) Neurocognitive mechanisms of anxiety: an integrative account. Trends Cogn Sci 11: 307–316.
  62. 62. Etkin A, Wager TD (2007) Functional neuroimaging of anxiety: a meta-analysis of emotional processing in PTSD, social anxiety disorder, and specific phobia. Am J Psychiatry 164: 1476–1488.
  63. 63. Ewbank MP, Fox E, Calder AJ (2010) The interaction between gaze and facial expression in the amygdala and extended amygdala is modulated by anxiety. Front Hum Neurosci 4: 56.
  64. 64. Spielberger CD (1983) Manual for the State-Trait Anxiety Inventory STAI (Form Y)(“ Self-Evaluation Questionnaire”).
  65. 65. Lundqvist D, Flykt A, Öhman A (1998) The Karolinska Directed Emotional Faces-KDEF. CD-ROM from Department of Clinical Neuroscience, Psychology section, Karolinska Institutet, Stockholm, Sweden. ISBN 91–630–7164–9.
  66. 66. Tadel F, Baillet S, Mosher JC, Pantazis D, Leahy RM (2011) Brainstorm: A User-Friendly Application for MEG/EEG Analysis. Comput Intell Neurosci 2011: 1–13.
  67. 67. Huang MX, Mosher JC, Leahy RM (1999) A sensor-weighted overlapping-sphere head model and exhaustive head model comparison for MEG. Phys Med Biol 44: 423–440.
  68. 68. Baillet S, Mosher JC, Leahy RM (2001) Electromagnetic brain mapping. IEEE Signal Process Mag 18: 14–30.
  69. 69. Holmes CJ, Hoge R, Collins L, Woods R, Toga AW, et al. (1998) Enhancement of MR images using registration for signal averaging. J Comput Assist Tomogr 22: 324–333.
  70. 70. Garolera M, Coppola R, Mu\noz KE, Elvev\aag B, Carver FW, et al. (2007) Amygdala activation in affective priming: a magnetoencephalogram study. Neuroreport 18: 1449–1453.
  71. 71. Moses SN, Houck JM, Martin T, Hanlon FM, Ryan JD, et al. (2007) Dynamic neural activity recorded from human amygdala during fear conditioning using magnetoencephalography. Brain Res Bull 71: 452–460.
  72. 72. Sander D (2013) Models of Emotion. In: Armony J, Vuilleumier P, editors. The Cambridge Handbook of Human Affective Neuroscience. Cambridge University Press. 5–53.
  73. 73. Munte TF, Brack M, Grootheer O, Wieringa BM, Matzke M, et al. (1998) Brain potentials reveal the timing of face identity and expression judgments. Neurosci Res 30: 25–34.
  74. 74. Vuilleumier P, Pourtois G (2007) Distributed and interactive brain mechanisms during emotion face perception: evidence from functional neuroimaging. Neuropsychologia 45: 174–194.
  75. 75. Adams Jr RB, Gordon HL, Baird AA, Ambady N, Kleck RE (2003) Effects of gaze on amygdala sensitivity to anger and fear faces. Science 300: 1536.
  76. 76. Conty L, Dezecache G, Hugueville L, Grèzes J (2012) Early binding of gaze, gesture, and emotion: neural time course and correlates. J Neurosci 32: 4531–4539.
  77. 77. Straube T, Langohr B, Schmidt S, Mentzel HJ, Miltner WHR (2010) Increased amygdala activation to averted versus direct gaze in humans is independent of valence of facial expression. Neuroimage 49: 2680–2686.
  78. 78. Klucharev V, Sams M (2004) Interaction of gaze direction and facial expressions processing: ERP study. Neuroreport 15: 621–625.
  79. 79. Rigato S, Farroni T, Johnson MH (2010) The shared signal hypothesis and neural responses to expressions and gaze in infants and adults. Soc Cogn Affect Neurosci 5: 88–97.
  80. 80. Ulloa JL, Puce A, Hugueville L, George N (2012) Sustained neural activity to gaze and emotion perception in dynamic social scenes. Soc Cogn Affect Neurosci. 1749–5024.
  81. 81. Rauch SL, Shin LM, Wright CI (2003) Neuroimaging studies of amygdala function in anxiety disorders. Ann N Y Acad Sci 985: 389–410.
  82. 82. Cisler JM, Koster EHW (2010) Mechanisms of attentional biases towards threat in the anxiety disorders: An integrative review. Clin Psychol Rev 30: 203–216.
  83. 83. Bar-Haim Y, Lamy D, Glickman S (2005) Attentional bias in anxiety: A behavioral and ERP study. Brain Cogn 59: 11–22.
  84. 84. Eldar S, Yankelevitch R, Lamy D, Bar-Haim Y (2010) Enhanced neural reactivity and selective attention to threat in anxiety. Biol Psychol 85: 252–257.
  85. 85. Felmingham KL, Bryant RA, Gordon E (2003) Processing angry and neutral faces in post-traumatic stress disorder: an event-related potentials study. Neuroreport 14: 777–780.
  86. 86. Frenkel TI, Bar-Haim Y (2011) Neural activation during the processing of ambiguous fearful facial expressions: An ERP study in anxious and nonanxious individuals. Biol Psychol 88: 188–195.
  87. 87. Holmes A, Nielsen MK, Green S (2008) Effects of anxiety on the processing of fearful and happy faces: an event-related potential study. Biol Psychol 77: 159–173.
  88. 88. Kolassa IT, Kolassa S, Bergmann S, Lauche R, Dilger S, et al. (2009) Interpretive bias in social phobia: An ERP study with morphed emotional schematic faces. Cogn Emot 23: 69–95.
  89. 89. Kolassa IT, Kolassa S, Musial F, Miltner WHR (2007) Event-related potentials to schematic faces in social phobia. Cogn Emot 21: 1721–1744.
  90. 90. Li W, Zinbarg RE, Boehm SG, Paller KA (2008) Neural and behavioral evidence for affective priming from unconsciously perceived emotional facial expressions and the influence of trait anxiety. J Cogn Neurosci 20: 95–107.
  91. 91. Mueller EM, Hofmann SG, Santesso DL, Meuret AE, Bitran S, et al. (2009) Electrophysiological evidence of attentional biases in social anxiety disorder. Psychol Med 39: 1141–1152.
  92. 92. Mühlberger A, Wieser MJ, Herrmann MJ, Weyers P, Tröger C, et al. (2009) Early cortical processing of natural and artificial emotional faces differs between lower and higher socially anxious persons. J Neural Transm 116: 735–746.
  93. 93. Rossignol M, Philippot P, Bissot C, Rigoulot S, Campanella S (2012) Electrophysiological correlates of enhanced perceptual processes and attentional capture by emotional faces in social anxiety. Brain Res 1460: 50–62.
  94. 94. Walentowska W, Wronka E (2011) Trait anxiety and involuntary processing of facial emotions. Int J Psychophysiol 85: 27–36.
  95. 95. Eger E, Jedynak A, Iwaki T, Skrandies W (2003) Rapid extraction of emotional expression: evidence from evoked potential fields during brief presentation of face stimuli. Neuropsychologia 41: 808–817.
  96. 96. Lewis S, Thoma RJ, Lanoue MD, Miller GA, Heller W, et al. (2003) Visual processing of facial affect. Neuroreport 14: 1841–1845.
  97. 97. Pegna AJ, Landis T, Khateb A (2008) Electrophysiological evidence for early non-conscious processing of fearful facial expressions. Int J Psychophysiol 70: 127–136.
  98. 98. Sprengelmeyer R, Jentzsch I (2006) Event related potentials and the perception of intensity in facial expressions. Neuropsychologia 44: 2899–2906.
  99. 99. Vlamings PHJM, Goffaux V, Kemner C (2009) Is the early modulation of brain activity by fearful facial expressions primarily mediated by coarse low spatial frequency information? J Vis 9: 12.1–13.
  100. 100. Schupp HT, Ohman A, Junghöfer M, Weike AI, Stockburger J, et al. (2004) The facilitated processing of threatening faces: an ERP analysis. Emotion 4: 189–200.
  101. 101. Schupp HT, Stockburger J, Codispoti M, Junghöfer M, Weike AI, et al. (2006) Stimulus novelty and emotion perception: the near absence of habituation in the visual cortex. Neuroreport 17: 365–369.
  102. 102. Schupp HT, Flaisch T, Stockburger J, Junghöfer M (2006) Emotion and attention: event-related brain potential studies. Prog Brain Res 156: 31–51.
  103. 103. Amaral DG, Behniea H, Kelly JL (2003) Topographic organization of projections from the amygdala to the visual cortex in the macaque monkey. Neuroscience 118: 1099–1120.
  104. 104. Barbeau EJ, Taylor MJ, Regis J, Marquis P, Chauvel P, et al. (2008) Spatio temporal dynamics of face recognition. Cereb Cortex 18: 997–1009.
  105. 105. Catani M, Jones DK, Donato R (2003) Occipito-temporal connections in the human brain. Brain 126: 2093–2107.
  106. 106. LeDoux J (1996) The emotional brain: The mysterious underpinnings of emotional life. New York: Simon & Schuster.
  107. 107. Pessoa L, Adolphs R (2010) Emotion processing and the amygdala: from a’low road’to’many roads’ of evaluating biological significance. Nat Rev Neurosci 11: 773–783.
  108. 108. Tikhonov AN, Arsenin VY (1977) Solution of III-posed Problems. VH Winston & Sons, Washington, DC.
  109. 109. Hämäläinen MS, Ilmoniemi RJ (1994) Interpreting magnetic fields of the brain: minimum norm estimates. Med Biol Eng Comput 32: 35–42.
  110. 110. Lin F-H, Witzel T, Ahlfors SP, Stufflebeam SM, Belliveau JW, et al. (2006) Assessing and improving the spatial accuracy in MEG source localization by depth-weighted minimum-norm estimates. Neuroimage 31: 160–171.
  111. 111. Ramírez RR, Wipf D, Baillet S (2010) Neuroelectromagnetic source imaging of brain dynamics. In: Chaovalitwongse W, Pardalos PM, Xanthopoulos P, editors. Computational Neuroscience. Springer Optimization and Its Applications. Springer New York. 127–155.
  112. 112. Attal Y, Schwartz D (2013) Assessment of subcortical source localization using deep brain activity imaging model with minimum norm operators: a MEG study. PLoS ONE 8: e59856.
  113. 113. Hillebrand A, Singh KD, Holliday IE, Furlong PL, Barnes GR (2005) A new approach to neuroimaging with magnetoencephalography. Hum Brain Mapp 25: 199–211.
  114. 114. Tesche CD, Karhu J, Tissari SO (1996) Non-invasive detection of neuronal population activity in human hippocampus. Cogn Brain Res 4: 39–47.
  115. 115. Tesche CD, Karhu J (2000) Theta oscillations index human hippocampal activation during a working memory task. Proc Natl Acad Sci U S A 97: 919–924.
  116. 116. Tesche CD (1997) Non-invasive detection of ongoing neuronal population activity in normal human hippocampus. Brain Res 749: 53–60.
  117. 117. Attal Y, Maess B, Friederici A (2012) Head models and dynamic causal modeling of subcortical activity using magnetoencephalographic/electroencephalographic data. Rev Neurosci 23: 85–95.