How Do You Say ‘Hello’? Personality Impressions from Brief Novel Voices

  • Phil McAleer ,

    Philip.McAleer@glasgow.ac.uk

    Affiliation School of Psychology, College of Science and Engineering, University of Glasgow, Glasgow, United Kingdom

  • Alexander Todorov,

    Affiliation Department of Psychology, Princeton University, Princeton, New Jersey, United States of America

  • Pascal Belin

    Affiliations School of Psychology, College of Science and Engineering, University of Glasgow, Glasgow, United Kingdom, Voice Neurocognition Laboratory, Institute of Neuroscience and Psychology, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow, United Kingdom, Département de Psychologie, Université de Montréal, Montréal, Quebec, Canada, Institut des Neurosciences de La Timone, Université Aix-Marseille, Marseille, France

Abstract

On hearing a novel voice, listeners readily form personality impressions of that speaker. Accurate or not, these impressions are known to affect subsequent interactions; yet the underlying psychological and acoustical bases remain poorly understood. Furthermore, studies to date have focused on extended speech, rather than on the instantaneous impressions formed from a first exposure. In this paper, through a mass online rating experiment, 320 participants rated 64 sub-second vocal utterances of the word ‘hello’ on one of 10 personality traits. We show that: (1) personality judgements of brief utterances from unfamiliar speakers are consistent across listeners; (2) a two-dimensional ‘social voice space’, with axes mapping Valence (Trust, Likeability) and Dominance, each driven by differing combinations of vocal acoustics, adequately summarises ratings of both male and female voices; and (3) a positive combination of Valence and Dominance results in increased perceived male vocal Attractiveness, whereas perceived female vocal Attractiveness is largely driven by increasing Valence. Results are discussed in relation to the rapid evaluation of personality and, in turn, of the intent of others, as being driven by survival mechanisms via approach or avoidance behaviours. These findings provide empirical bases for predicting personality impressions from acoustical analyses of short utterances and for generating desired personality impressions in artificial voices.

Introduction

Voices are saturated with cues to a person's age, gender, and affective state [1], with information being extractable whether listening to sentences [2] or to sub-second vocal bursts [3], [4]. Within voice perception, a focus on personality has endured: from Cicero's apparent pondering of competent speakers in De Oratore; through the golden age of radio and its exploration of status [5]; to modern research examining various personality traits, including attractiveness and dominance [6]–[12].

Judgements of personality influence our social interactions. For example, perceived facial attractiveness affects numerous decisions that we make (for review see [13]), including mate choices, job selection and voting behavior [12], [14], [15]. Likewise, research has shown that perceived vocal personality influences mate selection, leader election, and consumer choices [16]–[19]. Such judgements from faces are formed after less than 100 ms of exposure [20], [21] and are consistent across observers [22], [23]. Furthermore, given that many judgements are based on static images or short interactions, these decisions are largely made without much knowledge of the person in question – often termed ‘zero acquaintance’ [23]–[27]. Yet, despite their equal relevance to our daily lives, the rapid attribution of personality traits to novel speakers is poorly understood. As such, the key traits for deriving first impressions of people from short vocalizations, and the vocal acoustics governing these traits, remain to be established.

Across various domains, it has been shown that numerous personality traits can be reduced to summary dimensions, in turn allowing for the estimation of other traits [28]–[30]. Fiske, Cuddy and Glick [31] revealed that judgements of social groups were summarised by a two-dimensional space comprising warmth and competence. Likewise, Oosterhof and Todorov [32] showed that personality impressions from faces were summarized by valence and dominance; Sutherland and colleagues [33] validated this model for faces, whilst also proposing a third dimension of attractiveness-youth. In voices, from scrambled mock-jury deliberations, female judgements of male speakers were summarised by ratings of friendliness and dominance [10], whilst Zuckerman and colleagues [12], using recordings of people reading passages of text, found the three key dimensions explaining personality traits to be dominance, likeability and achievement. Furthermore, Montepare and Zebrowitz-McArthur [29] found comparable results when exploring personality attribution to people reciting the alphabet. Thus one proposed account is that a two-dimensional space can typically summarise all other traits, with one dimension emphasising warmth/trust/likeability and a second emphasising strength/dominance.

Such a solution is clearly influenced by the traits examined. For example, perhaps as a compromise given the numerous possible personality traits [34], and thus forgoing a summary space, many studies of face and voice perception have used traits from the Big Five Personality Model [35], [36]. As with studies exploring traits such as trust, intelligence and attractiveness, studies using the Big Five have again shown high consistency between viewers' ratings, as well as accuracy when compared to self-reports (e.g. [10], [27], [34], [37]–[41]). Taken together, it is evident that humans make rapid judgements on connected traits to help guide their interactions [32], [34], [42].

Yet the purpose of evaluative ‘spaces’ extends beyond personality judgements, with a putative role in establishing the intent of others and, in turn, in triggering our own approach/avoidance behaviours [32], [43]. This proposition rests on a series of hypotheses based on overgeneralisations of age, attraction, emotion and familiarity [23], [43]–[45]. Secord [46] proposed that, via a temporal extraction of momentary characteristics (such as a smile, or a deep voice), we label people with an enduring attribute, such as friendliness or strength. These generalisations allow for rapid – though not necessarily accurate – judgements of personality in an enriched world and, in turn, for appropriate approach/avoidance action to be taken. Thus, a judgement on the warmth dimension would evaluate a novel person as friend or foe, whilst a judgement on the dominance dimension would evaluate that person's ability to act on their intent. Such a generalization from a snapshot image to an enduring attribute appears to hold true for first impressions from faces [23], [32], [47]–[49], and, indirectly, in voices, using extended speech [6], [11], [12], [29].

However, previous vocal studies differ from those in other modalities in terms of the quantity, quality and relevance of the presented signal. Thus far, studies of personality traits of novel speakers have used long ‘irrelevant’ passages of speech (>10 s duration) [12], [29] (but see [10]), introducing influence from uncontrolled parameters of speech prosody. Studies that do use brief and socially relevant stimuli focus solely on the attractiveness of the speaker, neglecting other potentially important traits [50], [51]. In contrast, face perception research emphasises a ‘first impression’ scenario via rapid presentation of static faces (<100 ms duration). It is therefore pertinent to establish whether a two-dimensional space holds true for short, socially relevant vocal signals from novel speakers, akin to a ‘first impression’. From there, it would be possible to establish the acoustical properties underlying such judgements and perceived personalities. By extrapolation, if a brief vocal signal (under 1 s) is akin to a static face [1], then, given reported similarities between voice processing [1], [52] and face processing [53], one may propose that a two-dimensional space would also explain first-impression judgments of personality from voices.

This paper investigates the personality traits conveyed by novel speakers, via a single word, in an ambiguous scenario. We tested whether personality ratings, for both male and female voices, would be consistent across listeners and, if so, whether they would be adequately summarized by a two-dimensional ‘social voice space’, similar to previous findings in face perception. Furthermore, given the limited understanding of the acoustics underlying such spaces, eight acoustical measures summarising voice production were tested for relationships with any resultant summary spaces.

Methods

Ethics statement

All procedures (recording and experimental) were approved by the University of Glasgow ethics committee, and the study was conducted in accordance with the ethical standards laid down in the 1964 Declaration of Helsinki.

As the experiment was carried out online, participants gave informed consent beforehand by first reading a series of statements regarding anonymity, freedom to withdraw, and secure storage of data, and then clicking an online button to confirm that they had read and agreed to these statements. Participants were not permitted to take part without providing consent. This procedure was approved by the ethics committee of the University of Glasgow.

Participants

64 speakers (all Scottish; 28.2±10.2 years; 32 male) from the University of Glasgow undergraduate population were selected for stimulus recording. All speakers reported normal hearing and were given a monetary reward or partial course credit. Only people born and raised in Scotland were selected, to control for any potential effect of speaker provenance.

320 new participants (117 male; 28.5±10.6 years) from the same pool as above took part in the main voice rating experiment. Again all participants were given a monetary reward or partial course credit for taking part.

Stimuli

All 64 speakers were individually digitally recorded (16 bit mono, 44100 Hz, WAV format) reading an unfamiliar passage of text in a soundproof booth. Speakers were instructed to read the passage, which involved a telephone conversation with direct speech, in a neutral tone. The word ‘hello’ was extracted from each recording and normalised for power (RMS) and loudness in Matlab (The MathWorks). Stimuli had an average duration of 391 ms ± 65.1 ms and 390 ms ± 64.1 ms for male and female voices, respectively. ‘Hello’ was selected because it is a familiar, social word with a medium-to-high range of common usage (British National Corpus), and its position and punctuation allowed for clean extraction. Cultural equivalents of ‘hello’ have previously been used to study ratings of attractiveness across cultures (‘hujambo’ – Swahili, [50]) and across temporal modulation (‘bonjour’ – French, [51]). Example stimuli can be heard at http://vnl.psy.gla.ac.uk/socialvoices.php
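
The RMS normalisation step can be illustrated with a minimal Python sketch (the original processing was carried out in Matlab); the file names and target level here are illustrative assumptions rather than values from the study.

```python
import numpy as np
import soundfile as sf  # assumes the soundfile package is available for WAV I/O


def rms_normalise(wav_path, out_path, target_rms=0.05):
    """Scale a mono recording so its root-mean-square power matches target_rms."""
    signal, sample_rate = sf.read(wav_path)       # 16-bit WAV read as floats in [-1, 1]
    current_rms = np.sqrt(np.mean(signal ** 2))
    scaled = signal * (target_rms / current_rms)  # linear gain to reach the target power
    scaled = np.clip(scaled, -1.0, 1.0)           # guard against clipping after scaling
    sf.write(out_path, scaled, sample_rate)


# hypothetical usage on one extracted 'hello' token
rms_normalise("hello_speaker01.wav", "hello_speaker01_norm.wav")
```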

Procedure

The experiment took place online. Participants were recruited via email and directed to a web address. Though no control was established over the listening environment, participants were instructed to carry out the experiment in a quiet room using either headphones or speakers attached to the computer. Furthermore, recent research shows that data from online experiments are comparable to data from lab-based experiments [54], [55].

Each participant was pseudo-randomly assigned to one of ten rating scales taken from previous literature examining social traits in face, voice and person perception [12], [28], [29], [31], [32]: Aggressiveness, Attractiveness, Competence, Confidence, Dominance, Femininity, Likeability, Masculinity, Trustworthiness and Warmth. Each participant rated only one trait, as opposed to numerous traits (e.g., [12], [56]), to remove the influence of any halo effects (e.g. rating a speaker high on warmth would in turn make it difficult to rate that voice low on likeability).

For each stimulus, participants were asked, “Based on the voice, please rate how {TRAIT} is this person”, on a 9-point Likert scale ranging from 1 (extremely un{TRAIT}) to 9 (extremely {TRAIT}). No contextual grounding or scenario for the experiment was given: participants were not informed that the ‘hello’ stimuli they would hear came from longer extracts. After the experiment, participants confirmed that they did not recognise any of the voices. Stimuli were blocked by speaker gender and counterbalanced across subjects. Within gender, each voice was heard twice across two consecutive blocks with no break between them; each voice was heard once per block, with presentation order randomised within each block. An untimed break was given before the change in gender. The uncompressed sounds were played through a Flash (www.adobe.com) object interface running on all common open-source web browsers.

Data analysis

Exclusion criteria, stipulated prior to commencing the study, compensated for the lack of information on subject behaviour during the experiment: 1) in each subject, two-thirds of the ratings given to the repetitions of each stimulus should fall within two rating points of each other (i.e. a voice rated 5 on first hearing should later be rated between 3 and 7); 2) no subject should give the same rating to more than 75% of all voices (e.g. all voices rated 5). For the ratings of Masculinity and Femininity, criterion 2 was relaxed to 50%. Using these criteria, the data of 10 subjects (3.1%) were excluded.
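
The two pre-specified criteria amount to a simple check over each participant's paired ratings. A minimal sketch is given below, assuming the ratings are held as two equal-length lists (one per block); the data structure and function name are illustrative, not part of the original analysis code.

```python
from collections import Counter


def passes_exclusion_criteria(first_block, second_block, repeat_tolerance=2,
                              same_rating_cutoff=0.75):
    """Return True if a participant's ratings satisfy both pre-specified criteria.

    first_block, second_block: ratings (1-9) for the same voices in the two blocks.
    Criterion 1: at least two-thirds of repeated ratings lie within `repeat_tolerance` points.
    Criterion 2: no single rating is given to more than `same_rating_cutoff` of the responses
                 (relaxed to 0.5 for the Masculinity and Femininity scales in the study).
    """
    n_voices = len(first_block)
    within_tolerance = sum(abs(a - b) <= repeat_tolerance
                           for a, b in zip(first_block, second_block))
    criterion_1 = within_tolerance >= (2 / 3) * n_voices

    all_ratings = list(first_block) + list(second_block)
    most_common_count = Counter(all_ratings).most_common(1)[0][1]
    criterion_2 = most_common_count <= same_rating_cutoff * len(all_ratings)

    return criterion_1 and criterion_2
```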

Data collection occurred over a period of approximately one month. The number of participants per rating scale varied due to: 1) subject removal; and 2) a technical constraint of the online programme whereby two subjects commencing at the same time would be assigned to the same trait. Inter-rater reliability is summarised in Table 1: all Cronbach's alphas were > 0.88, and inter-rater agreement was considered high for each personality trait assessed.

Table 1. Cronbach alpha scores, indicating reliability of judgments, and number of participants per trait judgment.

https://doi.org/10.1371/journal.pone.0090779.t001
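
For reference, Cronbach's alpha for one rating scale can be computed from a voices-by-raters matrix with the standard formula; the sketch below is a generic implementation, not the study's own analysis script.

```python
import numpy as np


def cronbach_alpha(ratings):
    """Cronbach's alpha for a 2-D array with voices as rows and raters as columns."""
    ratings = np.asarray(ratings, dtype=float)
    n_raters = ratings.shape[1]
    rater_variances = ratings.var(axis=0, ddof=1)     # variance of each rater's scores
    total_variance = ratings.sum(axis=1).var(ddof=1)  # variance of the per-voice sums
    return (n_raters / (n_raters - 1)) * (1 - rater_variances.sum() / total_variance)
```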

Principal Component Analysis (PCA) was used to convert all traits to orthogonal dimensions. Entered into the PCA were the z-transformed mean ratings for all voices on each scale. Preliminary analysis indicated gender clustering, consistent with biological differences between male and female voices (e.g. higher average pitch in female voices) [57]. Thus, separate gender-driven PCAs were carried out, excluding Masculinity and Femininity, and only the gender-driven PCAs are reported; the relationships of Masculinity and Femininity to the main principal components (PCs) were explored via post-hoc correlational analyses. In addition, analyses comparing personality ratings across male and female raters listening to male and female voices are available in the Supplementary Information (File S1).
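
The PCA step (z-scored mean ratings per voice, with traits as variables) can be sketched with scikit-learn as below; the data layout, random example values and component count are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

# Hypothetical layout: one row per voice (e.g. the 32 voices of one gender),
# one column per trait, each cell the mean rating of that voice on that trait.
mean_ratings = pd.DataFrame(
    np.random.default_rng(0).uniform(1, 9, size=(32, 8)),
    columns=["Aggressiveness", "Attractiveness", "Competence", "Confidence",
             "Dominance", "Likeability", "Trustworthiness", "Warmth"],
)

# z-transform each trait across voices, then extract the principal components.
z_ratings = (mean_ratings - mean_ratings.mean()) / mean_ratings.std(ddof=1)
pca = PCA(n_components=3)
scores = pca.fit_transform(z_ratings)    # per-voice coordinates on PC1-PC3
weights = pca.components_.T              # trait weights on each component

print(pca.explained_variance_ratio_)     # proportion of variance per component
```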

Acoustical measures

Acoustical measures were extracted from the 64 voice stimuli using PRAAT software (V4.2.07; default settings unless stipulated; http://www.praat.org) [58]. Eight measures, reflecting differing parts of voice production and perception [59], [60], were selected in order to constrain multiple comparisons, each computed across the duration of each sound: 1) mean fundamental frequency/pitch (f0) (range: min 75 Hz; max 600 Hz); 2) changing f0 (maxf0 minus minf0) as an index of intonation [61]; 3) glide, measured as f0-end minus f0-start; 4) formant dispersion, representing filtration of the sound by the vocal tract and related to vocal tract size (measured as the ratio between consecutive formant means, from F1 to F4 [62], using the Burg linear predictive coding algorithm installed in PRAAT [63]; maximum formant frequency set to 5.5 kHz; window length = 0.025 s); 5) harmonic-to-noise ratio (HNR), indicating roughness, via the forward cross-correlation method (mean value; time step = 0.01 s; min pitch = 75 Hz; periods per window = 4.5); 6) jitter, a measure of local f0 variation, via Relative Average Perturbation (RAP), measuring the average absolute difference between a period and the average of that period and its two neighbours (shortest period = 0.0001 s; longest period = 0.02 s; max. period factor = 1.3); 7) shimmer, a measure of amplitude variation, via the Amplitude Perturbation Quotient (APQ3), measuring the average absolute difference between a period's amplitude and the average amplitude of its neighbours, divided by the average amplitude (shortest period = 0.0001 s; longest period = 0.02 s; max. period factor = 1.3; max. amplitude factor = 1.6); 8) alpha ratio, a measure of the source spectral slope [64], computed as the ratio of mean energy in low (0–1 kHz) vs. high (1–5 kHz) frequencies from the long-term average spectrum [65]. All measurements were taken across the duration of each sound (average 390.5 ms) and thus represent global values; this includes the harmonicity measure, which represents an indication of signal-to-noise ratio as calculated within PRAAT. Such measures are similar to those previously used in studies comparing animal and human vocalisations [66]. Stepwise regression analysis (criteria: in, p ≤ .05; out, p > .1) was used to establish relationships between the acoustical measures and the PCs.
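
A few of these global measures (mean f0, intonation, glide, HNR) can be extracted programmatically. The sketch below assumes the praat-parselmouth Python wrapper around PRAAT and uses the same pitch floor and ceiling as above; it is an approximation of the settings listed, not the original extraction script.

```python
import parselmouth  # assumes the praat-parselmouth package is installed


def basic_voice_measures(wav_path, f0_floor=75, f0_ceiling=600):
    """Global pitch and harmonicity measures over one short utterance."""
    snd = parselmouth.Sound(wav_path)

    pitch = snd.to_pitch(pitch_floor=f0_floor, pitch_ceiling=f0_ceiling)
    f0 = pitch.selected_array['frequency']
    f0 = f0[f0 > 0]                         # keep voiced frames only

    # Harmonicity (cc): time step, minimum pitch, silence threshold, periods per window.
    harmonicity = snd.to_harmonicity_cc(0.01, f0_floor, 0.1, 4.5)
    hnr_frames = harmonicity.values
    mean_hnr = hnr_frames[hnr_frames != -200].mean()  # -200 dB marks undefined frames

    return {
        "mean_f0": f0.mean(),
        "intonation": f0.max() - f0.min(),  # changing f0 (max f0 minus min f0)
        "glide": f0[-1] - f0[0],            # approximate f0-end minus f0-start
        "mean_hnr": mean_hnr,
    }


# hypothetical usage
# print(basic_voice_measures("hello_speaker01.wav"))
```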

One caveat is that the acoustical measures selected may be considered imperfect estimates of values obtained under more standard sustained-vowel conditions. For each stimulus, the measures are based on mean estimates across the full duration of the word ‘hello’, and although the word is brief, the measures could potentially be affected by time-varying aspects of speech. That said, the same measures were found to give consistent results across sustained vowels and ‘hello’ samples when examining the neural correlates of norm-based coding of voice identity [65], and are therefore considered valid for inclusion in this study.
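
The stepwise criteria stated above (entry at p ≤ .05, removal at p > .1) are not a single call in most Python statistics packages, but can be approximated by a forward-selection/backward-elimination loop over ordinary least-squares fits. The sketch below uses statsmodels and is an approximation of those criteria, not the original analysis software.

```python
import pandas as pd
import statsmodels.api as sm


def stepwise_select(X, y, p_enter=0.05, p_remove=0.10):
    """Forward selection with backward elimination based on OLS p-values.

    X: DataFrame of candidate predictors (acoustical measures); y: component scores.
    Returns the list of selected predictor names.
    """
    selected = []
    while True:
        changed = False

        # Forward step: add the candidate with the smallest p-value below p_enter.
        remaining = [c for c in X.columns if c not in selected]
        entry_pvals = pd.Series(index=remaining, dtype=float)
        for candidate in remaining:
            model = sm.OLS(y, sm.add_constant(X[selected + [candidate]])).fit()
            entry_pvals[candidate] = model.pvalues[candidate]
        if not entry_pvals.empty and entry_pvals.min() <= p_enter:
            selected.append(entry_pvals.idxmin())
            changed = True

        # Backward step: drop any included predictor whose p-value exceeds p_remove.
        if selected:
            model = sm.OLS(y, sm.add_constant(X[selected])).fit()
            included_pvals = model.pvalues.drop("const")
            if included_pvals.max() > p_remove:
                selected.remove(included_pvals.idxmax())
                changed = True

        if not changed:
            return selected
```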

Results

Male voices PCA

A two-dimensional solution was found to fit ratings for the male voices (without the Femininity and Masculinity ratings), explaining 88% of the variance (56.2% by the first principal component (PC1); 31.8% by PC2; 6.9% by PC3) (see Table 2). All traits, except Aggressiveness, loaded positively with varying strength onto PC1 (see Figure 1a). For PC2, Aggressiveness, Attractiveness, Competence, Confidence and Dominance had positive loadings, whereas Likeability, Trustworthiness and Warmth judgements had negative loadings.

Figure 1. Principal Component Analysis solutions and main correlates of the Social Voice Space.

A) The two-dimensional solution of the Principal Component Analysis for male (left) and female (right) voices (black dots). Labels equate to: Agg – Aggressiveness; Att – Attractiveness; Com – Competence; Conf – Confidence; Dom – Dominance; Lik – Likeability; Tru – Trustworthiness; War – Warmth. B) Correlation plots between the ratings of trustworthiness (Tru – top row) and dominance (Dom – bottom row) and the first (PC1) and second (PC2) principal components for male (left) and female (right) voices. Blue ‘+’ symbols represent individual voices. Trustworthiness was chosen arbitrarily over Likeability due to the strong correlation between these two traits.

https://doi.org/10.1371/journal.pone.0090779.g001

Table 2. Loadings on the first two principal components of all social traits for the male and female voice PCAs, including variance explained.

https://doi.org/10.1371/journal.pone.0090779.t002

To establish summary labels for the principal components, repeated PCAs were performed, systematically removing individual scales as candidate summaries and correlating the new PCs with the removed personality scales. An original scale is proposed as a suitable summary if it correlates strongly with one PC and weakly with the other. PC1 of all ratings excluding Trustworthiness correlated highly with Trustworthiness ratings (rs = .92, p<.001; Trustworthiness to PC2, rs = −.19, n.s.). Likewise, PC1 of all ratings excluding Likeability correlated highly with Likeability ratings (rs = .95, p<.001; Likeability to PC2, rs = −.3, n.s.). In turn, ratings of Trustworthiness and Likeability were strongly correlated (rs = .93, p<.001). Excluding Dominance, PC2 correlated strongly with Dominance ratings (rs = .94, p<.001; Dominance to PC1, rs = .06, n.s.) (Fig. 1b). A three-dimensional solution to this PCA, and an analysis based on gender of rater, are given in the Supplementary Information (File S1; see Table S1 for the 3D PCA, and Table S2, Table S3 & Table S4 for the analysis by rater gender).
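
This summary check, removing one scale, recomputing the PCA, and correlating the removed scale's ratings with the new components, can be sketched as follows (Spearman correlations are used to match the rs notation; the data layout follows the PCA sketch above, and the sign of a component is arbitrary).

```python
import pandas as pd
from scipy.stats import spearmanr
from sklearn.decomposition import PCA


def scale_vs_components(z_ratings: pd.DataFrame, scale: str, n_components: int = 2):
    """Correlate a held-out rating scale with PCs computed from the remaining scales."""
    held_out = z_ratings[scale]
    reduced = z_ratings.drop(columns=[scale])
    scores = PCA(n_components=n_components).fit_transform(reduced)

    correlations = {}
    for i in range(n_components):
        rho, _ = spearmanr(held_out, scores[:, i])
        correlations[f"PC{i + 1}"] = rho   # PC sign is arbitrary, so signs may flip
    return correlations


# hypothetical usage with the z-scored ratings from the earlier sketch
# print(scale_vs_components(z_ratings, "Trustworthiness"))
```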

Exploring Masculinity and Femininity ratings of male voices, the all-traits PC1 was positively correlated with Femininity (rs = .63, p<.001) and negatively with Masculinity (rs = −.46, p<.05); PC2 was positively correlated with Masculinity (rs = .50, p<.001) and negatively with Femininity (rs = −.4, p<.05).

Female voices PCA

Following the same criteria, a two-dimensional solution was found to explain 88.1% of the variance (PC1: 59.54%; PC2: 28.53%; PC3: 5.2%). All loadings on PC1 were positive except Aggressiveness. On PC2, Aggressiveness, Competence, Confidence and Dominance all loaded positively (Table 2). PC1 excluding Trustworthiness was highly correlated with Trustworthiness ratings (rs = .93, p<.001; Trustworthiness to PC2, rs = −.05, n.s.). Excluding Likeability, PC1 was highly correlated with Likeability ratings (rs = .92, p<.001; Likeability to PC2, rs = −.04, n.s.). Again, ratings of Trustworthiness and Likeability were highly correlated with one another (rs = .85, p<.001). PC2, excluding Dominance, was highly correlated with Dominance ratings (rs = .84, p<.001; Dominance to PC1, rs = .51, p<.05). Despite having a moderate correlation with PC1, Dominance was selected as the appropriate summary for female PC2, as the next candidate trait, Aggressiveness, had a similar relationship to PC1 but a weaker relationship to PC2 (Aggressiveness to PC1, rs = .47, p<.05; Aggressiveness to PC2, rs = .78; Aggressiveness to Dominance, rs = .46, p<.05). A three-dimensional solution to this PCA, and an analysis based on gender of rater, are given in the Supplementary Information (File S1; see Table S1 for the 3D PCA, and Table S2, Table S3 & Table S4 for the analysis by rater gender).

Incorporating Masculinity and Femininity ratings of female voices, a relationship was found only for PC1: as PC1 (Trustworthiness) increased, perceived Femininity increased (rs = .7, p<.001) and perceived Masculinity decreased (rs = −.7, p<.001).

Acoustical measures

Separately for each gender, stepwise regression analyses were performed using the eight acoustical measures to explain variance in the first two principal components. For PC1 in male voices, a linear combination of f0 (b = 0.48, p<.05) and HNR (b = −0.57, p<.001) explained 49% of the variance (R = .7, F(2,29) = 14.05, p<.001); in female voices, HNR (b = −0.44, p<.01), glide (b = −0.58, p<.001) and intonation (b = 0.6, p<.001) explained 68% of the variance in PC1 values (R = .82, F(3,28) = 20.12, p<.001). Regarding PC2, in male voices, a combination of alpha ratio (b = −0.25, p = .06), f0 (b = −.037, p<.05), HNR (b = −0.41, p<.05) and formant dispersion (b = −0.29, p<.05) explained 68% of the variance (R = .82, F(4,27) = 14.2, p<.001); for female voices, formant dispersion (b = −.43, p<.05) and f0 (b = .34, p<.05) explained 27% of the variance (R = .52, F(2,29) = 5.56, p<.05).

Secondary analysis of attractiveness

Across gender, the original PCA solutions appeared similar on subjective inspection, differing largely in the weighting of Attractiveness. Within gender of speaker, for male voices, perceived Attractiveness was significantly more correlated with PC2 (dominance) (rs = .72, p<.001; PC1: rs = .29, n.s.; tDifference = 8.29, p<.05). In contrast, for female voices, perceived Attractiveness was significantly more correlated with PC1 (valence) (rs = .74, p<.001; PC2: rs = −.45, p<.05; tDifference = 6.35, p<.01). Across gender of speaker, perceived female vocal attractiveness was significantly more correlated with PC1 than was male vocal attractiveness (tDifference = 2.79, p<.05). Finally, male vocal attractiveness was significantly more correlated with PC2 than was female vocal attractiveness (tDifference = 10.18, p<.01).

Given that attractiveness can also be viewed as a product of other personality traits, and is highly prevalent in the literature (e.g. [6]–[9], [12], [50], [56], [67], [68]), we explored the ability to predict Attractiveness ratings from the ‘social voice space’, separately for male and female voices. In separate PCA analyses with Attractiveness removed, personality ratings for both male and female voices were summarised by a two-dimensional space explaining 90% of the variance. For male voices, Likeability, Trustworthiness and Warmth were all strongly correlated with PC1 (all r>0.9, p<.001), and Dominance correlated strongly with PC2 (rs = .98, p<.001). For female voices, Likeability, Trustworthiness, Warmth and Competence all correlated strongly with PC1 (all r>0.9, p<.001); Aggressiveness (rs = .84, p<.001) and Dominance (rs = .77, p<.001) correlated well with PC2.

Stepwise regression analysis showed that a linear combination of PC1 (b = 0.4, p<.01) and PC2 (b = 0.7, p<.01) explained 54% of the variance in male Attractiveness ratings (R = .75, F(2,29) = 19.2, p<.001). Both principal components had a positive influence, suggesting that as PC1 (Trustworthiness, Likeability) and PC2 (Dominance) increase, so does perceived male vocal Attractiveness, with PC2 (Dominance) having a marginally stronger influence than PC1 (Trustworthiness). Finally, for female voices, a similar analysis showed that a linear combination of PC1 (b = 0.76, p<.01) and PC2 (b = −0.29, p<.01) explained 66% of the variance in female Attractiveness ratings (R = .81, F(2,29) = 27.65, p<.001). PC1 had a strong positive influence whilst PC2 had a weak negative influence, suggesting that perceived female vocal Attractiveness is largely driven by increasing PC1 (Trustworthiness, Likeability, Warmth).
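
As a worked illustration, the reported standardised coefficients imply straightforward linear predictions of attractiveness from the two components; the snippet below simply restates those regression equations (a restatement of the reported weights, not a re-analysis).

```python
def predicted_attractiveness(pc1, pc2, gender):
    """Standardised attractiveness predicted from the reported regression weights."""
    if gender == "male":
        return 0.4 * pc1 + 0.7 * pc2    # valence and dominance both increase attractiveness
    if gender == "female":
        return 0.76 * pc1 - 0.29 * pc2  # valence dominates; dominance slightly decreases it
    raise ValueError("gender must be 'male' or 'female'")


# e.g. a male voice one SD above the mean on both components:
# predicted_attractiveness(1.0, 1.0, "male") -> 1.1 (in standardised units)
```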

Discussion

The results showed that, from brief utterances containing limited information, akin to a first impression, listeners were highly consistent in their ratings of perceived personality. Furthermore, a two-dimensional ‘social voice space’, with a first dimension (PC1) corresponding to perceived likeability and trustworthiness, aligning with ‘valence’ [32], and an orthogonal dimension (PC2) corresponding to perceived dominance, summarized all perceived personality traits for both genders. Despite limited control of the listening environment, the results align with findings that observers form consistent and reliable impressions from brief exposure to faces [21], [69], [70] and from extracts of extended speech [12], [56]. Moreover, agreement on a number of perceived traits, such as warmth, has been shown across cultures for faces [70] and voices [24]. Similarities across personality spaces in voice [10]–[12], [29] and face perception [32] support the suggestion that the processing of faces and voices, at both the perceptual and neural level, operates via equivalent comparisons of the information available to each modality [1], [52], [53], [71].

The ‘social voice spaces’ observed are not only consistent across voice gender, with the exception of attractiveness judgements, but are also in agreement with dimensional solutions obtained in studies exploring: semantic relationships in words [28]; scrambled voice percepts and extended extracts [10], [12], [29]; face perception [32]; and intergroup relationships [31]. These dimensional spaces map strongly onto each other once interchangeable names, such as valence and social goodness, or dominance and strength, are collapsed. Each dimensional solution contains an element of positivity or trust, and an element of ability or competence to act. The current use of short, socially relevant vocal bursts highlights the validity of these dimensions in establishing first impressions from voices.

Across gender, only the PCA weighting of attractiveness appeared to vary markedly. Male vocal attractiveness correlated most strongly with dominance, whilst female vocal attractiveness was most associated with valence. When attractiveness was explored as a product of the other traits, as opposed to an individual trait, components of dominance and valence explained more than half of the variance in male vocal attractiveness, with dominance having the stronger influence. In contrast, for female voices, components of valence and dominance/aggression explained around two-thirds of the variance, with the valence component having the strongest effect. These results were largely consistent when exploring the relationship by gender of rater. Previous research has suggested similar results in face [72] and voice perception [68], [73], with findings pointing to increased attractiveness as masculinity/strength increases in males and as friendliness/warmth increases in females.

This study indicates that estimates of attractiveness can occur rapidly, from a brief signal, and that the bases of these estimates are consistent with relationships observed when hearing longer speech extracts. However, it is worth noting that, despite the prevalence of studies of vocal attractiveness, attractiveness was not one of the two key traits in the PCA, and thus its role is potentially minimal when establishing a first impression of a novel speaker. A three-dimensional PCA solution of the current data suggested that attractiveness may be related to PC3, though the explained variance was small and the relationship was not significant, in turn supporting a two-dimensional solution. However, attractiveness has been indicated as a third dimension in a validation study of the Oosterhof and Todorov face personality model [32] using 1000 faces [33]. Thus the role of attractiveness should not be marginalised without further study.

Parsing out the true relationship between trustworthiness, dominance and attractiveness, and how we use the available signal to make such judgments, may be possible via modern methods of stimulus morphing and averaging [32]. For example, it is known that averaging both faces and voices can increase attractiveness [7], [74], [75], largely due to smoothing of the respective signal. In turn, increased attractiveness can increase trustworthiness, though the two are not necessarily directly related [76], [77]. Additionally, at the neural level, it has been shown that we make judgements of identity and attractiveness based on stored prototypes [65], [74], [78]–[80]. For voices, this prototype is explained by at least two of the acoustical variables that partially determine trustworthiness, dominance, and attractiveness, namely f0 and formant dispersion [65]. Therefore, it is possible that personality perception also relies on comparison to a prototype at least similar to, if not the same as, the one used to establish identity. Furthermore, given the consistency of personality ratings across participants, such a prototype would not necessarily be specific to an individual, but may share common properties within a culture.

Turning to the underlying acoustical information, intonation, glide and HNR explained valence in female voices, while pitch and HNR explained valence in male voices. For females, a more positive perceived valence appears associated with a greater rise in pitch between the first and second vowel of the word ‘hello’ (rising intonation); a more negative valence is associated with a falling intonation. The relationship between intonation and valence aligns with a connection reported between facial features and valence, e.g. facial expression [23], [32]: both vocal intonation and facial expression are malleable features of their respective modalities, and these transient, adjustable features may drive percepts of valence. For males, a higher average pitch relates to increased valence: this would bring the pitch closer to that of females, resulting in increased perceived friendliness due to stereotyping [81]. The association with HNR in both genders may relate to changes with age: decreasing HNR has been proposed as a marker of vocal aging, either chronological or physiological [82], though findings are inconclusive [83]. It is possible that older voices are perceived as more friendly/trustworthy than younger voices, though this would conflict with reports that younger voices are perceived as warmer, more honest and less dominant [6], [11], [29]. Discrepancies with previous studies may result from the use of longer speech samples introducing additional parameters known to influence trait impressions, e.g. speech rate [18], [73].

For perceived male vocal dominance, associations were found with decreasing average pitch and formant dispersion, along with decreasing alpha ratio and HNR; decreasing formant dispersion was also associated with female dominance, along with increased average pitch. Thus, male voices with a lower pitch across the sound duration were perceived as more dominant, whereas a higher average pitch was associated with increased dominance in female voices. Extensive research shows that listeners are adept at judging various physical characteristics of a speaker from their voice, such as age, height, weight and body shape, with varying degrees of accuracy [5], [73], [84]–[89]. Such an ability may have arisen via adaptation mechanisms relating to the projection of a desired status or culture, or to suitability for mate selection [84]–[86], [90]. The relationship found in male voices in the current study is in keeping with reports that pitch is often erroneously used to infer powerful characteristics such as height, strength and leadership [16], [91]. People assume that lower pitch equates to increased strength, particularly in males, due to misconceptions regarding the structure of the vocal system [91]. The pitch/dominance link may reflect this at a personality level. In reality, formant dispersion is a better gauge, as it relates more closely to vocal tract length [62], [84], [92]. Relationships between formant dispersion and dominance have previously been shown in human and non-human mammals [93], [94], and are reiterated in this study. Increased average pitch in females is normally associated with fecundity [50], not dominance, and the relationship found here should be treated with caution, as female dominance was the trait least well explained by the acoustic predictors in terms of variance. Overall, we suggest that such longitudinal changes in vocal acoustics (e.g. dispersion, HNR) mirror impressions of dominance and physical strength in faces, signalled by ‘static’ aspects of faces (e.g. facial size, inter-ocular distance) [22], [95].

Overall, we form trait impressions as a means of establishing the intent of others and of selecting appropriate approach or avoidance behaviours. As seen both in the current paper for voices and in previous work on faces, these judgements occur rapidly, in keeping with an evolutionary pressure for their existence. A proposal for their origin, largely studied in face perception, revolves around the over-generalization hypotheses [43], [44], whereby we make judgements based on the extrapolation of momentary states to stable dimensions [32], [46]: i.e. a person who smiles (momentary state) is perceived as warm (stable dimension). Such relationships between emotion and personality in voices are as yet only subjective [6], [11], [29]. That said, utilising novel morphing techniques for vocal sounds [3], [4], [96] would make the link between vocal emotion and vocal personality a tangible and pertinent line of study.

A possible caveat to the present study is that PCA is directed by its input: an untested trait might have greater influence than the proposed dimensions. However, studies utilising free-response data have ultimately reduced judgements to semantically similar dimensions of Valence and Dominance [12], [28], [32]. Thus, in the current work, Valence and Dominance remain strong candidates as the foundations of rapid trait impressions of novel speakers in an ambiguous scenario.

Additionally, the accuracy of first impression judgements remains questionable. Accuracy is important because, if people's judgements of personality were continually wrong, any subsequent impression of intent based on this perceived personality would be misleading. Typically, accuracy is determined via convergence between self-ratings and ratings by acquaintances. Previous results have shown only moderate convergence at best, and only for a limited number of traits such as dominance and honesty [23], [44], [97]. One problem with trait attribution is the assumption of context-independent personality. People may accurately infer the momentary state of another, but the same inference may not hold when generalised across situations and time. Thus, in order to establish how accurate we are in determining the personality of others, a context-based measure of accuracy would be more appropriate [98].

Finally, the question of the consistency of voice personality over time and delivery should be addressed. In the current study we used a socially relevant, one-word sample of direct speech, read from a passage, whilst previous research has used either long passages or various excerpts of people speaking (scrambled or not), e.g. [5], [10], [12], [29], [56], [86], [99]–[103]. How these methodologies compare is an interesting question. Clearly, the longer the passage heard and the more natural the phrasing, the more variables relating to voice quality are introduced that may alter the perceived personality [18], [73], [103]. That said, using read excerpts of direct speech maintains content across speakers whilst allowing an element of conversation: research has shown that people engage in a naturalistic manner when reading direct speech, as opposed to indirect speech, and that listeners process it in a fashion similar to having a conversation [104], [105]. Thus, given the consistency of the current findings with previous studies, it could be hypothesised that our initial impressions of personality will persist, irrespective of the manner and duration of what we hear a person say. This would reflect the face literature, where personality judgements from brief exposures to static faces are consistent with those from longer exposures or from dynamic videos of faces [21], [106]. Taken together, these findings reiterate the importance of establishing a good first impression.

Conclusions

Listeners show high agreement when deriving first impressions of novel speakers. A two-dimensional ‘social voice space’, constructed via ratings of Valence and Dominance, allows for the extrapolation of all other traits, regardless of gender. Acoustical analysis reveals that Valence is related to pitch variation, whereas Dominance is related to more stable parameters. Furthermore, first impressions of vocal attractiveness in male voices relate to perceived strength, whilst in female voices attractiveness relates to perceived warmth and trustworthiness.

This study provides an empirical basis for the assessment of personality from voice. By establishing the acoustics that drive certain percepts, people and algorithms may be instructed on the alterations necessary to obtain a desired projection: this has wide application in fields as diverse as business, computing, engineering and advertising. Focus must now turn to stability across longer utterances and differing contexts in order to fully capitalise on the relevance for modern voice-activated and voice-controlled systems, and to understand how we are influenced by the signals we receive from others.
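
As one illustration of the kind of alteration implied, PRAAT's resynthesis tools can shift the acoustic parameters linked here to perceived dominance (mean f0 and formant dispersion). The sketch below assumes the praat-parselmouth wrapper and uses PRAAT's built-in "Change gender" resynthesis; the parameter values are illustrative, and this is not a procedure taken from the present study.

```python
import parselmouth
from parselmouth.praat import call  # assumes the praat-parselmouth package is installed


def lower_pitch_and_formants(wav_path, out_path, formant_shift=0.9, new_pitch_median=100.0):
    """Resynthesise a voice with lowered formants and pitch median via PRAAT."""
    snd = parselmouth.Sound(wav_path)
    # "Change gender" arguments: pitch floor, pitch ceiling, formant shift ratio,
    # new pitch median (Hz), pitch range factor, duration factor.
    shifted = call(snd, "Change gender", 75, 600, formant_shift, new_pitch_median, 1.0, 1.0)
    call(shifted, "Save as WAV file", out_path)


# hypothetical usage: make a 'hello' token lower-pitched with more closely spaced formants
# lower_pitch_and_formants("hello_speaker01.wav", "hello_speaker01_lowered.wav")
```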

Supporting Information

File S1.

Supplementary Information, Analysis and Interpretation of PCAs.

https://doi.org/10.1371/journal.pone.0090779.s001

(DOCX)

Table S1.

A three dimensional solution to the ‘social voice’ space.

https://doi.org/10.1371/journal.pone.0090779.s002

(DOCX)

Table S2.

Proportion of each gender per personality scale.

https://doi.org/10.1371/journal.pone.0090779.s003

(DOCX)

Table S3.

A three dimensional solution for female voices by rater gender.

https://doi.org/10.1371/journal.pone.0090779.s004

(DOCX)

Table S4.

A three dimensional solution for male voices by rater gender.

https://doi.org/10.1371/journal.pone.0090779.s005

(DOCX)

Acknowledgments

The authors are grateful to Marc Becirspahic for his guidance and assistance with the online programming.

Author Contributions

Conceived and designed the experiments: PM AT PB. Performed the experiments: PM. Analyzed the data: PM AT PB. Contributed reagents/materials/analysis tools: PM. Wrote the paper: PM AT PB.

References

  1. Belin P, Bestelmeyer PE, Latinus M, Watson R (2011) Understanding voice perception. Br J Psychol 102: 711–725.
  2. Perrachione TK, Del Tufo SN, Gabrieli JDE (2011) Human Voice Recognition Depends on Language Ability. Science 333: 595–596.
  3. Bestelmeyer PEG, Rouger J, DeBruine LM, Belin P (2010) Auditory adaptation in vocal affect perception. Cognition 117: 217–223.
  4. Latinus M, Belin P (2012) Perceptual Auditory Aftereffects on Voice Identity Using Brief Vowel Stimuli. Plos One 7: e41384.
  5. Allport GW, Cantril H (1934) Judging Personality from Voice. J Soc Psychol 5: 37–54.
  6. Berry DS (1990) Vocal Attractiveness and Vocal Babyishness - Effects on Stranger, Self, and Friend Impressions. J Nonverbal Behav 14: 141–153.
  7. Bruckert L, Bestelmeyer P, Latinus M, Rouger J, Charest I, et al. (2010) Vocal attractiveness increases by averaging. Curr Biol 20: 116–120.
  8. Feinberg DR, DeBruine LM, Jones BC, Little AC, O'Connor JJM, et al. (2012) Women's self-perceived health and attractiveness predict their male vocal masculinity preferences in different directions across short- and long-term relationship contexts. Behavioral Ecology and Sociobiology 66: 413–418.
  9. Hughes SM, Dispenza F, Gallup GG (2004) Ratings of voice attractiveness predict sexual behavior and body configuration. Evol Hum Behav 25: 295–304.
  10. Scherer KR (1972) Judging Personality from Voice - Cross-Cultural Approach to an Old Issue in Interpersonal Perception. J Pers 40: 191–210.
  11. Zebrowitz-McArthur LA, Montepare JM (1989) Contributions of a babyface and a childlike voice to impressions of moving and talking faces. J Nonverbal Behav 13: 189–203.
  12. Zuckerman M, Driver RE (1989) What Sounds Beautiful Is Good - the Vocal Attractiveness Stereotype. J Nonverbal Behav 13: 67–82.
  13. Little AC, Jones BC, DeBruine LM (2011) Facial attractiveness: evolutionary based research. Philos T R Soc B 366: 1638–1659.
  14. Langlois JH, Kalakanis L, Rubenstein AJ, Larson A, Hallam M, et al. (2000) Maxims or myths of beauty? A meta-analytic and theoretical review. Psychol Bull 126: 390–423.
  15. Luevano VX, Zebrowitz LA (2007) Do impressions of health, dominance, and warmth explain why masculine faces are preferred more in a short-term mate? Evolutionary Psychology 5: 15–27.
  16. Klofstad CA, Anderson RC, Peters S (2012) Sounds like a winner: voice pitch influences perception of leadership capacity in both men and women. P Roy Soc B-Biol Sci 279: 2698–2704.
  17. Nass C, Lee K (2001) Does computer-synthesised speech manifest personality? Experimental tests of recognition, similarity-attraction, and consistency-attraction. J Exp Psychol-Appl 7: 171–181.
  18. Scherer KR (1979) Voice and speech correlates of perceived social influence in simulated juries. In: St.Clair HGR, editor. The social psychology of language. London: Blackwell. pp. 88–120.
  19. Tigue CC, Borak DJ, O'Connor JJM, Schandl C, Feinberg DR (2012) Voice pitch influences voting behavior. Evol Hum Behav 33: 210–216.
  20. Todorov A, Pakrashi M, Oosterhof NN (2009) Evaluating Faces on Trustworthiness after Minimal Time Exposure. Social Cognition 27: 813–833.
  21. Willis J, Todorov A (2006) First impressions: making up your mind after a 100-ms exposure to a face. Psychological Sciences 17: 592–598.
  22. Todorov A, Said CP, Engell AD, Oosterhof NN (2008) Understanding evaluation of faces on social dimensions. Trends in Cognitive Sciences 12: 455–460.
  23. Zebrowitz LA, Montepare JM (2008) Social Psychological Face Perception: Why Appearance Matters. Soc Personal Psychol Compass 2: 1497.
  24. Passini FT, Norman WT (1966) A universal conception of personality structure? Journal of personality and social psychology 4: 44–49.
  25. Kenny DA, Horner C, Kashy DA, Chu LC (1992) Consensus at zero acquaintance: replication, behavioral cues, and stability. Journal of personality and social psychology 62: 88–97.
  26. Borkenau P, Liebler A (1993) Consensus and Self-Other Agreement for Trait Inferences from Minimal Information. J Pers 61: 477–496.
  27. Kramer RS, Ward R (2010) Internal facial features are signals of personality and health. Q J Exp Psychol 63: 2273–2287.
  28. Rosenberg S, Nelson C, Vivekananthan PS (1968) A Multidimensional Approach to Structure of Personality Impressions. Journal of Personality and Social Psychology 9: 283–294.
  29. Montepare JM, Zebrowitz-McArthur LA (1987) Perceptions of Adults with Child-Like Voices in 2 Cultures. J Exp Soc Psychol 23: 331–349.
  30. Wiggins JS (1979) Psychological Taxonomy of Trait-Descriptive Terms - Interpersonal Domain. Journal of Personality and Social Psychology 37: 395–412.
  31. Fiske ST, Cuddy AJ, Glick P (2007) Universal dimensions of social cognition: warmth and competence. Trends Cogn Sci 11: 77–83.
  32. Oosterhof NN, Todorov A (2008) The functional basis of face evaluation. Proc Natl Acad Sci U S A 105: 11087–11092.
  33. Sutherland CA, Oldmeadow JA, Santos IM, Towler J, Michael Burt D, et al. (2013) Social inferences from faces: ambient images generate a three-dimensional model. Cognition 127: 105–118.
  34. Penton-Voak IS, Pound N, Little AC, Perrett DI (2006) Personality judgments from natural and composite facial images: More evidence for a “kernel of truth” in social perception. Social Cognition 24: 607–640.
  35. Norman WT (1963) Toward an Adequate Taxonomy of Personality Attributes - Replicated Factor Structure in Peer Nomination Personality Ratings. J Abnorm Psychol 66: 574–583.
  36. Mccrae RR, Costa PT (1987) Validation of the 5-Factor Model of Personality across Instruments and Observers. Journal of Personality and Social Psychology 52: 81–90.
  37. Miyake K, Zuckerman M (1993) Beyond Personality Impressions - Effects of Physical and Vocal Attractiveness on False Consensus, Social-Comparison, Affiliation, and Assumed and Perceived Similarity. J Pers 61: 411–437.
  38. Zuckerman M, Miyake K, Elkin CS (1995) Effects of Attractiveness and Maturity of Face and Voice on Interpersonal Impressions. J Res Pers 29: 253–272.
  39. Berry DS, Brownlow S (1989) Were the Physiognomists Right - Personality-Correlates of Facial Babyishness. Pers Soc Psychol B 15: 266–279.
  40. Little AC, Perrett DI (2007) Using composite images to assess accuracy in personality attribution to faces. Br J Psychol 98: 111–126.
  41. Shevlin M, Walker S, Davies MNO, Banyard P, Lewis CA (2003) Can you judge a book by its cover? Evidence of self-stranger agreement on personality at zero acquaintance. Pers Indiv Differ 35: 1373–1383.
  42. Hassin R, Trope Y (2000) Facing faces: Studies on the cognitive aspects of physiognomy. Journal of Personality and Social Psychology 78: 837–852.
  43. McArthur LZ, Baron RM (1983) Toward an Ecological Theory of Social-Perception. Psychological Review 90: 215–238.
  44. Zebrowitz LA, Collins MA (1997) Accurate social perception at zero acquaintance: the affordances of a Gibsonian approach. Pers Soc Psychol Rev 1: 204–223.
  45. Zebrowitz LA (1996) Physical appearance as a basis for stereotyping. In: Macrae CN, Hewstone M, Stangor C, editors. Foundation of stereotypes and stereotyping. New York: Guilford Press.
  46. Secord PF (1958) Facial features and inference processes in interpersonal perception. In: Tagiuri R, Petrullo L, editors. Person Perception and Interpersonal Behaviour: Stanford University Press. pp. 300–315.
  47. Verosky SC, Todorov A (2010) Generalization of affective learning about faces to perceptually similar faces. Psychol Sci 21: 779–785.
  48. Said CP, Sebe N, Todorov A (2009) Structural resemblance to emotional expressions predicts evaluation of emotionally neutral faces. Emotion 9: 260–264.
  49. Zebrowitz LA, Kikuchi M, Fellous JM (2010) Facial resemblance to emotions: group differences, impression effects, and race stereotypes. Journal of Personality and Social Psychology 98: 175–189.
  50. Apicella CL, Feinberg DR (2009) Voice pitch alters mate-choice-relevant perception in hunter-gatherers. P Roy Soc B-Biol Sci 276: 1077–1082.
  51. Ferdenzi C, Patel S, Mehu-Blantar I, Khidasheli M, Sander D, et al. (2013) Voice attractiveness: Influence of stimulus duration and type. Behavior research methods 45: 405–413.
  52. Belin P, Fecteau S, Bedard C (2004) Thinking the voice: neural correlates of voice perception. Trends Cogn Sci 8: 129–135.
  53. Bruce V, Young A (1986) Understanding face recognition. Br J Psychol 77 (Pt 3): 305–327.
  54. Germine L, Nakayama K, Duchaine BC, Chabris CF, Chatterjee G, et al. (2012) Is the Web as good as the lab? Comparable performance from Web and lab in cognitive/perceptual experiments. Psychon B Rev 19: 847–857.
  55. Horton JJ, Rand DG, Zeckhauser RJ (2011) The online laboratory: conducting experiments in a real labor market. Exp Econ 14: 399–425.
  56. Zuckerman M, Hodgins H, Miyake K (1990) The Vocal Attractiveness Stereotype - Replication and Elaboration. J Nonverbal Behav 14: 97–112.
  57. Titze IR (1989) Physiologic and Acoustic Differences between Male and Female Voices. J Acoust Soc Am 85: 1699–1707.
  58. Boersma P, Weenink D (2001) Praat, a system for doing phonetics by computer. Glot International 5: 341–345.
  59. Baumann O, Belin P (2010) Perceptual scaling of voice identity: common dimensions for different vowels and speakers. Psychological Research 74: 110–120.
  60. Kreiman J, Gerratt BR, Kempster GB, Erman A, Berke GS (1993) Perceptual Evaluation of Voice Quality - Review, Tutorial, and a Framework for Future-Research. J Speech Hear Res 36: 21–40.
  61. Bruckert L, Lienard JS, Lacroix A, Kreutzer M, Leboucher G (2006) Women use voice parameters to assess men's characteristics. P Roy Soc B-Biol Sci 273: 83–89.
  62. Fitch WT (1997) Vocal tract length and formant frequency dispersion correlate with body size in rhesus macaques. J Acoust Soc Am 102: 1213–1222.
  63. Feinberg DR, Jones BC, Little AC, Burt DM, Perrett DI (2005) Manipulations of fundamental and formant frequencies influence the attractiveness of human male voices. Anim Behav 69: 561–568.
  64. Patel S, Scherer KR, Bjorkner E, Sundberg J (2011) Mapping emotions into acoustic space: the role of voice production. Biol Psychol 87: 93–98.
  65. Latinus M, McAleer P, Bestelmeyer PEG, Belin P (2013) Norm-Based Coding of Voice Identity in Human Auditory Cortex. Current Biology 23: 1075–1080.
  66. Lewis JW, Talkington WJ, Walker NA, Spirou GA, Jajosky A, et al. (2009) Human Cortical Organization for Processing Vocalizations Indicates Representation of Harmonic Structure as a Signal Attribute. J Neurosci 29: 2283–2296.
  67. Berry DS (1992) Vocal Attractiveness and vocal babyishness: Effects on stranger, self, and friend impressions. J Nonverbal Behav 14: 141–153.
  68. Apicella CL, Feinberg DR, Marlowe FW (2007) Voice pitch predicts reproductive success in male hunter-gatherers. Biology Lett 3: 682–684.
  69. Rhodes G, Lie HC, Thevaraja N, Taylor L, Iredell N, et al. (2011) Facial Attractiveness Ratings from Video-Clips and Static Images Tell the Same Story. Plos One 6: e26653.
  70. Zebrowitz LA, Wang RX, Bronstad PM, Eisenberg D, Undurraga E, et al. (2012) First Impressions From Faces Among U.S. and Culturally Isolated Tsimane' People in the Bolivian Rainforest. J Cross Cult Psychol 43: 119–134.
  71. Young AW, Bruce V (2011) Understanding person perception. Br J Psychol 102: 959–974.
  72. Pivonkova V, Rubesova A, Lindova J, Havlicek J (2011) Sexual Dimorphism and Personality Attributions of Male Faces. Arch Sex Behav 40: 1137–1143.
  73. Hughes SM, Rhodes G (2010) Making age assessments based on voice: The impact of the reproductive viability of the speaker. Journal of Social, Evolutionary, & Cultural Psychology 4: 290–304.
  74. Langlois JH, Roggman LA (1990) Attractive Faces Are Only Average. Psychological Science 1: 115–121.
  75. Jones BC, DeBruine LM, Little AC (2007) The role of symmetry in attraction to average faces. Percept Psychophys 69: 1273–1277.
  76. Little AC, Debruine LM, Jones BC (2013) Sex Differences in Attraction to Familiar and Unfamiliar Opposite-Sex Faces: Men Prefer Novelty and Women Prefer Familiarity. Arch Sex Behav.
  77. Little AC, Roberts SC, Jones BC, DeBruine LM (2012) The perception of attractiveness and trustworthiness in male faces affects hypothetical voting decisions differently in wartime and peacetime scenarios. Q J Exp Psychol 65: 2018–2032.
  78. Leopold DA, O'Toole AJ, Vetter T, Blanz V (2001) Prototype-referenced shape encoding revealed by high-level aftereffects. Nat Neurosci 4: 89–94.
  79. Latinus M, Belin P (2011) Anti-voice adaptation suggests prototype-based coding of voice identity. Front Psychol 2: 175.
  80. Bestelmeyer PE, Latinus M, Bruckert L, Rouger J, Crabbe F, et al. (2011) Implicitly Perceived Vocal Attractiveness Modulates Prefrontal Cortex Activity. Cereb Cortex 22: 1263–1270.
  81. Ohala JJ (1984) An Ethological Perspective on Common Cross-Language Utilization of F0 of Voice. Phonetica 41: 1–16.
  82. Schotz S (2007) Acoustic analysis of adult speaker age. Speaker Classification I: Springer Berlin Heidelberg. pp. 88–107.
  83. Ferrand CT (2002) Harmonics-to-noise ratio: An index of vocal aging. J Voice 16: 480–487.
  84. Evans S, Neave N, Wakelin D (2006) Relationships between vocal characteristics and body size and shape in human males: An evolutionary explanation for a deep male voice. Biological Psychology 72: 160–163.
  85. Hughes SM, Harrison MA, Gallup GG (2009) Sex-specific body configurations can be estimated from voice samples. Journal of Social, Evolutionary, & Cultural Psychology 3: 343–355.
  86. Krauss RM, Freyberg R, Morsella E (2002) Inferring speakers' physical attributes from their voices. J Exp Soc Psychol 38: 618–625.
  87. Lass NJ, Colt EG (1980) A Comparative-Study of the Effect of Visual and Auditory Cues on Speaker Height and Weight Identification. J Phonetics 8: 277–285.
  88. van Dommelen WA (1993) Speaker height and weight identification: A re-evaluation of some old data. J Phonetics 21: 337–341.
  89. van Dommelen WA, Moxness BH (1995) Acoustic parameters in speaker height and weight identification: sex-specific behaviour. Lang Speech 38 (Pt 3): 267–287.
  90. Collins SA (2000) Men's voices and women's choices. Anim Behav 60: 773–780.
  91. Rendall D, Vokey JR, Nemeth C (2007) Lifting the curtain on the Wizard of Oz: Biased voice-based impressions of speaker size. J Exp Psychol Human 33: 1208–1219.
  92. Fitch WT, Giedd J (1999) Morphology and development of the human vocal tract: a study using magnetic resonance imaging. The Journal of the Acoustical Society of America 106: 1511–1522.
  93. Puts DA, Hodges CR, Cardenas RA, Gaulin SJC (2007) Men's voices as dominance signals: vocal fundamental and formant frequencies influence dominance attributions among men. Evol Hum Behav 28: 340–344.
  94. Vannoni E, McElligott AG (2008) Low Frequency Groans Indicate Larger and More Dominant Fallow Deer (Dama dama) Males. Plos One 3: e3113.
  95. Grammer K, Thornhill R (1994) Human (Homo-Sapiens) Facial Attractiveness and Sexual Selection - the Role of Symmetry and Averageness. J Comp Psychol 108: 233–242.
  96. Kawahara H, Matsui H (2003) Auditory morphing based on an elastic perceptual distance metric in an interference-free time-frequency representation. 2003 IEEE International Conference on Acoustics, Speech, and Signal Processing, Vol I, Proceedings: 256–259.
  97. Olivola CY, Todorov A (2010) Fooled by first impressions? Reexamining the diagnostic value of appearance-based inferences. J Exp Soc Psychol 46: 315–324.
  98. Funder DC (2012) Accurate Personality Judgment. Curr Dir Psychol Sci 21: 177–182.
  99. Page RA, Balloun JL (1978) Effect of Voice Volume on Perception of Personality. J Soc Psychol 105: 65–72.
  100. Aronovitch CD (1976) The voice of personality: stereotyped judgments and their relation to voice quality and sex of speaker. The Journal of social psychology 99: 207–220.
  101. Lass NJ (1978) Correlational study of speakers' heights, weights, body surface areas, and speaking fundamental frequencies. The Journal of the Acoustical Society of America 63: 1218–1220.
  102. Ko SJ, Judd CM, Blair IV (2006) What the voice reveals: within- and between-category stereotyping on the basis of voice. Pers Soc Psychol Bull 32: 806–819.
  103. Berry DS (1991) Accuracy in social perception: contributions of facial and vocal information. Journal of Personality and Social Psychology 61: 298–307.
  104. Yao B, Scheepers C (2011) Contextual modulation of reading rate for direct versus indirect speech quotations. Cognition 121: 447–453.
  105. Yao B, Belin P, Scheepers C (2012) Brain 'talks over' boring quotes: top-down activation of voice-selective areas while listening to monotonous direct speech quotations. NeuroImage 60: 1832–1842.
  106. Rhodes G, Lie HC, Thevaraja N, Taylor L, Iredell N, et al. (2011) Facial attractiveness ratings from video-clips and static images tell the same story. Plos One 6: e26653.