Why do people spread false information online? The effects of message and viewer characteristics on self-reported likelihood of sharing social media disinformation

Abstract

Individuals who encounter false information on social media may actively spread it further, by sharing or otherwise engaging with it. Much of the spread of disinformation can thus be attributed to human action. Four studies (total N = 2,634) explored the effect of message attributes (authoritativeness of source, consensus indicators), viewer characteristics (digital literacy, personality, and demographic variables) and their interaction (consistency between message and recipient beliefs) on self-reported likelihood of spreading examples of disinformation. Participants also reported whether they had shared real-world disinformation in the past. Reported likelihood of sharing was not influenced by authoritativeness of the source of the material, nor indicators of how many other people had previously engaged with it. Participants’ level of digital literacy had little effect on their responses. The people reporting the greatest likelihood of sharing disinformation were those who thought it likely to be true, or who had pre-existing attitudes consistent with it. They were likely to have previous familiarity with the materials. Across the four studies, personality (lower Agreeableness and Conscientiousness, higher Extraversion and Neuroticism) and demographic variables (male gender, lower age and lower education) were weakly and inconsistently associated with self-reported likelihood of sharing. These findings have implications for strategies more or less likely to work in countering disinformation in social media.

Introduction

Disinformation is currently a critically important problem in social media and beyond. Typically defined as “the deliberate creation and sharing of false and/or manipulated information that is intended to deceive and mislead audiences, either for the purposes of causing harm, or for political, personal or financial gain”, political disinformation has been characterized as a significant threat to democracy [1, p.10]. It forms part of a wider landscape of information operations conducted by governments and other entities [2, 3]. Its intended effects include political influence, increasing group polarisation, reducing trust, and generally undermining civil society [4]. Effects are not limited to online processes. They regularly spill over into other parts of our lives. Experimental work has shown that exposure to disinformation can lead to attitude change [5], and there are many real-world examples of behaviours that have been directly attributed to disinformation, such as people attacking telecommunications masts in response to fake stories about ‘5G causing coronavirus’ [6, 7]. Social media disinformation is very widely used as a tool of influence: computational propaganda has been described as a pervasive and ubiquitous part of modern everyday life [8].

How does social media disinformation spread?

Once disinformation has initially been seeded online by its creators, one of the ways in which it spreads is through the actions of individual social media users. Ordinary people may propagate the material to their own social networks through deliberate sharing–a core function of platforms such as Facebook and Twitter. Other interactions with it, such as ‘liking’, also trigger the algorithms of social media platforms to display it to other users. This is a phenomenon known as ‘organic reach’ [9]. It can lead to false information spreading exponentially. As an example, analysis of the activity of the Russian ‘Internet Research Agency’ (IRA) disinformation group in the USA between 2015 and 2017 concluded that over 30 million users shared and otherwise interacted with the IRA’s Facebook and Instagram posts, propagating them to their families and friends [4]. There is evidence that false material is spread widely and rapidly through social media due to such human behaviour [10].

Why do people spread social media disinformation?

When individuals share or interact with disinformation they see online, they have essentially been persuaded to do so by its originators. Influential models of social information processing suggest there are different routes to persuasion [e.g. 11]. Under some circumstances, we may carefully consider the information available. At other times, we make rapid decisions based on heuristics and peripheral cues. Sharing information on social media is likely to be spontaneous and rapid, rather than a considered action that people spend time deliberating over. For example, there are indications that people use the interaction features of Facebook in a relatively unthinking and automatic manner [12]. In such situations, a peripheral route to persuasion is likely to be important [13]. Individuals’ choices to share, like and so on will thus be guided primarily by heuristics or contextual cues [14].

Three potentially important heuristics in this context are consistency, consensus and authority [15]. These are not the only heuristics that might possibly influence whether we share false material. However, in each case there is suggestive empirical evidence, and apparent real-world attempts to leverage these phenomena, that make them worth considering.

Consistency.

Consistency is the extent to which sharing would be consistent with past behaviours or beliefs of the individual. For example, in the USA people with a history of voting Republican might be more likely to endorse and disseminate right-wing messaging [16]. There is a large body of work based on the idea that people prefer to behave in ways consistent with their attitudes [17]. Research has indicated that social media users consider headlines consistent with their pre-existing beliefs as more credible, even when explicitly flagged as being false [18]. In the context of disinformation, this could make it desirable to target audiences sympathetic to the message content.

Consensus.

Consensus is the extent to which people think their behaviour would be consistent with that of most other people. In the current context, it is possible that seeing a message has already been shared widely might make people more likely to forward it on themselves. In marketing, this influence tactic is known as ‘social proof’ [19]. It is widely used in online commerce in attempts to persuade consumers to purchase goods or services (e.g. by displaying reviews or sales rankings). The feedback mechanisms of social networks can be manipulated to create an illusion of such social support, and this tactic seems to have been used in the aftermath of terror attacks in the UK [20].

Bot networks are used to spread low-credibility information on Twitter through automated means. Bots have been shown to be involved in the rapid spread of information, tweeting and retweeting messages many times [21]. Among humans who see the messages, the high retweet counts achieved through the bot networks might be interpreted as indicating that many other people agree with them. There is evidence which suggests that "each amount of sharing activity by likely bots tends to trigger a disproportionate amount of human engagement" [21, p.4]. Such bot activity could be an attempt to exploit the consensus effect.

It is relatively easy to manipulate the degree of consensus or social proof associated with an online post. Work by the NATO Strategic Communications Centre of Excellence [22] indicated that it was very easy to purchase high levels of false engagement for social media posts (e.g. sharing of posts by networks of fake accounts) and that there was a significant black market for social media manipulation. Thus, if boosting consensus effectively influences organic reach, then it could be a useful tool for both those seeding disinformation and those seeking to spread counter-messages.

Authority.

Authority is the extent to which the communication appears to come from a credible, trustworthy source [23]. Research participants have been found to report a greater likelihood of propagating a social media message if it came from a trustworthy source [24]. There is evidence of real-world attempts to exploit this effect. In 2018, Twitter identified fraudulent accounts that simulated those of US local newspapers [25], which may be trusted more than national media [26]. These may have been sleeper accounts established specifically for the purpose of building trust prior to later active use.

Factors influencing the spread of disinformation.

While there are likely to be a number of other variables that also influence the spread of disinformation, there are grounds for believing that consistency, consensus and authority may be important. Constructing or targeting disinformation messages in such a way as to maximise these three characteristics may be a way to increase their organic reach. There is real-world evidence of activity consistent with attempts to exploit them. If these effects do exist, they could also be exploited by initiatives to counter disinformation.

Who spreads social media disinformation?

Not all individuals who encounter untrue material online spread it further. In fact, the great majority do not. Research linking behavioural and survey data [16] found that less than 10% of participants shared articles from ‘fake news’ domains during the 2016 US presidential election campaign (though of course when extrapolated to the huge user base of social network platforms like Facebook, this is still a very large number of people).

The fact that only a minority of people actually propagate disinformation makes it important to consider what sets them apart from people who do not spread untrue material further. This will help to inform interventions aimed at countering disinformation. For example, those most likely to be misled by disinformation, or to spread it further, could be targeted with counter-messaging. It is known that the originators of disinformation have already targeted specific demographic groups, in the same way as political campaigns micro-target messaging at those audience segments deemed most likely to be persuadable [27]. For example, it is believed that the ‘Internet Research Agency’ sought to segment Facebook and Instagram users based on race, ethnicity and identity by targeting their messaging to people recorded by the platforms as having certain interests for marketing purposes [4]. They then directed communications tailored to those segments (e.g. trying to undermine African Americans’ faith in political processes and suppress their voting in the US presidential election).

Digital media literacy.

Research has found that older adults, especially those aged over 65, were by far the most likely to spread material originally published by ‘fake news’ domains [16]. A key hypothesis advanced to explain this is that older adults have lower levels of digital media literacy, and are thus less likely to be able to distinguish between true and false information online. While definitions may vary, digital media literacy can be thought of as including “… the ability to interact with textual, sound, image, video and social medias … finding, manipulating and using such information” [28, p. 11] and being a “multidimensional concept that comprised technical, cognitive, motoric, sociological, and emotional aspects” [29, p.834]. Digital media literacy is widely regarded as an important variable mediating the spread and impact of disinformation [e.g. 1]. It is argued that many people lack the sophistication to detect a message as being untruthful, particularly when it appears to come from an authoritative or trusted source. Furthermore, people higher in digital media literacy may be more likely to engage in elaborated, rather than heuristic-driven, processing (cf. work on phishing susceptibility [30]), and thus be less susceptible to biases such as consistency, consensus and authority.

Educating people in digital media literacy is the foundation of many anti-disinformation initiatives. Examples include the ‘News Hero’ Facebook game developed by the NATO Strategic Communications Centre of Excellence (https://www.stratcomcoe.org/news-hero), government initiatives in Croatia and France [8] or the work of numerous fact-checking organisations. The effectiveness of such initiatives relies on two assumptions being met. The first is that lower digital media literacy really does reduce our capacity to identify disinformation. There is currently limited empirical evidence on this point, complicated by the fact that definitions of ‘digital literacy’ are varied and contested, and there are currently no widely accepted measurement tools [28]. The second is that the people sharing disinformation are doing so unwittingly, having been tricked into spreading it. However, it is possible that at least some people know the material is untrue, and they spread it anyway. Survey research [31] has found that believing a story was false was not necessarily a barrier to sharing it. People may act like this because they are sympathetic to a story’s intentions or message, or they are explicitly signalling their social identity or allegiance to some political group or movement. If people are deliberately forwarding information that they know is untrue, then raising their digital media literacy would be ineffective as a stratagem to counter disinformation. This makes it important to simultaneously consider users’ beliefs about the veracity of disinformation stories, to inform the design of countermeasures.

Personality.

It is also known that personality influences how people use social media [e.g. 32]. This makes it possible that personality variables will also influence interactions with disinformation. Indeed, previous research [24] found that people low on Agreeableness reported themselves as more likely to propagate a message. This is an important possibility to consider, because it raises the prospect that individuals could be targeted on the basis of their personality traits with either disinformation or counter-messaging. In a social media context, personality-based targeting of communications is feasible because personality characteristics can be detected from individuals’ social media footprints [33, 34]. Large scale field experiments have shown that personality-targeted advertising on social media can influence user behaviour [35].

The question of which personality traits might be important is an open one. In the current study, personality was approached on an exploratory basis, with no specific hypotheses about effects or their directions. This is because there are a number of different and potentially rival effects that might operate. For example, higher levels of Conscientiousness may be associated with a greater likelihood of posting political material in social media [36] leading to a higher level of political disinformation being shared. However, people higher in Conscientiousness are likely to be more cautious [37] and pay more attention to details [38]. They might therefore also be more likely to check the veracity of the material they share, leading to a lower level of political disinformation being shared.

Research aims and hypotheses

The overall aim of this project was to establish whether contextual factors in the presentation of disinformation, or characteristics of the people seeing it, make it more likely that they extend its reach. The methodology adopted was scenario-based, with individuals being asked to rate their likelihood of sharing exemplar disinformation messages. A series of four studies was conducted, all using the same methodology. Multiple studies were used to establish whether the same effects were found across different social media platforms (Facebook in Study 1, Twitter in Study 2, Instagram in Study 3) and countries (Facebook with a UK sample in Study 1, Facebook with a US sample in Study 4). Data were also collected on whether participants had shared disinformation in the past. A number of distinct hypotheses were advanced:

H1: Individuals will report themselves as more likely to propagate messages from more authoritative compared to less authoritative sources.

H2: Individuals will report themselves as more likely to propagate messages showing a higher degree of consensus compared to those showing a lower degree of consensus.

H3: Individuals will report themselves as more likely to propagate messages consistent with their pre-existing beliefs compared to inconsistent messages.

H4: Individuals lower in digital literacy will report a higher likelihood of sharing false messages than individuals higher in digital literacy.

Other variables were included in the analysis on an exploratory basis with no specific hypotheses being advanced. In summary, this project asks why ordinary social media users share political disinformation messages they see online. It tests whether specific characteristics of messages or their recipients influence the likelihood of disinformation being further shared online. Understanding any such mechanisms will both increase our understanding of the phenomenon and inform the design of interventions seeking to reduce its impact.

Study 1

Study 1 tested hypotheses 1–4 with a UK sample, using stimuli relevant to the UK. The study was completed online. Participants were members of research panels sourced through the research company Qualtrics.

Method

Participants were asked to rate their likelihood of sharing three simulated Facebook posts. The study used an experimental design, manipulating levels of authoritativeness and consensus apparent in the stimuli. All manipulations were between, not within, participants. Consistency with pre-existing beliefs was not manipulated. Instead, the political orientation of the stimuli was held constant, and participants’ scores on conservative political orientation were used as an index of consistency between messages and participant beliefs. The effects of these variables on self-rated likelihood of sharing the stimuli, along with those of a number of other predictors, were assessed using multiple regression. The primary goal of the analysis was to identify variables that statistically significantly explained variance in the likelihood of sharing disinformation. The planned analysis was followed by supplementary and exploratory analyses. All analyses were conducted using SPSS v.25 for Mac. For all studies reported in this paper, ethical approval came from both the University of Westminster Research Ethics Committee (ETH1819-1420) and the Lancaster University Security Research Ethics Committee (BUCHANAN 2019 07 23). Consent was obtained, via an electronic form, from anonymous participants.

Materials.

A short questionnaire was used to capture demographic information (gender; country of residence; education; age; occupational status; political orientation expressed as right, left or centre; frequency of Facebook use). Individual differences in personality, political orientation, and digital / new media literacy were measured using established validated questionnaires. Ecologically valid stimuli were used, with their presentation being modified across conditions to vary authoritativeness and consensus markers.

Personality was measured using a 41-item Five-Factor personality questionnaire [38] derived from the International Personality Item Pool [37]. The measure provides indices of Extraversion, Neuroticism, Openness to Experience, Agreeableness and Conscientiousness that correlate well with the domains of Costa and McCrae's [39] Five Factor Model.

Conservatism was measured using the 12-item Social and Economic Conservatism Scale (SECS) [40], which is designed to measure political orientation along a left-right (liberal-conservative) continuum. It was developed and validated using a US sample. In pilot work for the current study, mean scores for individuals who reported voting for the Labour and Conservative parties in the 2017 UK general election were found to differ in the expected manner (t(28) = -2.277, p = .031, d = 0.834). This provides evidence of its appropriateness for use in UK samples. While the measure provides indices of different aspects of conservatism, it also provides an overall conservatism score, which was used in this study.
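For readers who wish to reproduce this kind of pilot validation, the sketch below shows how the comparison of Labour and Conservative voters’ SECS scores could be computed. It is a minimal illustration under assumed file and column names (pilot_secs.csv, secs_total, vote_2017), not the authors’ actual analysis script (the reported analyses were run in SPSS).

```python
# Illustrative pilot check: do Labour and Conservative voters differ on the
# SECS total score? File and column names are hypothetical.
import pandas as pd
from scipy import stats

def cohens_d(a, b):
    """Cohen's d using the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / pooled_var ** 0.5

pilot = pd.read_csv("pilot_secs.csv")                                  # hypothetical file
labour = pilot.loc[pilot["vote_2017"] == "Labour", "secs_total"].dropna()
conservative = pilot.loc[pilot["vote_2017"] == "Conservative", "secs_total"].dropna()

t, p = stats.ttest_ind(labour, conservative)                           # Student's t, equal variances assumed
print(f"t = {t:.3f}, p = {p:.3f}, d = {cohens_d(labour, conservative):.3f}")
```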

Digital media literacy was measured using the 35-item New Media Literacy Scale (NMLS) [29]. This is a theory-based self-report measure of competences in using, critically interrogating, and creating digital media technologies and messaging. In pilot work with a UK sample, it was found to distinguish between individuals high or low in social media (Twitter) use, providing evidence of validity (t(194) = -3.847, p < .001, d = .55). While the measure provides indices of different aspects of new media literacy, it also provides an overall score which was used in this study.

Participants were asked to rate their likelihood of sharing three genuine examples of ‘fake news’ that had been previously published online. An overall score for their likelihood of sharing the stimuli was obtained by summing the three ratings, creating a combined score. This was done, and a set of three stimuli was used, to reduce the likelihood that any effects found were peculiar to a specific story. The stimuli were sourced from the website Infowars.com (which in some cases had republished them from other sources). Infowars.com has been described [41] as a high-exposure site strongly associated with the distribution of ‘fake news’. Rather than full articles, excerpts (screenshots) were used that had the size and general appearance of what respondents might expect to see on social media sites. The excerpts were edited to remove any indicators of the source, metrics such as the numbers of shares, date, and author. All had a right-wing orientation (so that participant conservatism could be used as a proxy for consistency between the messages and existing beliefs). This was established in pilot work in which participants rated the stimuli’s political orientation and likelihood of being shared. The three stories were among seven rated by a UK sample (N = 30) on an 11-point scale asking “To what extent do you think this post was designed to appeal to people with right wing (politically conservative) views?” anchored at “Very left wing oriented” and “Very right wing oriented”. All seven were rated statistically significantly above the politically-neutral midpoint of the scale. A one-sample t-test showed that even the least right-wing of the three stimuli selected for this study was rated statistically significantly above the midpoint (t(39) = 4.385, p < .001, d = 0.70).

One of the stimuli was a picture of masked and hooded men titled “Censored video: watch Muslims attack men, women & children in England”. One was a picture of many people walking down a road, titled “Revealed: UN plan to flood America with 600 million migrants”, with accompanying text describing a plan to “flood America and Europe with hundreds of millions of migrants to maintain population levels”. The third was a picture of the Swedish flag titled “‘Child refugee’ with flagship Samsung phone and gold watch complains about Swedish benefits rules”, allegedly describing a 19 year-old refugee’s complaints.

The authoritativeness manipulation was achieved by pairing the stimuli with sources regarded as relatively high or low in authoritativeness. The source was shown above the stimulus being rated, in the same way as the avatar and username of someone who had posted a message would be on Facebook. The lower authoritativeness group comprised slight variants on real usernames of accounts that had previously retweeted either stories from Infowars.com or another story known to be untrue. The original avatars were used. The exemplars used in this study were named ‘Tigre’ (with an avatar of an indistinct picture of a female face), ‘jelly beans’ (a picture of some jelly beans) and ‘ChuckE’ (an indistinct picture of a male face). The higher authoritativeness group comprised actual fake accounts set up by the Internet Research Agency (IRA) group to resemble local news sources, selected from a list of suspended IRA accounts released by Twitter. The exemplars used in this study were ‘Los Angeles Daily’, ‘Chicago Daily News’ and ‘El Paso Top News’. Pilot work was conducted with a sample of UK participants (N = 30) who each rated a selection of 9 usernames, including these 6, for the extent to which each was “likely to be an authoritative source—that is, likely to be a credible and reliable source of information”. A within-subjects t-test indicated that mean authoritativeness ratings for the ‘higher’ group were statistically significantly higher than the ‘lower’ group (t(29) = -11.181, p < .001, dz = 2.04).

The consensus manipulation was achieved by pairing the stimuli with indicators of the number of shares and likes the story had. The indicators were shown below the stimulus being rated, in the same way as they normally would be on Facebook. In the low consensus conditions, low numbers of likes (1, 3, 2) and shares (2, 0, 2) were displayed. In the high consensus conditions, higher (but not unrealistic) numbers of likes (104K, 110K, 63K) and shares (65K, 78K, 95K) were displayed. The information was presented using the same graphical indicators as would be the case on Facebook, accompanied by the (inactive) icons for interacting with the post, in order to maximise ecological validity.

Procedure.

The study was conducted completely online, using materials hosted on the Qualtrics research platform. Participants initially saw an information page about the study, and on indicating their consent proceeded to the demographic items. They then completed the personality, conservatism and new media literacy scales. Each of these was presented on a separate page, except the NMLS which was split across three pages.

Participants were then asked to rate the three disinformation items. Participants were randomized to different combinations of source and story within their assigned condition. For example, Participant A might have seen Story 1 attributed to Source 1, Story 2 attributed to Source 2, and Story 3 attributed to Source 3; while Participant B saw Story 1 attributed to Source 2, Story 2 attributed to Source 1, and Story 3 attributed to Source 3. Each participant saw the same three stories paired with one combination of authoritativeness and consensus. There were 24 distinct sets of stimuli.
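The following sketch illustrates how the 24 distinct stimulus sets arise: two authoritativeness levels crossed with two consensus levels and the six possible pairings of the three sources with the three stories. It is an illustrative reconstruction, not the code used to build the Qualtrics survey; the story labels are placeholders.

```python
# Illustrative reconstruction of the 24 stimulus sets:
# 2 authoritativeness levels x 2 consensus levels x 6 source-story pairings.
from itertools import permutations

stories = ["story_1", "story_2", "story_3"]                # placeholder labels
sources = {
    "higher_authority": ["Los Angeles Daily", "Chicago Daily News", "El Paso Top News"],
    "lower_authority": ["Tigre", "jelly beans", "ChuckE"],
}
consensus_levels = ["high_consensus", "low_consensus"]

stimulus_sets = []
for authority, names in sources.items():
    for consensus in consensus_levels:
        for ordering in permutations(names):
            stimulus_sets.append((authority, consensus, list(zip(stories, ordering))))

print(len(stimulus_sets))                                  # 24 distinct sets
```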

Each participant saw an introductory paragraph stating “A friend of yours recently shared this on Facebook, commenting that they thought it was important and asking all their friends to share it:”. Below this was the combination of source, story, and consensus indicators, presented together in the same way as a genuine Facebook post would be. They then rated the likelihood of them sharing the post to their own public timeline, on an 11-point scale anchored at ‘Very Unlikely’ and ‘Very Likely’. This was repeated for the second and third stimuli, each on a separate page. Having rated each one, participants were then shown all three stimuli again, this time on the same page. They were asked to rate each one for “how likely do you think it is that the message is accurate and truthful” and “how likely do you think it is that you have seen it before today”, on 5-point scales anchored at ‘Not at all likely’ and ‘Very likely’.

After rating the stimuli, participants were asked two further questions: “Have you ever shared a political news story online that you later found out was made up?”, and “And have you ever shared a political news story online that you thought AT THE TIME was made up?”, with ‘yes’ or ‘no’ response options. This question format directly replicated that used in Pew Research Center surveys dealing with disinformation [e.g. 31].

Finally, participants were given the opportunity once again to give or withdraw their consent for participation. They then proceeded to a debriefing page. It was only at the debriefing stage that they were told the stories they had seen were untrue: no information about whether the stimuli were true or false had been presented prior to that point.

Data screening and processing.

Prior to delivery of the sample, Qualtrics performed a series of quality checks and ‘data scrubbing’ procedures to remove and replace participants with response patterns suggesting inauthentic or inattentive responding. These included speeding checks and examination of response patterns. On delivery of the initial sample (N = 688) further screening procedures were performed. Sixteen respondents were identified who had responded with the same scores to substantive sections of the questionnaire (‘straightlining’). These were removed, leaving N = 672. These checks and exclusions were carried out prior to any data analysis. Where participants had missing data on any variables, they were omitted only from analyses including those variables. Thus, Ns vary slightly throughout the analyses.
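As an illustration of the ‘straightlining’ screen described above, the sketch below flags respondents who gave an identical answer to every item in a substantive block. The data file and column prefix are assumptions for illustration; the description above is the only specification of the actual screening workflow.

```python
# Illustrative 'straightlining' screen: flag respondents who gave the same
# response to every item in a substantive block. File and columns are assumed.
import pandas as pd

df = pd.read_csv("study1_raw.csv")                         # hypothetical file
personality_items = [c for c in df.columns if c.startswith("ipip_")]

is_straightliner = df[personality_items].nunique(axis=1) == 1
cleaned = df.loc[~is_straightliner].copy()
print(f"Removed {int(is_straightliner.sum())} respondents; N = {len(cleaned)}")
```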

Participants.

The target sample size was planned to exceed N = 614, which would give 95% power to detect R2 = .04 (a benchmark for the minimum effect size likely to have real-world importance in social science research [42]), in the planned multiple regression analysis with 11 predictors. Qualtrics was contracted to provide a sample of Facebook users that was broadly representative of the UK 2011 census population in terms of gender; the split between those who had post-secondary-school education and those who had not; and age profile (18+). Quotas were used to assemble a sample comprising approximately one third each self-describing as left-wing, centre and right-wing in their political orientation. Participant demographics are shown in Table 1, column 1.
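The power calculation underlying the N = 614 target can be approximated as follows, using Cohen's f² and the noncentral F distribution for a fixed-effects multiple regression. This is a hedged reconstruction of the reported figure, not the authors' own calculation (which may have used a tool such as G*Power); with n and the number of predictors adjusted, the same function approximately reproduces the power values reported for the later studies.

```python
# Approximate power for detecting R^2 in a multiple regression with k
# predictors and N participants (reconstruction, not the original calculation).
from scipy import stats

def regression_power(r2, n, k, alpha=0.05):
    f2 = r2 / (1 - r2)                      # Cohen's f^2
    df_num, df_den = k, n - k - 1
    nc = f2 * (df_num + df_den + 1)         # noncentrality parameter
    f_crit = stats.f.ppf(1 - alpha, df_num, df_den)
    return 1 - stats.ncf.cdf(f_crit, df_num, df_den, nc)

print(regression_power(r2=0.04, n=614, k=11))   # approximately .95, as targeted for Study 1
```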

Results

Descriptive statistics for participant characteristics (personality, conservatism, new media literacy and age) and their reactions to the stimuli (likelihood of sharing, belief the stories were likely to be true, and rating of likelihood that they had seen them before) are summarised in Table 2. All scales had acceptable reliability. The main dependent variable, likelihood of sharing, had a very skewed distribution with a strong floor effect: 39.4% of the participants indicated they were ‘very unlikely’ to share any of the three stories they saw. This is consistent with findings on real-world sharing that indicate only a small proportion of social media users will actually share disinformation [e.g. 16], though it gives a dependent variable with less than ideal distributional properties.

Table 2. Descriptive statistics for participant characteristics and response variables, Study 1.

https://doi.org/10.1371/journal.pone.0239666.t002

To simultaneously test hypotheses 1–4 a multiple regression analysis was carried out. This evaluated the extent to which digital media literacy (NMLS), authority of the message source, consensus, belief in veracity of the messages, consistency with participant beliefs (operationalised as the total SECS conservatism scale score), age and personality (Extraversion, Conscientiousness, Agreeableness, Openness to Experience and Neuroticism) predicted self-rated likelihood of sharing the posts. This analysis is summarised in Table 3. Checks were performed on whether the dataset met the assumptions required by the analysis (absence of collinearity, independence of residuals, homoscedasticity, and normality of residuals). Despite the skewed distribution of the dependent variable, no significant issues were detected.
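A sketch of what this focal analysis looks like in code is given below: an ordinary least squares regression followed by the assumption checks described (collinearity via variance inflation factors, independence of residuals via the Durbin-Watson statistic, homoscedasticity via a Breusch-Pagan test, and normality of residuals via a Shapiro-Wilk test). The data file and column names are assumptions for illustration; the reported analyses were run in SPSS v.25.

```python
# Sketch of the focal regression and assumption checks, under assumed column
# names (not the authors' SPSS analysis).
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.stats.stattools import durbin_watson
from statsmodels.stats.diagnostic import het_breuschpagan
from scipy import stats

df = pd.read_csv("study1_clean.csv")                       # hypothetical file
predictors = ["nmls", "authority", "consensus", "belief_true", "secs_total",
              "age", "extraversion", "conscientiousness", "agreeableness",
              "openness", "neuroticism"]
data = df[predictors + ["share_likelihood"]].dropna()
X = sm.add_constant(data[predictors])
y = data["share_likelihood"]

model = sm.OLS(y, X).fit()
print(model.summary())

# Assumption checks: collinearity (VIF), independence of residuals
# (Durbin-Watson), homoscedasticity (Breusch-Pagan), normality (Shapiro-Wilk).
vifs = {p: variance_inflation_factor(X.values, i + 1) for i, p in enumerate(predictors)}
print(vifs)
print("Durbin-Watson:", durbin_watson(model.resid))
print("Breusch-Pagan:", het_breuschpagan(model.resid, X))
print("Shapiro-Wilk:", stats.shapiro(model.resid))
```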

Table 3. Predictors of rated likelihood of sharing stimuli, Study 1.

https://doi.org/10.1371/journal.pone.0239666.t003

However, exploratory analyses indicated that inclusion of other variables in the regression model might be warranted. It is well established that there are gender differences on a number of personality variables. Furthermore, in the current sample men and women differed in their level of conservatism (M = 669.10, SD = 150.68 and M = 636.50, SD = 138.31 respectively; t(666) = 2.914, p = .004), their self-rated likelihood of sharing (M = 10.41, SD = 8.33 and M = 7.60, SD = 6.38 respectively; t(589.60) = 4.928, p < .001; adjusted df used due to heterogeneity of variance, Levene’s F = 35.99, p < .001), and their belief that the stories were true (M = 7.16, SD = 3.22 and M = 6.52, SD = 3.12 respectively; t(668) = 2.574, p = .010). Education level was found to correlate positively with NMLS scores (r = .210, N = 651, p < .001). Level of Facebook use correlated significantly with age (r = -.126, N = 669, p = .001), education (r = .082, N = 671, p = .034), NMLS (r = .170, N = 652, p < .001), with likelihood of sharing (r = .079, N = 672, p = .040), and with likelihood of having seen the stimuli before (r = .107, N = 672, p = .006). Self-reported belief that respondents had seen the stories before also correlated significantly with likelihood of sharing (r = .420, N = 672, p < .001), and a number of other predictor variables.
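The gender comparison above uses an adjusted-df (Welch) t-test when Levene's test indicates heterogeneity of variance. A minimal sketch of that decision rule, under assumed column names and not reflecting the authors' actual script, is:

```python
# Sketch of the gender comparison with a Levene check determining whether the
# Welch (adjusted-df) t-test is needed; file and column names are illustrative.
import pandas as pd
from scipy import stats

df = pd.read_csv("study1_clean.csv")                       # hypothetical file
men = df.loc[df["gender"] == "male", "share_likelihood"].dropna()
women = df.loc[df["gender"] == "female", "share_likelihood"].dropna()

levene_f, levene_p = stats.levene(men, women, center="mean")      # mean-centred, as in SPSS
t, p = stats.ttest_ind(men, women, equal_var=(levene_p >= 0.05))  # Welch test if variances differ
print(f"Levene F = {levene_f:.2f} (p = {levene_p:.3f}); t = {t:.3f}, p = {p:.3f}")
```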

Accordingly, a further regression analysis was performed, including these additional predictors (gender, education, level of Facebook use, belief they had seen the stories before). Given inclusion of gender as a predictor variable, the two respondents who did not report their gender as either male or female were excluded from further analysis. The analysis, summarised in Table 4, indicated that the model explained 43% of the variance in self-reported likelihood of sharing the three disinformation items. Neither the authoritativeness of the story source, nor consensus information associated with the stories, was a significant predictor.

Table 4. Predictors of rated likelihood of sharing stimuli (extended predictor set), Study 1.

https://doi.org/10.1371/journal.pone.0239666.t004

Consistency of the items with participant attitudes (conservatism) was important, with a positive and statistically significant relationship between conservatism and likelihood of sharing. The only personality variable predicting sharing was Agreeableness, with less agreeable people giving higher ratings of likelihood of sharing. In terms of demographic characteristics, gender and education were statistically significant predictors, with men and less-educated people reporting a higher likelihood of sharing. Finally, people reported a greater likelihood of sharing the items if they believed they were likely to be true, and if they thought they had seen them before.

Participants had also been asked about their historical sharing of untrue political stories, both unknowing and deliberate. One hundred and two of the 672 participants (15.2%) indicated that they had ever ‘shared a political news story online that they later found out was made up’, while 64 of the 672 (9.5%) indicated they had shared one that they ‘thought AT THE TIME was made up’. Predictors of whether or not people had shared untrue material under both sets of circumstances were examined using logistic regressions, with the same sets of participant-level predictors.
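The structure of these logistic regressions is sketched below, assuming a binary 0/1 outcome column and illustrative predictor names; it is a reconstruction for readers' orientation, not the SPSS syntax used in the study.

```python
# Sketch of a logistic regression on self-reported past sharing, assuming a
# 0/1 outcome column 'shared_unknowingly' and illustrative predictor names.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("study1_clean.csv")                       # hypothetical file
predictors = ["nmls", "belief_true", "secs_total", "age", "male", "education",
              "facebook_use", "seen_before", "extraversion", "conscientiousness",
              "agreeableness", "openness", "neuroticism"]
data = df[predictors + ["shared_unknowingly"]].dropna()
X = sm.add_constant(data[predictors])
y = data["shared_unknowingly"]                             # 1 = yes, 0 = no

logit = sm.Logit(y, X).fit()
print(logit.summary())
print(np.exp(logit.params).round(2))                       # odds ratios per predictor
```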

Having unknowingly shared untrue material (Table 5) was significantly predicted by lower Conscientiousness, lower Agreeableness, and lower age. Having shared material known to be untrue at the time (Table 6) was significantly predicted by lower Agreeableness and lower age.

Table 5. Binary logistic regression: Predictors of whether participants had previously shared political stories they subsequently discovered were false, Study 1.

https://doi.org/10.1371/journal.pone.0239666.t005

Table 6. Binary logistic regression: Predictors of whether participants had previously shared political stories they knew at the time were false, Study 1.

https://doi.org/10.1371/journal.pone.0239666.t006

Discussion

The main analysis in this study (Table 4) provided limited support for the hypotheses. Contrary to hypotheses 1, 2, and 4, neither consensus markers, authoritativeness of source, nor new media literacy were associated with self-rated likelihood of sharing the disinformation stories. However, in line with hypothesis 3, higher levels of conservatism were associated with higher likelihood of sharing disinformation. This finding supports the proposition that we are more likely to share things that are consistent with our pre-existing beliefs, as all the stimuli were right-wing in orientation. An alternative explanation might be that more conservative people are simply more likely to share disinformation. However, as well as lacking a solid rationale, this explanation is not supported by the fact that conservatism did not seem to be associated with self-reported historical sharing (Tables 5 and 6).

The strongest predictors of likelihood of sharing were belief that the stories were true, and likelihood of having seen them before. Belief in the truth of the stories provides further evidence for the role of consistency (hypothesis 3), in that we are more likely to share things we believe are true. The association with likely previous exposure to the materials is consistent with other recent research [43, 44] that found that prior exposure to ‘fake news’ headlines led to higher belief in their accuracy and reduced belief that it would be unethical to share them.

Of the personality variables, only Agreeableness was a significant predictor, with less agreeable people rating themselves as more likely to share the stimuli. This is consistent with previous findings [24] that less agreeable people reported they were more likely to share a critical political message.

Lower education levels were associated with a higher self-reported likelihood of sharing. It is possible that less educated people may be more susceptible to online influence, given work finding that less educated people were more influenced by micro-targeted political advertising on Facebook [45].

Finally, gender was found to be an important variable, with men reporting a higher likelihood of sharing the disinformation messages than women. This was unanticipated: while a number of gender-related characteristics (e.g. personality traits) were thought to be potentially important, there were no a priori grounds to expect that gender itself would be a predictor variable.

Study 1 also examined predictors of reported historical sharing of false political information. Consistent with real-world data [16], and past representative surveys [e.g. 31], a minority of respondents reported such past sharing. Unknowingly sharing false political stories was predicted by low Conscientiousness, low Agreeableness, and lower age, while knowingly sharing false material was predicted only by lower Agreeableness and lower age. The effect of Agreeableness is consistent with the findings from the main analysis and from [24]. The finding that Conscientiousness influenced accidental, but not deliberate, sharing is consistent with the idea that less conscientious people are less likely to check the details or veracity of a story before sharing it. Clearly this tendency would not apply to deliberate sharing of falsehoods. The age effect is harder to explain, especially given evidence [16] that older people were more likely to share material from fake news sites. One possible explanation is that younger people are more active on social media, so would be more likely to share any kind of article. Another possibility is that they are more likely to engage in sharing humorous political memes, which could often be classed as false political stories.

Study 2

Study 2 set out to repeat Study 1, but presented the materials as if they had been posted on Twitter rather than Facebook. The purpose of this was to test whether the observed effects applied across different platforms. Research participants have reported using ‘likes’ on Twitter in a more considered manner than on Facebook [12], raising the possibility that heuristics might be less important for this platform. The study was completed online, using paid respondents sourced from the Prolific research panel (www.prolific.co).

Method

The methodology exactly replicated that of Study 1, except in the case of details noted below. The planned analysis was revised to include the expanded set of predictors eventually used in Study 1 (see Table 4).

Materials.

Measures and materials were the same as used in Study 1. The key difference from Study 1 was in the presentation of the three stimuli, which were portrayed as having been posted to Twitter rather than Facebook. For the authoritativeness manipulation, the screen names of the sources were accompanied by @usernames, as is conventional on Twitter. For the consensus manipulation, ‘retweets’ were displayed rather than ‘shares’, and the appropriate icons for Twitter were used. Participants also indicated their level of Twitter, rather than Facebook, use.

Procedure.

The procedure replicated Study 1, save that in this case the NMLS was presented on a single page. Before participants saw each of the three disinformation items, the introductory paragraph stated “A friend of yours recently shared this on Twitter, commenting that they thought it was important and asking all their friends to retweet it:”, and they were asked to indicate the likelihood of them ‘retweeting’ rather than ‘sharing’ the post.

Data screening and processing.

Data submissions were initially obtained from 709 participants. A series of checks were performed to ensure data quality, resulting in a number of responses being excluded. One individual declined consent. Eleven were judged to have responded inauthentically, with the same responses to all items in substantive sections of the questionnaire (‘straightlining’). Twenty were not active Twitter users: three individuals visited Twitter ‘not at all’ and seventeen ‘less often’ than every few weeks. Three participants responded unrealistically quickly, with response durations shorter than four minutes (the same value used as a speeding check by Qualtrics in Study 1). All of these respondents were removed, leaving N = 674. These checks and exclusions were carried out prior to any data analysis.

Participants.

The target sample size was planned to exceed N = 614, as in Study 1. No attempt was made to recruit a demographically representative sample: instead, sampling quotas were used to ensure the sample was not homogenous with respect to education (pre-degree vs. undergraduate degree or above), age (under 40 vs. over 40) and political preference (left, centre or right wing orientation). Additionally, participants had to be UK nationals resident in the UK; active Twitter users; and not participants in prior studies related to this one. Each participant received a reward of £1.25. Participant demographics are shown in Table 1 (column 2). For the focal analysis in this study, the sample size conferred 94.6% power to detect R2 = .04 in a multiple regression with 15 predictors (2-tailed, alpha = .05).

Results

Descriptive statistics are summarised in Table 7. All scales had acceptable reliability. The main dependent variable, likelihood of sharing, again had a very skewed distribution with a strong floor effect.

Table 7. Descriptive statistics for participant characteristics and response variables, Study 2.

https://doi.org/10.1371/journal.pone.0239666.t007

To simultaneously test hypotheses 1–4, a multiple regression analysis was carried out using the expanded predictor set from Study 1. Given inclusion of gender as a predictor variable, the three respondents who did not report their gender as either male or female were excluded from further analysis. The analysis, summarised in Table 8, indicated that the model explained 46% of the variance in self-reported likelihood of sharing the three disinformation items. Neither the authoritativeness of the story source, nor consensus information associated with the stories, nor new media literacy, was a significant predictor. Consistency of the items with participant attitudes (conservatism) was important, with a positive and statistically significant relationship between conservatism and likelihood of sharing. No personality variable predicted ratings of likelihood of sharing. In terms of demographic characteristics, gender and education were statistically significant predictors, with men and less-educated people reporting a higher likelihood of sharing. Finally, people reported a greater likelihood of sharing the items if they believed they were likely to be true, and if they thought they had seen them before.

Table 8. Predictors of rated likelihood of retweeting stimuli (extended predictor set), Study 2.

https://doi.org/10.1371/journal.pone.0239666.t008

Participants had also been asked about their historical sharing of untrue political stories, both unknowing and deliberate. One hundred and two of the 674 participants (15.1%) indicated that they had ever ‘shared a political news story online that they later found out was made up’, while 42 of the 674 (6.2%) indicated they had shared one that they ‘thought AT THE TIME was made up’. Predictors of whether or not people had shared untrue material under both sets of circumstances were examined using logistic regressions, with the same sets of participant-level predictors.

Having unknowingly shared untrue material (Table 9) was significantly predicted by higher Extraversion and higher levels of Twitter use. Having shared material known to be untrue at the time (Table 10) was significantly predicted by higher Neuroticism and being male.

Table 9. Binary logistic regression: Predictors of whether participants had previously shared political stories they subsequently discovered were false, Study 2.

https://doi.org/10.1371/journal.pone.0239666.t009

Table 10. Binary logistic regression: Predictors of whether participants had previously shared political stories they knew at the time were false, Study 2.

https://doi.org/10.1371/journal.pone.0239666.t010

Discussion

For the main analysis, Study 2 replicates a number of key findings from Study 1. In particular, hypotheses 1, 2 and 4 were again unsupported by the results: consensus, authoritativeness, and new media literacy were not associated with self-rated likelihood of retweeting the disinformation stories. Evidence consistent with hypothesis 3 was again found, with higher levels of conservatism being associated with higher likelihood of retweeting. Again, the strongest predictor of likelihood of sharing was belief that the stories were true, while likelihood of having seen them before was again statistically significant. The only difference was in the role of personality: there was no association between Agreeableness (or any other personality variable) and likelihood of retweeting the material.

However, for self-reports of historical sharing of false political stories, the pattern of results was different. None of the previous results were replicated, and new predictors were observed for both unknowing and deliberate sharing. For unintentional sharing, the link with higher levels of Twitter use makes sense, as higher usage confers more opportunities to accidentally share untruths. Higher Extraversion has also been found to correlate with higher levels of social media use [32], so the same logic may apply for that variable. For intentional sharing, the finding that men were more likely to share false political information is similar to findings from Study 1. The link with higher Neuroticism is less easy to explain: one possibility is that more neurotic people are more likely to share falsehoods that will reduce the chances of an event they worry about (for example, spreading untruths about a political candidate whose election they fear).

Given that these questions asked about past behaviour in general, and were not tied to the Twitter stimuli used in this study, it is not clear why the pattern of results should have differed from those in Study 1. One possibility is that the sample characteristics were different (this sample was younger, better educated, and drawn from a different source). Another realistic possibility, especially given the typically low effect sizes and large samples tested, is that these are simply ‘crud’ correlations [46] rather than useful findings. Going forward, it is likely to be more informative to focus on results that replicate across multiple studies or conceptually similar analyses.

Study 3

Study 3 set out to repeat Study 1, but presented the materials as if they had been posted on Instagram rather than Facebook. Instagram presents an interesting contrast, as the mechanisms of engagement with material are different (for example there is no native sharing mechanism). Nonetheless, it has been identified as an important theater for disinformation operations [47]. Study 3 therefore sought to establish whether the same factors affecting sharing on Facebook also affect engagement with false material on Instagram. The study was completed online, using paid respondents sourced from the Prolific research panel.

Method

The methodology exactly replicated that of Study 1, except in the case of details noted below. The planned analysis was revised to include the expanded set of predictors eventually used in Study 1 (see Table 4).

Materials.

Measures and materials were the same as used in Study 1. The only difference from Study 1 was in the presentation of the three stimuli, which were portrayed as having been posted to Instagram rather than Facebook. For the consensus manipulation, ‘likes’ were used as the sole consensus indicator, and the appropriate icons for Instagram were used.

Procedure.

The procedure replicated Study 1, save that in this case the NMLS was presented on a single page. Before participants saw each of the three disinformation items, the introductory paragraph stated “Imagine that you saw this post on your Instagram feed:” and they were asked to indicate the probability of them ‘liking’ the post.

Data screening and processing.

Data submissions were initially obtained from 692 participants. A series of checks were performed to ensure data quality, resulting in a number of responses being excluded. Four individuals declined consent. Twenty-one were judged to have responded inauthentically, with the same scores to substantive sections of the questionnaire (‘straightlining’). Five did not indicate they were located in the UK. Ten were not active Instagram users: three individuals visited Instagram ‘not at all’ and seven ‘less often’ than every few weeks. Two participants responded unrealistically quickly, with response durations shorter than four minutes (the same value used as a speeding check by Qualtrics in Study 1). All of these respondents were removed, leaving N = 650. These checks and exclusions were carried out prior to any data analysis.

Participants.

The target sample size was planned to exceed N = 614, as in Study 1. No attempt was made to recruit a demographically representative sample: instead, sampling quotas were used to ensure the sample was not homogenous with respect to education (pre-degree vs. undergraduate degree or above) and political preference (left, centre or right-wing orientation). Sampling was not stratified by age, given that Instagram use is associated with younger ages, and the number of older Instagram users in the Prolific pool was limited at the time the study was carried out. Additionally, participants had to be UK nationals resident in the UK; active Instagram users; and not participants in prior studies related to this one. Each participant received a reward of £1.25. Participant demographics are shown in Table 1 (column 3). For the focal analysis in this study, the sample size conferred 93.6% power to detect R2 = .04 in a multiple regression with 15 predictors (2-tailed, alpha = .05).

Results

Descriptive statistics are summarised in Table 11. All scales had acceptable reliability. The main dependent variable, probability of liking, again had a very skewed distribution with a strong floor effect.

Table 11. Descriptive statistics for participant characteristics and response variables, Study 3.

https://doi.org/10.1371/journal.pone.0239666.t011

To simultaneously test hypotheses 1–4, a multiple regression analysis was carried out using the expanded predictor set from Study 1. Given inclusion of gender as a predictor variable, the three respondents who did not report their gender as either male or female were excluded from further analysis. The analysis, summarised in Table 12, indicated that the model explained 24% of the variance in self-reported likelihood of sharing the three disinformation items. Neither the authoritativeness of the story source, consensus information associated with the stories, nor consistency of the items with participant attitudes (conservatism) was a statistically significant predictor. Extraversion positively, and Conscientiousness negatively, predicted ratings of likelihood of sharing. In terms of demographic characteristics, men and younger participants reported a higher likelihood of sharing. Finally, people reported a greater likelihood of sharing the items if they believed they were likely to be true, and if they thought they had seen them before.

Table 12. Predictors of rated likelihood of liking stimuli (extended predictor set), Study 3.

https://doi.org/10.1371/journal.pone.0239666.t012

Participants had also been asked about their historical sharing of untrue political stories, both unknowing and deliberate. Eighty-five of the 650 participants (13.1%) who answered the question indicated that they had ever ‘shared a political news story online that they later found out was made up’, while 50 of the 650 (7.7%) indicated they had shared one that they ‘thought AT THE TIME was made up’. Predictors of whether or not people had shared untrue material under both sets of circumstances were examined using logistic regressions, with the same sets of participant-level predictors.

Having unknowingly shared untrue material (Table 13) was significantly predicted by higher Extraversion, lower Conscientiousness and male gender. Having shared material known to be untrue at the time (Table 14) was significantly predicted by higher New Media Literacy, higher Conservatism, and higher Neuroticism.

Table 13. Binary logistic regression: Predictors of whether participants had previously shared political stories they subsequently discovered were false, Study 3.

https://doi.org/10.1371/journal.pone.0239666.t013

Table 14. Binary logistic regression: Predictors of whether participants had previously shared political stories they knew at the time were false, Study 3.

https://doi.org/10.1371/journal.pone.0239666.t014

Discussion

As in Studies 1 and 2, results were not consistent with hypotheses 1, 2 and 4: consensus, authoritativeness, and new media literacy were not associated with self-rated probability of liking the disinformation stories. In contrast to Studies 1 and 2, however, conservatism did not predict liking the stories. Belief that the stories were true was again the strongest predictor, while likelihood of having seen them before was again statistically significant. Among the personality variables, higher Extraversion and lower Conscientiousness predicted likely engagement with the stories, a pattern not observed in the main analyses of Studies 1 and 2. Lower age predicted likely engagement, a new finding, while being male predicted likely engagement as found in both Study 1 and Study 2. Unlike Study 1 and Study 2, education had no effect.

With regard to historical accidental sharing, as in Study 2 higher Extraversion was a predictor, while as in Study 1 so was lower Conscientiousness. Men were more likely to have shared accidentally. Deliberate historical sharing was predicted by higher levels of New Media Literacy. This is counter-intuitive and undermines the argument that people share things because they know no better. In fact, in the context of deliberate deception, motivated individuals higher in digital literacy may actually be better equipped to spread untruths. Conservatism was also a predictor here. This could again be a reflection of the consistency hypothesis, given that there are high levels of conservative-oriented disinformation circulating. Finally, as in Study 2, higher Neuroticism predicted deliberate historical sharing.

Study 4

Study 4 set out to repeat Study 1, but with a US sample and using US-centric materials. The purpose of this was to test whether the observed effects applied across different countries. The study was completed online, with participants drawn from research panels sourced through the research company Qualtrics.

Method

The methodology exactly replicated that of Study 1, except for the details noted below. The planned analysis was revised to include the expanded set of predictors eventually used in Study 1 (see Table 4).

Materials.

Measures and materials were the same as those used in Study 1. The only difference from Study 1 was in the content of the three disinformation exemplars, which were designed to be relevant to a US rather than UK audience. Two of the stimuli were sourced from the website Infowars.com, while the third was a story described as untrue by the fact-checking website Politifact.com. As in Study 1, the right-wing focus of the stories was established in pilot work in which a US sample (N = 40) saw seven stories, including these three, and rated their political orientation and likelihood of being shared. All were rated above the mid-point of an 11-point scale asking “To what extent do you think this post was designed to appeal to people with right wing (politically conservative) views?”, anchored at “Very left wing oriented” and “Very right wing oriented”. For the least right-wing of the three stories selected, a one-sample t-test comparing the mean rating with the midpoint of the scale showed it was statistically significantly higher, t(39) = 6.729, p < .001, d = 1.07. One of the stimuli, also used in Studies 1–3, was titled “Revealed: UN plan to flood America with 600 million migrants”. Another was titled “Flashback: Obama’s attack on internet freedom”, subtitled ‘Globalists, Deep State continually targeting America’s internet dominance’, featuring further sentiment against Obama, China and ‘Big Tech’, and an image of Barack Obama apparently drinking wine with a person of East Asian appearance. The third was text-based and featured material titled “Surgeon who exposed Clinton foundation corruption in Haiti found dead in apartment with stab wound to the chest”.
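
As a quick arithmetic check on the reported pilot statistics, the one-sample effect size can be recovered from t and N; the short sketch below assumes the standard relation d = t / sqrt(N).

```python
# Recovering the one-sample effect size from the reported test statistic,
# using the standard relation d = t / sqrt(N) for a one-sample t-test.
import numpy as np
from scipy import stats

N = 40
t_reported = 6.729
d_from_t = t_reported / np.sqrt(N)                   # ~1.06, consistent with the reported d = 1.07
p_two_tailed = 2 * stats.t.sf(t_reported, df=N - 1)  # well below .001, as reported
print(round(d_from_t, 2), p_two_tailed)
```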

The materials used to manipulate authoritativeness (Facebook usernames shown as the sources of the stories) were the same as those used in Studies 1–3. These were retained because pilot work indicated that the ‘higher’ and ‘lower’ sets differed in authoritativeness for US audiences in the same way as for UK audiences. A sample of 30 US participants again each rated a selection of 9 usernames, including these 6, for the extent to which each was “likely to be an authoritative source—that is, likely to be a credible and reliable source of information”. A within-subjects t-test indicated that mean authoritativeness ratings for the ‘higher’ group were statistically significantly higher than those for the ‘lower’ group (t(29) = -9.355, p < .001, dz = 1.70).
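
The within-subjects comparison can be sketched in the same way. The ratings below are simulated placeholders rather than the pilot data, and the note on dz assumes the usual paired-design convention dz = mean(difference) / sd(difference) = |t| / sqrt(n).

```python
# Sketch of the within-subjects authoritativeness check with simulated ratings
# (placeholders, not the pilot data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
lower = rng.normal(4.0, 1.5, size=30)            # mean rating given to the 'lower' usernames
higher = lower + rng.normal(1.8, 1.0, size=30)   # mean rating given to the 'higher' usernames

t, p = stats.ttest_rel(lower, higher)            # paired-samples t-test
diff = higher - lower
dz = diff.mean() / diff.std(ddof=1)              # Cohen's d_z for paired designs

# The reported effect size follows from the reported t: 9.355 / sqrt(30) ~ 1.71.
print(t, p, round(dz, 2), round(9.355 / np.sqrt(30), 2))
```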

Procedure.

The procedure replicated Study 1, save that in this case the NMLS was presented across two pages.

Data screening and processing.

Prior to delivery of the sample, Qualtrics performed a series of quality checks and ‘data scrubbing’ procedures to remove and replace participants with response patterns suggesting inauthentic or inattentive responding. These included speeding checks and examination of response patterns. On delivery of the initial sample (N = 660), further screening procedures were performed. Nine respondents were identified who had responded with the same scores to substantive sections of the questionnaire (‘straightlining’), and one had not completed any of the personality items. Twelve respondents were not active Facebook users: six reported using Facebook ‘not at all’ and a further six used it less often than ‘every few weeks’. All of these were removed, leaving N = 638. These checks and exclusions were carried out prior to any data analysis.
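
The post-delivery screening can be illustrated with a short pandas sketch; the file name, item prefixes and response labels below are assumptions made for illustration, not the survey's actual coding.

```python
# Illustrative post-delivery screening, mirroring the checks described above.
# File name, column names and response labels are hypothetical.
import pandas as pd

df = pd.read_csv("study4_raw.csv")

# 'Straightlining': identical responses across a substantive item block
personality_items = [c for c in df.columns if c.startswith("personality_")]
straightliners = df[personality_items].nunique(axis=1) == 1

# Respondents who completed none of the personality items
no_personality = df[personality_items].isna().all(axis=1)

# Respondents who are not active Facebook users (labels are assumed)
inactive = df["facebook_use"].isin(["not at all", "less often than every few weeks"])

clean = df[~(straightliners | no_personality | inactive)]
print(len(df), len(clean))   # e.g. 660 before exclusions, 638 after
```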

Participants.

The target sample size was planned to exceed N = 614, as in Study 1. Qualtrics was contracted to provide a sample of active Facebook users that was broadly representative of the US population in terms of gender; education level; and age profile (18+). Sampling quotas were used to assemble a sample comprising approximately one third each self-describing as left-wing, centre and right-wing in their political orientation. Sampling errors on the part of Qualtrics led to over-recruitment of individuals aged 65 years, who make up 94 of the 160 individuals in the 60–69 age group. As a consequence, the 60–69 age group is itself over-represented in this sample compared to the broader US population. Participant demographics are shown in Table 1, column 4. For the focal analysis in this study, the sample size conferred 92.6% power to detect R2 = .04 in a multiple regression with 15 predictors (2-tailed, alpha = .05).
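
The quoted power figure can be approximated from first principles using the noncentral F distribution. The sketch below assumes Cohen's convention λ = f²·N with f² = R²/(1 − R²); under those assumptions it should return a value close to the 92.6% reported.

```python
# Approximate power for detecting R^2 = .04 with 15 predictors and N = 638,
# using the noncentral F distribution (assumed convention: lambda = f^2 * N).
from scipy import stats

N, k, R2, alpha = 638, 15, 0.04, 0.05

f2 = R2 / (1 - R2)                    # Cohen's f-squared
dfn, dfd = k, N - k - 1               # numerator / denominator df for the model F test
ncp = f2 * N                          # noncentrality parameter

f_crit = stats.f.ppf(1 - alpha, dfn, dfd)       # critical F under the null
power = stats.ncf.sf(f_crit, dfn, dfd, ncp)     # probability of exceeding it under the alternative
print(round(power, 3))                # roughly .92-.93, close to the reported 92.6%
```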

Results

Descriptive statistics are summarised in Table 15. All scales had acceptable reliability. The main dependent variable, likelihood of sharing, again had a very skewed distribution with a strong floor effect.

Table 15. Descriptive statistics for participant characteristics and response variables, Study 4.

https://doi.org/10.1371/journal.pone.0239666.t015

To simultaneously test hypotheses 1–4, a multiple regression analysis was carried out using the expanded predictor set from Study 1. Given the inclusion of gender as a predictor variable, the one respondent who did not report their gender as either male or female was excluded from further analysis. The analysis, summarised in Table 16, indicated that the model explained 56% of the variance in self-reported likelihood of sharing the three disinformation items. Neither the authoritativeness of the story source, consensus information associated with the stories, nor consistency of the items with participant attitudes (conservatism) was a statistically significant predictor. Extraversion positively predicted ratings of likelihood of sharing. In terms of demographic characteristics, age was a significant predictor, with younger people reporting a higher likelihood of sharing. Finally, people reported a greater likelihood of sharing the items if they believed they were likely to be true, and if they thought they had seen them before.

Table 16. Predictors of rated likelihood of sharing stimuli (extended predictor set), Study 4.

https://doi.org/10.1371/journal.pone.0239666.t016

Participants had also been asked about their historical sharing of untrue political stories, both unknowing and deliberate. Of the 638 participants, 185 (29.0%) indicated that they had ever ‘shared a political news story online that they later found out was made up’, while 132 out of 638 indicated they had shared one that they ‘thought AT THE TIME was made up’ (20.7%). Predictors of whether or not people had shared untrue material under both sets of circumstances were examined using logistic regressions, with the same sets of participant-level predictors.

Having unknowingly shared untrue material (Table 17) was significantly predicted by higher New Media Literacy, lower Conscientiousness, higher education, and higher levels of Facebook use. Having shared material known to be untrue at the time (Table 18) was significantly predicted by higher Extraversion, lower Agreeableness, younger age, and higher levels of Facebook use.

Table 17. Binary logistic regression: Predictors of whether participants had previously shared political stories they subsequently discovered were false, Study 4.

https://doi.org/10.1371/journal.pone.0239666.t017

Table 18. Binary logistic regression: Predictors of whether participants had previously shared political stories they knew at the time were false, Study 4.

https://doi.org/10.1371/journal.pone.0239666.t018

Discussion

Again, the pattern of results emerging from Study 4 showed some similarities to, but also some differences from, Studies 1–3. Once again, hypotheses 1, 2 and 4 were not supported by the results. As in Study 3, but unlike Studies 1 and 2, conservatism (the proxy for consistency) did not predict sharing the stories. Belief that the stories were true, and likelihood of having seen them before, were the strongest predictors. Higher Extraversion (a new finding) and lower age (as in Study 3) were associated with higher reported likelihood of sharing the stimuli.

For historical sharing, for the first time–and counterintuitively–new media literacy was associated with a higher likelihood of having shared false material unknowingly. As in Studies 1 and 3, lower Conscientiousness was also important. Counterintuitively, higher education levels were associated with higher unintentional sharing, as were higher levels of Facebook use. For intentional sharing, higher Extraversion was a predictor, as were lower Agreeableness, younger age and higher levels of Facebook use.

General discussion

When interpreting the overall pattern of results from Studies 1–4, given the weakness of most of the associations, it is likely to be most useful to focus on relationships that are replicated across studies and to disregard ‘one-off’ findings. Tables 19–21 provide a summary of the statistically significant predictors in each of the studies. It is clear that two variables consistently predicted self-rated likelihood of sharing disinformation exemplars: belief that the stories were likely to be true, and likely prior familiarity with the stories. It is also clear that three key variables did not: markers of authority, markers of consensus, and digital literacy.

Table 19. Predictors of self-reported likelihood of sharing, retweeting or liking stimuli across all studies.

https://doi.org/10.1371/journal.pone.0239666.t019

Table 20. Predictors of unknowingly sharing false political stories across all studies.

https://doi.org/10.1371/journal.pone.0239666.t020

Table 21. Predictors of knowingly sharing false political stories across all studies.

https://doi.org/10.1371/journal.pone.0239666.t021

Hypothesis 1 predicted that stories portrayed as coming from more authoritative sources would be more likely to be shared. However, this was not observed in any of the four studies. One interpretation is that the manipulation failed. Yet pilot work (see Study 1, Study 4) with comparable samples indicated that people did see the sources as differing in authoritativeness. The failure to find the predicted effect could also be due to the use of simulated scenarios–though care was taken to ensure they resembled reality–or to weaknesses in the methodology, such as the distributional properties of the dependent variables. However, consistent relationships between other predictors and the dependent variable were observed. Thus, the current studies provide no evidence that the authoritativeness of a source influences sharing behaviour.

Hypothesis 2 predicted that stories portrayed as having a higher degree of consensus in audience reactions (i.e. high numbers of people had previously shared them) would be more likely to be shared. In fact, consensus markers had no effect on self-reported probability of sharing or liking the stories. Therefore, the current studies provide no evidence that indicators of ‘social proof’ influence participant reactions to the stimuli.

Hypothesis 3 was that people would be more likely to share materials consistent with their pre-existing beliefs. This was operationalised by measuring participants’ political orientation (overall level of conservatism) and using stimuli that were right-wing in their orientation. In Studies 1 and 2, more conservative people were more likely to share the materials. Further evidence for hypothesis 3 comes from the finding, across all studies, that level of belief the stories were “accurate and truthful” was the strongest predictor of likelihood of sharing. This is again in line with the consistency hypothesis: people are behaving in ways consistent with their beliefs. The finding from Study 3 that more conservative people were more likely to have historically shared material they knew to be untrue could also be in line with this hypothesis, given that a great many of the untrue political stories circulated online are conservative-oriented.

Hypothesis 4, that people lower in digital literacy would be more likely to engage with disinformation, was again not supported. As noted earlier, measurement of digital literacy is problematic. However, pilot work showed that the New Media Literacy Scale did differentiate between people with higher and lower levels of social media use in the expected manner, so it is likely to have a degree of validity. In Study 4, higher NMLS scores were associated with having unwittingly shared false material in the past, which is counterintuitive. However, this may reflect the fact that more digitally literate people are better able to recognise, in hindsight, that material they shared was false. Higher NMLS scores were also associated with deliberately sharing falsehoods in Study 3. This could be attributable to the greater ease with which digitally literate individuals can do such things, if motivated to do so.

A number of other variables were included on an exploratory basis, or for the purpose of controlling for possible confounds. Of these, the most important was participants’ ratings of the likelihood that they had seen the stimuli before. This variable was originally included in the design so that any familiarity effects could be controlled for when evaluating the effects of other variables. In fact, rated likelihood of having seen the materials before predicted likelihood of sharing in all four studies, and for the Facebook studies (1 and 4) it was the second most important variable. This is consistent with work on prior exposure to false material online, where prior exposure to fake news headlines increased participants’ ratings of their accuracy [44]. Furthermore, it has been found that prior exposure to fake-news headlines reduced participants’ ratings of how unethical it was to share or publish the material, even when it was clearly marked as false [43]. Thus, repeated exposure to false material may increase our likelihood of sharing it. It is known that repeated exposure to statements increases people’s subjective ratings of their truth [48]. However, there must be more going on here, because the regression analyses indicated that the familiarity effect was independent of the level of belief that the material was true. Considered alongside work finding that amplification of content by bot networks led to greater levels of human sharing [21], the implication is that repeated actual exposure to the materials is what prompts people to share them, not metrics of consensus such as the number of likes or shares displayed beside an article.

Of the five dimensions of personality measured, four (Agreeableness, Extraversion, Neuroticism and Conscientiousness) were predictors of either current or historical sharing in one or more studies. Consistent with findings from [24], Studies 1 and 3 found that lower Agreeableness was associated with a greater probability of sharing or liking the stories. It was also associated with accidental historical sharing in Study 1, and deliberate historical sharing in Studies 1 and 4. In contrast, past research on personality and social media behaviour indicates that more agreeable people are more likely to share information on social media: [49] reported that this relationship was mediated by trust, while [32] found that higher Agreeableness was associated with higher levels of social media use in general. Given those findings, it is likely that the current results are specific to disinformation stimuli rather than to social sharing in general. Agreeableness could potentially interact with the source of the information: more agreeable people might conceivably be more eager to please those close to them. However, while it is possible that Agreeableness interacted in some way with the framing of the material as having been shared by ‘a friend’ in Study 1, Study 3 had no such framing. More broadly, the nature of the stories may be important: disinformation items are typically critical or hostile in nature. This may mean they are more likely to be shared by disagreeable people, who may themselves be critical in their outlook and unconcerned about offending others. Furthermore, Agreeableness is associated with general trusting behaviour. It may be that disagreeable people are therefore more likely to endorse conspiracist material, or other items consistent with a lack of trust in politicians or other public figures.

Lower Conscientiousness was associated with accidental historical sharing of false political stories in Studies 1, 3 and 4. This is unsurprising, as less conscientious people would be less likely to check the veracity of a story before sharing it. The lack of an association with deliberate historical sharing reinforces this view.

Higher Extraversion was associated with probability of sharing in Study 4, with accidental historical sharing in Studies 2 and 3, and with deliberate historical sharing in Study 4. Higher Neuroticism was associated with deliberate historical sharing in Studies 2 and 3. All of these relationships may simply reflect a greater tendency on the part of extraverted and neurotic individuals to use social media [32].

There are clearly some links between personality and the sharing of disinformation. However, the relationships are weak and inconsistent across studies. It is possible that different traits affect different behaviours: for example, low Conscientiousness is associated with accidental but not deliberate sharing, while high Neuroticism is associated with deliberate but not accidental sharing. Thus, links between some personality traits and the spread of disinformation may be context- and motivation-specific, rather than reflecting blanket associations. However, lower Agreeableness–and to a lesser extent higher Extraversion–may predict an overall tendency to spread this kind of material.

Demographic variables were also measured and included in the analyses. Younger individuals rated themselves as more likely to engage with the disinformation stimuli in Studies 3 and 4, and were more likely to have shared untrue political stories in the past either accidentally (Study 1) or deliberately (Studies 1 and 4). This runs counter to findings that older adults were much more likely to have spread material from ‘fake news’ domains [16]. It is possible that the current findings simply reflect a tendency of younger people to be more active on social media.

People with lower levels of education reported a greater likelihood of sharing the disinformation stories in Studies 1 and 2. Counterintuitively, more educated people were more likely to have accidentally shared false material in the past (Study 4). One possible explanation is that more educated people are more likely to have realised that they had done this, so the effect in Study 4 reflects an influence on reporting of the behaviour rather than on the behaviour itself.

In each of Studies 1, 2 and 3, men reported a greater likelihood of sharing or liking the stimuli. Men were also more likely to have shared false material in the past unintentionally (Study 3) or deliberately (Study 2). Given its replicability, this would seem to be a genuine relationship, but one which is not easy to explain.

Finally, the level of use of particular platforms (Facebook, Twitter or Instagram) did not predict likelihood of sharing the stimuli in any study. Level of use of Twitter (Study 2) predicted accidental sharing of falsehoods, while Facebook use predicted both accidental and deliberate sharing (Study 4). For historical sharing, this may be attributable to a volume effect: the more you use the platforms, the more likely you are to do these things. It should be noted that the level of use metric lacked granularity and had a strong ceiling effect, with most people reporting the highest use level in each case.

In all four studies, a minority of respondents indicated that they had previously shared political disinformation they had encountered online, either by mistake or deliberately. The proportion who had done each varied across the four studies, likely as a function of the population sampled (13.1%–29.0% accidentally; 6.2%–20.7% deliberately), but the figures are of a similar magnitude to those reported elsewhere [31, 16]. Even if the proportion of social media users who deliberately share false information is just 6.2%, the lowest figure found here, that is still a very large number of people who are actively and knowingly spreading untruths.

The current results indicate that a number of variables predict the onward sharing of disinformation. However, most of these relationships are very small. It has been argued that the minimum effect size for a predictor with real-world importance in social science data is β = .2 [42]. Considering the effect sizes for the predictors in Tables 4, 8, 12 and 16, only belief that the stories are true exceeds this benchmark in every study, while probability of having seen the stories before exceeded it in Studies 1 and 4. None of the other relationships reported exceeded the threshold. This has implications for the practical importance of these findings, in terms of informing interventions to counteract disinformation.

Practical implications

Some of the key conclusions in this set of studies arise from the failure to find evidence supporting an effect. Proceeding from such findings to a firm conclusion is a logically dangerous endeavour: absence of evidence is not, of course, evidence of absence. However, given the evidence from pilot studies that the manipulations were appropriate; the associations of the dependent measures with other variables; and the high levels of power to detect the specified effects, it is possible to say with some confidence that hypotheses 1, 2 and 4 are not supported by the current data. This means that the current project does not provide any evidence that interventions based on these would be of value.

This is particularly important for the findings around digital literacy. Raising digital media literacy is a common and appealing policy position for bodies concerned with disinformation (e.g. [1]). There is evidence from a number of trials that such interventions can be effective in the populations studied. However, no support was found here for the idea that digital literacy plays a role in the spread of disinformation. This could potentially be attributed to the methodology of the current studies. However, some participants–288 in total across all four studies–reported sharing false political stories that they knew at the time were made up. It is hard to see how raising digital literacy would reduce such deliberate deception. Trying to raise digital literacy across the population is therefore unlikely ever to be a complete solution.

There is evidence that consistency with pre-existing beliefs can be an important factor, especially in relation to beliefs that disinformation stories are accurate and truthful. This implies that disinformation is likely to be most effective when targeted at individuals whose existing opinions or beliefs are consistent with it, rather than as a tool for changing people’s minds. While this insight would be more useful to those seeking to spread disinformation, it could also help identify populations worth targeting with counter-messages. Targeting on other variables–personality or demographic–is unlikely to be of value given the low effect sizes. While these variables (perhaps gender and Agreeableness in particular) most likely do play a role, their relative importance seems so low that the information is unlikely to be useful in practice.

Alongside other recent work [43, 44], the current findings suggest that repeated exposure to disinformation materials may increase our likelihood of sharing them, even if we do not believe them. The practical implication is that to get a message to spread online, one should repeat it many times (there is a clear parallel with the ‘repeat the lie often enough’ maxim regarding propaganda). Social proof (markers of consensus) seems unimportant on the basis of the current findings, so there is little point in trying to manipulate the numbers displayed next to a post, as is sometimes done in online marketing. What might be more effective is to have the message posted many times (e.g. by bots), so that people have a greater chance of coming across it repeatedly. This would be true both for disinformation and for counter-messages.

Limitations

As a scenario-based study, the current work has a number of limitations. While it is ethically preferable to field experiments, it suffers from reduced ecological validity and reliance on self-reports rather than genuine behaviour. Questions could be asked, for example, about whether the authoritativeness and consensus manipulations were sufficiently salient to participants (even though they closely mirrored the presentation of this information in real-life settings). Beyond this, questions might be raised about the use of self-reported likelihood of sharing: does sharing intention reflect real sharing behaviour? In fact, there is evidence to suggest that it does, with recent work finding that self-reported willingness to share news headlines on social media paralleled the actual level of sharing of those materials on Twitter [50].

The scenarios presented were all selected to be right-wing in their orientation, whereas participants spanned the full range of political attitudes from left to right. This means that consistency was only evaluated with respect to one pole of the left–right dimension. A number of other dimensions have been used as wedge issues in real-world information operations: for example, support for or opposition to the Black Lives Matter movement, climate change, or Britain leaving the European Union. The current research only evaluated consistency between attitudes and a single issue. A better test of the consistency hypothesis would extend that evaluation to consistency between attitudes and some of those other issues.

A key issue is the distributions of the main outcome variables, which were heavily skewed with strong floor effects. While they still had sufficient sensitivity to make the regression analyses meaningful, they also meant that any effects found were likely to be attenuated. It may thus be that the current findings underestimate the strength of some of the associations reported.

Another measurement issue is around the index of social media use (Facebook, Twitter, Instagram). As Table 1 shows, in three of the studies over 60% of respondents fall into the highest use category. Again, this weakens the sensitivity of evaluations of these variables as predictors of sharing disinformation.

In order to identify variables associated with sharing disinformation, this research programme took the approach of presenting individuals with examples of disinformation, then testing which of the measured variables was associated with self-reported likelihood of sharing. A shortcoming of this approach is that it does not permit us to evaluate whether the same variables are associated with sharing true information. An alternative design would be to show participants either true or false information, and examine whether the same constructs predict sharing both. This would enable identification of variables differentially impacting the sharing of disinformation but not true information. Complexity arises, however, from the fact that whether a story can be considered disinformation, misinformation, or true information, depends on the observer’s perspective. False material deliberately placed online would be categorized as disinformation. A social media user sharing it in full knowledge that it was untrue would be sharing disinformation. However, if they shared it believing it was actually true, then from an observer’s perspective this would be technically categorised as misinformation (defined as “the inadvertent sharing of false information” [1, p.10]). In fact, from the user’s perspective, it would be true information (because they believe it) even though an omniscient observer would know it was actually false. This points to the importance of further research into user motivations for sharing, which are likely to differ depending on whether or not they believe the material is true.

In three of the four studies (Studies 1, 2 and 4), the stimulus material was introduced as having been posted by a friend who wanted the participant to share it. This is likely to have boosted the rates of self-reported likelihood of sharing in those studies. Previous work has shown that people rate themselves as more likely to engage with potential disinformation stories posted by a friend, as opposed to a more distant acquaintance [24]. To be clear, this does not compromise the testing of hypotheses in those studies (given that the framing was the same for all participants, in all conditions). It is also a realistic representation of how we may encounter material like this in our social media feeds. However, it does introduce an additional difference between Studies 1, 2 and 4 when compared with Study 3. It would be desirable for further work to check whether the same effects are found when messages are framed as having been posted by people other than friends.

Finally, the time spent reading and reacting to the disinformation stimuli was not measured. It is possible that faster response times would be indicative of more use of heuristics rather than considered thought about the issues. This could profitably be examined, potentially in observational or simulation studies rather than using self-report methodology.

Future work

A number of priorities for future research arise from the current work. First, it is desirable to confirm these findings using real-world behavioural measures rather than simulations. While it is not ethically acceptable to run experimental studies posting false information on social media, it would be possible to do real-world observational work. For example, one could measure digital literacy in a sample of respondents, then do analyses of their past social media sharing behaviour.

Another priority revolves around those individuals who knowingly share false information. Why do they do this? Without understanding the motivations of this group, any interventions aimed at reducing the behaviour are unlikely to be successful. As well as being of academic interest, motivation for sharing false material has been flagged as a gap in our knowledge by key stakeholders [7].

The current work found that men were more likely to spread disinformation than women. At present, it is not clear why this was the case. Are there gender-linked individual differences that influence the behaviour? Could it be that the subject matter of disinformation stories is stereotypically more interesting to men, or that men think their social networks are more likely to be interested in or sympathetic to them?

While the focus in this paper has been on factors influencing the spread of untruths, it should be remembered that ‘fake news’ is only one element in online information operations. Other tactics and phenomena, such as selective or out-of-context presentation of true information, political memes, and deliberately polarising hyperpartisan communication, are also prevalent. Work is required to establish whether the findings of this project, which relate to disinformation, also apply to those other forms of computational propaganda. Related to this, it would be of value to establish whether the factors found here to influence the sharing of untrue information also influence the sharing of true information. This would indicate whether there is anything distinctive about disinformation, and also point to factors that might influence sharing of true information that is selectively presented in information operations.

Conclusion

The current work allows some conclusions to be drawn about the kind of people who are likely to further spread disinformation material they encounter on social media. Typically, these will be people who think the material is likely to be true, or who have beliefs consistent with it. They are likely to have previous familiarity with the materials. They are likely to be younger, male, and less educated. With respect to personality, it is possible that they will tend to be lower in Agreeableness and Conscientiousness, and higher in Extraversion and Neuroticism. With the exception of consistency and prior exposure, all of these effects are weak and may be inconsistent across different populations, platforms, and behaviours (deliberate vs. accidental sharing). The current findings do not suggest that such people are likely to be influenced by the source of the material they encounter, or by indicators of how many other people have previously engaged with it. No evidence was found that level of literacy regarding new digital media makes much difference to their behaviour. These findings have implications for how governments and other bodies should go about tackling the problem of disinformation in social media.

References

1. House of Commons Digital, Culture, Media and Sport Committee. Disinformation and ‘fake news’: Final Report. 2019 Feb 2 [cited 18 Feb 2019]. Available from: https://publications.parliament.uk/pa/cm201719/cmselect/cmcumeds/1791/1791.pdf.
2. Bradshaw S, Howard PN. The Global Disinformation Order: 2019 Global Inventory of Organised Social Media Manipulation. 2019 [cited 26 September 2019]. Available from: https://comprop.oii.ox.ac.uk/wp-content/uploads/sites/93/2019/09/CyberTroop-Report19.pdf.
3. Krasodomski-Jones A, Judson E, Smith J, Miller C, Jones E. Warring Songs: Information operations in the digital age. 2019 May [cited 21 May 2019]. Available from: https://demos.co.uk/wp-content/uploads/2019/05/Warring-Songs-final-1.pdf.
4. Howard PN, Ganash B, Liotsiou D, Kell J, François C. The IRA, Social Media and Political Polarization in the United States, 2012–2018. Working Paper 2018.2. 2018 [cited 20 December 2019]. Available from: https://comprop.oii.ox.ac.uk/wp-content/uploads/sites/93/2018/12/IRA-Report-2018.pdf.
5. Zerback T, Töpfl F, Knöpfle M. The disconcerting potential of online disinformation: Persuasive effects of astroturfing comments and three strategies for inoculation against them. New Media & Society. 2020. Available from: https://doi.org/10.1177/1461444820908530.
6. Parveen N, Waterson J. UK phone masts attacked amid 5G-coronavirus conspiracy theory. The Guardian. 2020 April 4 [cited 2020 July 17]. Available from: https://www.theguardian.com/uk-news/2020/apr/04/uk-phone-masts-attacked-amid-5g-coronavirus-conspiracy-theory.
7. House of Commons Digital, Culture, Media and Sport Committee. Misinformation in the COVID-19 Infodemic. 2020 July 21 [cited 21 July 2020]. Available from: https://committees.parliament.uk/publications/1954/documents/19089/default/.
8. Bradshaw S, Neudert L-M, Howard PN. Government responses to malicious use of social media. 2018 [cited 17 January 2019]. Available from: https://www.stratcomcoe.org/government-responses-malicious-use-social-media.
9. Facebook. What’s the difference between organic, paid and post reach? 2019 [cited 31 July 2019]. Available from: https://www.facebook.com/help/285625061456389.
10. Vosoughi S, Roy D, Aral S. The spread of true and false news online. Science. 2018;359: 1146–1151. pmid:29590045
11. Petty RE, Cacioppo JT. The Elaboration Likelihood Model of Persuasion. In: Berkowitz L, editor. Advances in Experimental Social Psychology Volume 19. Academic Press; 1986. p. 123–205.
12. Hayes RA, Carr CT, Wohn DY. One Click, Many Meanings: Interpreting Paralinguistic Digital Affordances in Social Media. Journal of Broadcasting & Electronic Media. 2016;60: 171–187.
13. Williams EJ, Beardmore A, Joinson AN. Individual differences in susceptibility to online influence: A theoretical review. Computers in Human Behavior. 2017;72: 412–421.
14. Cook J, Lewandowsky S, Ecker UKH. Neutralizing misinformation through inoculation: Exposing misleading argumentation techniques reduces their influence. PLoS One. 2017;12: e0175799. pmid:28475576
15. Cialdini RB. Influence: The Psychology of Persuasion. New York: HarperCollins; 2009.
16. Guess A, Nagler J, Tucker J. Less than you think: Prevalence and predictors of fake news dissemination on Facebook. Science Advances. 2019;5: eaau4586. pmid:30662946
17. Festinger L. A theory of cognitive dissonance. Stanford University Press; 1957.
18. Moravec PL, Minas RK, Dennis AR. Fake News on Social Media: People Believe What They Want to Believe When it Makes No Sense At All. MIS Quarterly. 2019;43: 1343–1360.
19. Roethke K, Klumpe J, Adam M, Benlian A. Social influence tactics in e-commerce onboarding: The role of social proof and reciprocity in affecting user registrations. Decision Support Systems. 2020.
20. Innes M, Dobreva D, Innes H. Disinformation and digital influencing after terrorism: spoofing, truthing and social proofing. Contemporary Social Science. 2019.
21. Shao C, Ciampaglia GL, Varol O, Yang KC, Flammini A, Menczer F. The spread of low-credibility content by social bots. Nature Communications. 2018;9: 4787. pmid:30459415
22. Bay S, Fredheim R. How Social Media Companies are Failing to Combat Inauthentic Behaviour Online. 2019 [cited 21 February 2020]. Available from: https://stratcomcoe.org/how-social-media-companies-are-failing-combat-inauthentic-behaviour-online.
23. Lin X, Spence PR, Lachlan KA. Social media and credibility indicators: The effect of influence cues. Computers in Human Behavior. 2016;63: 264–271.
24. Buchanan T, Benson V. Spreading Disinformation on Facebook: Do Trust in Message Source, Risk Propensity, or Personality Affect the Organic Reach of “Fake News”? Social Media + Society. 2019;5: 1–9.
25. Mak T, Berry L. Russian Influence Campaign Sought To Exploit Americans’ Trust In Local News. 2018 July 12 [cited 8 August 2018]. Available from: https://www.npr.org/2018/07/12/628085238/russian-influence-campaign-sought-to-exploit-americans-trust-in-local-news.
26. Mitchell A, Gottfried J, Barthel M, Shearer E. The Modern News Consumer: News attitudes and practices in the digital era. 2016 [cited 13 November 2018]. Available from: http://www.journalism.org/2016/07/07/the-modern-news-consumer/.
27. Zuiderveen Borgesius FJ, Möller J, Kruikemeier S, et al. Online Political Microtargeting: Promises and Threats for Democracy. Utrecht Law Review. 2018;14: 82.
28. Chetty K, Qigui L, Gcora N, Josie J, Wenwei L, Fang C. Bridging the digital divide: measuring digital literacy. Economics: The Open-Access, Open-Assessment E-Journal. 2018;12: 1–20.
29. Koc M, Barut E. Development and validation of New Media Literacy Scale (NMLS) for university students. Computers in Human Behavior. 2016;63: 834–843.
30. Vishwanath A, Herath T, Chen R, Wang J, Rao HR. Why do people get phished? Testing individual differences in phishing vulnerability within an integrated, information processing model. Decision Support Systems. 2011;51: 576–586.
31. Barthel M, Mitchell A, Holcomb J. Many Americans Believe Fake News is Sowing Confusion. 2016 Dec [cited 15 March 2018]. Available from: http://assets.pewresearch.org/wp-content/uploads/sites/13/2016/12/14154753/PJ_2016.12.15_fake-news_FINAL.pdf.
32. Gil de Zúñiga H, Diehl T, Huber B, Liu J. Personality Traits and Social Media Use in 20 Countries: How Personality Relates to Frequency of Social Media Use, Social Media News Use, and Social Media Use for Social Interaction. Cyberpsychology, Behavior, and Social Networking. 2017;20: 540–552.
33. Azucar D, Marengo D, Settanni M. Predicting the Big 5 personality traits from digital footprints on social media: A meta-analysis. Personality and Individual Differences. 2018;124: 150–159.
34. Hinds J, Joinson A. Human and Computer Personality Prediction From Digital Footprints. Current Directions in Psychological Science. 2019;28: 204–211.
35. Matz SC, Kosinski M, Nave G, Stillwell DJ. Psychological targeting as an effective approach to digital mass persuasion. Proceedings of the National Academy of Sciences of the United States of America. 2017;114: 12714–12719. pmid:29133409
36. Hall JA, Pennington N, Lueders A. Impression management and formation on Facebook: A lens model approach. New Media & Society. 2013;16: 958–982.
37. Goldberg LR. A broad-bandwidth, public domain, personality inventory measuring the lower-level facets of several five-factor models. In: Mervielde I, Deary IJ, De Fruyt F, Ostendorf F, editors. Personality Psychology in Europe Vol. 7. Tilburg, The Netherlands: Tilburg University Press; 1999. p. 7–28.
38. Buchanan T, Johnson JA, Goldberg LR. Implementing a Five-Factor Personality Inventory for Use on the Internet. European Journal of Psychological Assessment. 2005;21: 115–127.
39. Costa PT, McCrae RR. Revised NEO Personality Inventory (NEO PI-R) and NEO Five-Factor Inventory (NEO FFI): Professional Manual. Odessa, FL: Psychological Assessment Resources; 1992.
40. Everett JA. The 12 item Social and Economic Conservatism Scale (SECS). PLoS One. 2013;8: e82131. pmid:24349200
41. Grinberg N, Joseph K, Friedland L, Swire-Thompson B, Lazer D. Fake news on Twitter during the 2016 U.S. presidential election. Science. 2019;363: 374–378. pmid:30679368
42. Ferguson CJ. An effect size primer: A guide for clinicians and researchers. Professional Psychology: Research and Practice. 2009;40: 532–538.
43. Effron DA, Raj M. Misinformation and Morality: Encountering Fake-News Headlines Makes Them Seem Less Unethical to Publish and Share. Psychological Science. 2020;31: 75–87. pmid:31751517
44. Pennycook G, Cannon TD, Rand DG. Prior exposure increases perceived accuracy of fake news. Journal of Experimental Psychology: General. 2018;147: 1865–1880.
45. Liberini F, Redoano M, Russo A, Cuevas A, Cuevas R. Politics in the Facebook Era. Evidence from the 2016 US Presidential Elections. CAGE Working Paper Series (389). 2018 [cited 17 Dec 2019]. Available from: https://warwick.ac.uk/fac/soc/economics/research/centres/cage/manage/publications/389-2018_redoano.pdf.
46. Meehl PE. Why summaries of research on psychological theories are often uninterpretable. Psychological Reports. 1990;66: 195–244.
47. DiResta R, Shaffer K, Ruppel B, et al. The tactics & tropes of the Internet Research Agency. 2018 [cited 12 Dec 2018]. Available from: https://digitalcommons.unl.edu/cgi/viewcontent.cgi?article=1003&context=senatedocs.
48. Dechêne A, Stahl C, Hansen J, Wänke M. The truth about the truth: a meta-analytic review of the truth effect. Personality and Social Psychology Review. 2010;14: 238–257.
49. Deng S, Lin Y, Liu Y, Chen X, Li H. How Do Personality Traits Shape Information-Sharing Behaviour in Social Media? Exploring the Mediating Effect of Generalized Trust. Information Research: An International Electronic Journal. 2017;22. Available from: http://informationr.net/ir/22-3/paper763.html.
50. Mosleh M, Pennycook G, Rand DG. Self-reported willingness to share political news articles in online surveys correlates with actual sharing on Twitter. PLoS One. 2020;15: e0228882. pmid:32040539