Published in: Political Behavior 3/2020

21-01-2019 | Original Paper

Taking Fact-Checks Literally But Not Seriously? The Effects of Journalistic Fact-Checking on Factual Beliefs and Candidate Favorability

Authors: Brendan Nyhan, Ethan Porter, Jason Reifler, Thomas J. Wood

Abstract

Are citizens willing to accept journalistic fact-checks of misleading claims from candidates they support and to update their attitudes about those candidates? Previous studies have reached conflicting conclusions about the effects of exposure to counter-attitudinal information. As fact-checking has become more prominent, it is therefore worth examining how citizens respond to fact-checks of politicians—a question with important implications for understanding the effects of this journalistic format on elections. We present results of two experiments conducted during the 2016 campaign that test the effects of exposure to realistic journalistic fact-checks of claims made by Donald Trump during his convention speech and a general election debate. These messages improved the accuracy of respondents’ factual beliefs, even among his supporters, but had no measurable effect on attitudes toward Trump. These results suggest that journalistic fact-checks can reduce misperceptions but often have minimal effects on candidate evaluations or vote choice.
Footnotes
1
Wintersieck (2017) is a notable exception. There are crucial differences between our study and hers, however. First, whereas Wintersieck looks at candidates deemed “honest” by fact-checkers, our studies examine statements flagged by fact-checkers for being false. Second, while Wintersieck focuses on a statewide election and recruits student subjects at a university, we enroll broader pools of participants in two experiments about a national election. This sampling distinction is particularly relevant here given that students might be more disposed to engage in the effortful cognition required to counterargue unwelcome information such as fact-checks (Krupnikov and Levine 2014).
 
2
Our preregistration for Study 1 documents our hypotheses and analysis plan (http://www.egap.org/registration/2194). Unless otherwise noted, all Study 1 analyses are consistent with this document. Study 2 was conducted too rapidly to be preregistered (it was fielded immediately after the debate), but our analysis follows Study 1 to the greatest extent possible.
 
3
As discussed above, previous findings are mixed on both hypotheses. For H1, see Nyhan and Reifler (2010) (backfire on two of five studies) versus Wood and Porter (2018) (no cases of backfire). For H2, compare Wood and Porter (2018), which finds a consistent pattern of ideological differentials in belief updating, with Nyhan and Reifler (N.d.), which finds no evidence of differential acceptance when fact-checks are pro-attitudinal.
 
4
Findings for two other preregistered research questions are described below and in the online appendix.
 
5
As we describe below, we also seek to maximize the realism of the treatments we use to test the effects of elite messages denigrating a fact-check. Study 1 tests the effects of exposure to actual statements made by Paul Manafort, Trump’s campaign chairman at the time, challenging the fact-checking of Trump’s convention speech.
 
6
It is important to note that journalistic fact-checks do not always logically contradict a speaker (e.g., Marietta et al. 2015; Uscinski and Butler 2013). Fact-checkers often seek instead to address possible inferences that listeners might draw from a candidate’s statement. For instance, Trump’s nomination speech described an “epidemic” of violent crime. He did not directly state that crime had increased, but a listener might infer as much (indeed, Trump made clear statements about increasing crime rates at other times). Like other journalistic fact-checks, our treatment thus cites FBI data on the long-term decline in violent crime. Similarly, Trump’s debate statement emphasized factory jobs leaving Ohio and Michigan. While he did not directly say that employment in Michigan and Ohio was suffering because of trade policy, he implied that widespread job loss was taking place. Consequently, our fact-check, like several in the media, provided data on changes in jobs and unemployment in those states.
 
7
The full instrument is in Online Appendix A.
 
8
Per our preregistration, respondents who indicated crime was up due to inequality or unemployment were coded as -1 (liberal), those who said crime was up due to moral decline or down due to tougher policing were coded as 1 (conservative), and other responses were coded as 0.
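The preregistered coding rule above can be sketched as a small function. This is an illustrative sketch only: the response labels below are hypothetical stand-ins for the actual survey response options, which appear in the full instrument in Online Appendix A.

```python
# Illustrative sketch of the preregistered coding rule described above.
# The response labels are hypothetical; the actual survey options differ.

def code_crime_explanation(response: str) -> int:
    """Map a crime-explanation response to a -1/0/1 ideological code."""
    liberal = {"up_inequality", "up_unemployment"}        # coded -1 (liberal)
    conservative = {"up_moral_decline", "down_policing"}  # coded +1 (conservative)
    if response in liberal:
        return -1
    if response in conservative:
        return 1
    return 0  # all other responses coded 0

print(code_crime_explanation("up_inequality"))   # -1
print(code_crime_explanation("down_policing"))   # 1
print(code_crime_explanation("dont_know"))       # 0
```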
 
9
Demographic and balance data for both samples are provided in Online Appendix C.
 
10
All analyses in this section are consistent with our preregistration unless otherwise indicated. OLS models are replicated using ordered probit where applicable in Online Appendix C.
 
11
Mean scores on two attention checks were 1.62 and 1.92 for controls and 1.59 and 1.87 for the treatment groups on Morning Consult and Mechanical Turk, respectively. (See Online Appendix A for wording.) We therefore deviate slightly from our preregistration to omit consideration of response time as a measure of attention.
 
12
We report equivalent but more complex models estimated on the full sample in Online Appendix C.
 
13
These quantities are estimated relative to the control condition; the differences are not statistically significant relative to the uncorrected condition.
 
14
Findings are similar for perceived article bias (see Online Appendix C).
 
15
In Table C19 in Online Appendix C, we show that the manipulation had no effect on favorability toward Clinton or Barack Obama either.
 
16
Because the broader experiment in which Study 2 was embedded was designed to examine how post-debate news coverage affected debate perceptions, participants were assigned to one of five content consumption conditions that were orthogonal to the randomization we examine here (C-SPAN with no post-debate coverage, Fox News with or without post-debate coverage, or MSNBC with or without post-debate coverage). We excluded subjects who did not have access to cable and block-randomized by party and preferred cable channel. For additional details, see Gross et al. (2016).
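The block randomization described above (assigning subjects to conditions within party × preferred-channel blocks) can be sketched as follows. This is a generic illustration, not the authors' code; the group labels, helper function, and block sizes are hypothetical.

```python
# Illustrative sketch of block randomization by party and preferred cable
# channel, as described above. Labels and block sizes are hypothetical.
import random

def block_randomize(subjects, blocks=("party", "channel"), arms=2, seed=42):
    """Shuffle subjects within each block, then assign arms round-robin
    so that arms stay balanced within every (party, channel) block."""
    rng = random.Random(seed)
    groups = {}
    for s in subjects:
        groups.setdefault(tuple(s[b] for b in blocks), []).append(s)
    assignment = {}
    for members in groups.values():
        rng.shuffle(members)
        for i, s in enumerate(members):
            assignment[s["id"]] = i % arms
    return assignment

# Hypothetical panel: 10 subjects in each of four party-by-channel blocks.
subjects = [
    {"id": i, "party": p, "channel": c}
    for i, (p, c) in enumerate(
        (p, c) for p in ("D", "R") for c in ("Fox", "MSNBC") for _ in range(10)
    )
]
arms = block_randomize(subjects)  # each block ends up split 5/5 across arms
```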
 
17
The instrument was prepared before transcripts were available, so the statement in our study differs slightly from the official transcript.
 
18
See Online Appendix C for details on participant demographics and experimental balance. Though we cannot rule out the possibility of post-treatment bias (Montgomery et al. 2018), we find no significant effect of treatment assignment at wave 2 on wave 3 participation in a simple OLS model (\(\beta = 0.05\), \(p>.10\)).
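The attrition check described above (a simple OLS regression of wave 3 participation on wave 2 treatment assignment) can be sketched on simulated data. The data-generating process here is illustrative only; in the simulation, retention is independent of treatment, so the estimated coefficient should be near zero.

```python
# Minimal sketch of the attrition check described above: regress a wave-3
# participation indicator on the wave-2 treatment dummy via OLS.
# The data are simulated for illustration; the paper uses the actual panel.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
treat = rng.integers(0, 2, n)            # wave-2 treatment assignment (0/1)
participated = rng.binomial(1, 0.7, n)   # wave-3 retention, independent of treatment

X = np.column_stack([np.ones(n), treat])  # intercept + treatment dummy
beta, *_ = np.linalg.lstsq(X, participated, rcond=None)
print(f"treatment coefficient: {beta[1]:.3f}")  # near 0 when attrition is unrelated
```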
 
19
Such fact-checks are common. For instance, more than 60% of the claims rated by PolitiFact and the Washington Post Fact Checker were found to be mostly or totally false by both fact-checkers (Lim 2018). Moreover, fact-checkers consider it part of their mission to check claims against official data sources and frequently do so. Graves (2016, p. 85) writes, for instance, that “Fact-checkers always seek official data and often point to examples like this [a fact-check assessing claims about government spending and job growth using federal data] to explain what they do.”
 
20
The design includes neither a control condition nor the fact-check denial and denial/source derogation conditions. The omitted category is an uncorrected statement.
 
Literature
Bolsen, T., Druckman, J. N., & Cook, F. L. (2014). The influence of partisan motivated reasoning on public opinion. Political Behavior, 36(2), 235–262.
Chan, M. P. S., Jones, C. R., Jamieson, K. H., & Albarracín, D. (2017). Debunking: A meta-analysis of the psychological efficacy of messages countering misinformation. Psychological Science, 28(11), 1531–1546.
Flynn, D. J. (2016). The scope and correlates of political misperceptions in the mass public. Unpublished paper, Dartmouth College.
Funk, C. L. (1999). Bringing the candidate into models of candidate evaluation. The Journal of Politics, 61(3), 700–720.
Gaines, B. J., Kuklinski, J. H., Quirk, P. J., Peyton, B., & Verkuilen, J. (2007). Same facts, different interpretations: Partisan motivation and opinion on Iraq. Journal of Politics, 69(4), 957–974.
Garrett, R. K., Nisbet, E. C., & Lynch, E. K. (2013). Undermining the corrective effects of media-based political fact checking? The role of contextual cues and naïve theory. Journal of Communication, 63(4), 617–637.
Graves, L. (2016). Deciding what’s true: The rise of political fact-checking in American journalism. New York: Columbia University Press.
Hill, S. J. (2017). Learning together slowly: Bayesian learning about political facts. The Journal of Politics, 79(4), 1403–1418.
Hochschild, J. L., & Einstein, K. L. (2015). Do facts matter? Information and misinformation in American politics. Norman, OK: University of Oklahoma Press.
Jamieson, K. H. (2015). Implications of the demise of ‘Fact’ in political discourse. Proceedings of the American Philosophical Society, 159(1), 66–84.
Jarman, J. W. (2016). Motivated to ignore the facts: The inability of fact-checking to promote truth in the public sphere. In J. Hannan (Ed.), Truth in the public sphere. London: Lexington Books.
Khanna, K., & Sood, G. (2018). Motivated responding in studies of factual learning. Political Behavior, 40(1), 79–101.
Krupnikov, Y., & Levine, A. S. (2014). Cross-sample comparisons and external validity. Journal of Experimental Political Science, 1(1), 59–80.
Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108(3), 480–498.
Lenz, G. S. (2012). Follow the leader? How voters respond to politicians’ performance and policies. Chicago, IL: University of Chicago Press.
Lim, C. (2018). Checking how fact-checkers check. Research & Politics, 5(3), 2053168018786848.
Marietta, M., Barker, D. C., & Bowser, T. (2015). Fact-checking polarized politics: Does the fact-check industry provide consistent guidance on disputed realities? The Forum: A Journal of Applied Research in Contemporary Politics, 13(4), 577–596.
Molden, D. C., & Higgins, E. T. (2005). Motivated thinking. In K. J. Holyoak & R. G. Morrison (Eds.), The Cambridge handbook of thinking and reasoning. Cambridge: Cambridge University Press.
Montgomery, J. M., Nyhan, B., & Torres, M. (2018). How conditioning on posttreatment variables can ruin your experiment and what to do about it. American Journal of Political Science, 62(3), 760–775.
Nyhan, B., & Reifler, J. (2010). When corrections fail: The persistence of political misperceptions. Political Behavior, 32(2), 303–330.
Nyhan, B., & Reifler, J. (2015). The effect of fact-checking on elites: A field experiment on US state legislators. American Journal of Political Science, 59(3), 628–640.
Nyhan, B., Reifler, J., & Ubel, P. A. (2013). The hazards of correcting myths about health care reform. Medical Care, 51(2), 127–132.
Pierce, P. A. (1993). Political sophistication and the use of candidate traits in candidate evaluation.
Porter, E., Wood, T. J., & Kirby, D. (2018). Sex trafficking, Russian infiltration, birth certificates, and pedophilia: A survey experiment correcting fake news. Journal of Experimental Political Science, 2(5), 304–331.
Rahn, W. M., Aldrich, J. H., Borgida, E., & Sullivan, J. L. (1990). A social cognitive model of candidate appraisal. In J. A. Ferejohn & J. H. Kuklinski (Eds.), Information and democratic processes. Champaign: University of Illinois Press.
Spivak, C. (2011). The fact-checking explosion. American Journalism Review, 32, 38–43.
Taber, C. S., & Lodge, M. (2006). Motivated skepticism in the evaluation of political beliefs. American Journal of Political Science, 50(3), 755–769.
Uscinski, J., & Butler, R. (2013). The epistemology of fact checking. Critical Review, 25(2), 162–180.
Weeks, B. E. (2015). Emotions, partisanship, and misperceptions: How anger and anxiety moderate the effect of partisan bias on susceptibility to political misinformation. Journal of Communication, 65(4), 699–719.
Wintersieck, A. L. (2017). Debating the truth: The impact of fact-checking during electoral debates. American Politics Research, 45(2), 304–331.
Wood, T., & Porter, E. (2018). The elusive backfire effect: Mass attitudes’ steadfast factual adherence. Political Behavior.
Young, D., Shannon, J. K. H. P., & Goldring, A. (2017). Debunking: A meta-analysis of the psychological efficacy of messages countering misinformation. Psychological Science, 28(11), 1531–1546.
Zaller, J. (1992). The nature and origins of mass opinion. Cambridge: Cambridge University Press.
Metadata
Publisher: Springer US
Print ISSN: 0190-9320
Electronic ISSN: 1573-6687
DOI: https://doi.org/10.1007/s11109-019-09528-x