The Persistence and Peril of Misinformation

Defining what truth means and deciphering how human brains verify information are among the challenges in battling widespread falsehoods.



November-December 2017, Volume 105, Number 6, page 372

DOI: 10.1511/2017.105.6.372

Misinformation—both deliberately promoted and accidentally shared—is perhaps an inevitable part of the world in which we live, but it is not a new problem. People likely have lied to one another for roughly as long as verbal communication has existed. Deceiving others can offer an apparent opportunity to gain strategic advantage, to motivate others to action, or even to protect interpersonal bonds. Moreover, people inadvertently have been sharing inaccurate information with one another for thousands of years.


However, we currently live in an era in which technology enables information to reach large audiences distributed across the globe, and thus the potential for immediate and widespread effects from misinformation now looms larger than in the past. Yet the means to correct misinformation over time might be found in those same patterns of mass communication and of the facilitated spread of information.


Misinformation vs. Disinformation

Misinformation is concerning because of its potential to unduly influence attitudes and behavior, leading people to think and act differently than they would if they were correctly informed, as suggested by the research teams of Stephan Lewandowsky of the University of Bristol in the United Kingdom and Elizabeth Marsh of Duke University, among others. In other words, we worry that misinformation (or false information) might lead people to hold misperceptions (or false beliefs) and that these misperceptions, especially when they occur among large groups of people, may have downstream consequences for health, social harmony, and political life.


Does misinformation require intentional deceit on the part of the presenter? Philosopher Jürgen Habermas, who taught at institutions such as Goethe University Frankfurt and the Max Planck Institute in Germany until formally retiring in the 1990s, focuses on a speaker’s intent to deceive to distinguish between misinformation and disinformation. Habermas views truth as possible only collectively, as a product of consensus among people; one’s collegial participation in such collective understanding also matters. Misinformation, from that perspective, is contentious information reflecting disagreement among people, whereas disinformation is more problematic because it involves the deliberate alienation or disempowerment of other people. Lewandowsky and his colleagues have carried forward this definition of disinformation as intentionally incorrect information.


From a critical theory perspective, we might worry about the impossibility of proving that any information is objectively true, a concern that complicates any distinction of misinformation from information on the basis of the former being false and the latter being true. Nonetheless, we respectfully reject a worldview in which no degree of consensus among people—agreement as to what is true—is possible. If we allow that a claim acknowledged by consensus to be true is valid, we then can position misinformation as a category of claims for which there is at least substantial disagreement (or even a consensus for rejection) when their truthfulness is judged by the widest feasible range of observers. That approach would include disinformation as a special type of misinformation distinguished by the promoter’s intent.

From an ethical perspective, many people worry most about active promotion of disinformation. We think it is best, however, to continue to use the word misinformation, because we should acknowledge that false information can mislead people even if unintentionally promoted or mistakenly endorsed as the truth. This approach opens the door for certain claims to evolve from accepted information to become misinformation, and vice versa, as a function of a society’s changing consensus. (And, of course, such evolution of evidence can be useful.) We can define misinformation as claims that do not enjoy universal or near-universal consensus as being true at a particular moment in time on the basis of evidence.

Why Misinformation Is Problematic

At least three observations related to misinformation in the contemporary mass-media environment warrant the attention of researchers, policy makers, and indeed anyone who watches television, listens to the radio, or reads information online. First, people who encounter misinformation tend to believe it, at least initially. Second, media systems often do not block or censor many types of misinformation before it appears in content available to large audiences. Third, countering misinformation once it has enjoyed wide exposure can be a resource-intensive effort.

Knowing what happens when people initially encounter misinformation holds tremendous importance for estimating the potential for subsequent problems. Although individuals generally have considerable routine experience encountering information now considered to be false, the question of exactly how—and when—we mentally label information as true or false has garnered philosophical debate.


The dilemma is neatly summarized by a contrast between how the philosophers René Descartes and Baruch Spinoza described human information engagement centuries ago, with conflicting predictions that only recently have been empirically tested in robust ways. Descartes argued that a person only accepts or rejects information after considering its truth or falsehood; Spinoza argued that people accept all encountered information (or misinformation) by default and then subsequently verify or reject it through a separate process. In recent decades, empirical evidence from the research teams of Erik Asp of the University of Chicago and Daniel Gilbert at Harvard University, among others, has supported Spinoza’s account: People appear to encode all new information as if it were true, even if only momentarily, and later tag the information as being either true or false, a pattern that seems consistent with the observation that mental resources for skepticism physically reside in a different part of the brain than the resources used in perceiving and encoding.


We also know from work by one of us (Southwell) and others that people judge source credibility as a cue in determining message acceptability and will turn to others for confirmation of the truth of a claim. If the people surrounding someone tend to initially believe misinformation, then the specter of network reinforcement is raised, meaning that the false claim becomes more difficult to debunk as it is believed by more people.

What about our claim that misinformation often can appear in electronic or print media without being preemptively blocked? One might consider the nature of regulatory structures in countries such as the United States: Regulatory agencies tend to focus on post hoc detection of broadcast information. Organizations such as the U.S. Federal Trade Commission, the Federal Election Commission, and the Food and Drug Administration (FDA) offer considerable monitoring and notification functions, but these roles typically do not involve preemptive censoring. The FDA oversees direct-to-consumer prescription drug advertising, for example, and has developed mechanisms such as the “Bad Ad” program, through which people can report advertising in apparent violation of FDA guidelines on drug risks and benefits presentation.

Such programs, although laudable and useful, do not guarantee that false advertising never appears on the airwaves and, moreover, do not prevent false news stories from appearing. In addition, as shown by one of us (Thorson), even misinformation that is successfully corrected can continue to affect attitudes.

Following on this last point, countering misinformation with new information requires effort not only to develop new content that is understandable but also to ensure adequate message exposure. As Robert Hornik of the University of Pennsylvania has argued, a communication campaign can succeed or fail, at least in part, as a function of exposure or lack thereof.

A campaign to correct misinformation, even if rhetorically compelling, requires resources and planning to accomplish the necessary reach and frequency. For corrective information to be persuasive, audiences need to be able to comprehend it, which requires either framing messages in ways that are understandable or educating and sensitizing audiences to the possibility of misinformation. That audiences might not be aware of the potential for misinformation also suggests the utility of media literacy efforts as early as elementary school. Even with journalists, pundits, and scholars pointing to the phenomenon of “fake news,” people often do not distinguish between demonstrably false stories and those based in fact when scanning and processing information.


We live at a time when widespread misinformation is common. Yet many people are also passionately developing potential solutions and remedies. The journey forward undoubtedly will be a long one. The way in which media systems have developed in many democratic societies inherently entails both a vulnerability to occasional misinformation and a need for robust systems to detect and address it.

Future remedies will require not only continued theoretical consideration but also the development and maintenance of consistent monitoring tools and a willingness among neighbors and fellow members of society to agree that some claims that find prominence on shared airwaves and in widely available media content are insufficiently based in scientific consensus and social reality and should be countered. Misinformation arises as a function of systems structure, human fallibility, and human information needs. To overcome the worst effects of the phenomenon, we will need coordinated efforts over time, rather than any singular, one-time panacea we could hope to offer.


Adapted, with permission, from Misinformation and Mass Audiences, edited by Brian G. Southwell, Emily A. Thorson, and Laura Sheble; © 2018 by the University of Texas Press.

Bibliography

  • Aikin, K. J., et al. 2015. Correction of overstatement and omission in direct-to-consumer prescription drug advertising. Journal of Communication 65:596–618.
  • Asp, E. W., and D. Tranel. 2012. False tagging theory: Toward a unitary account of prefrontal cortex function. In D. T. Stuss and R. T. Knight, eds. Principles of frontal lobe function. New York, NY: Oxford University Press, pp. 383–416.
  • Gilbert, D., R. Tafarodi, and P. Malone. 1993. You can’t not believe everything you read. Journal of Personality and Social Psychology 65:221–233.
  • Lewandowsky, S., U. K. H. Ecker, C. M. Seifert, N. Schwarz, and J. Cook. 2012. Misinformation and its correction: Continued influence and successful debiasing. Psychological Science in the Public Interest 13:106–131.
  • Thorson, E. 2016. Belief echoes: The persistent effects of corrected misinformation. Political Communication 33:460–480.
