2024 | Book

Artificial Misinformation

Exploring Human-Algorithm Interaction Online

About this book

This book serves as a guide to understanding the dynamics of AI in human contexts, with a specific focus on the generation, sharing, and consumption of misinformation online. How do humans and AI interact? How is AI shaping our understanding of ourselves and our societies? What interaction mechanisms govern how humans and algorithms contribute to misinformation online? And how do we bridge the gap between ethical considerations and practical realities to build responsible, reliable systems? Exploring these questions, the book empowers humans to make AI design choices that give them meaningful control over AI and the online sphere. Calling for an interdisciplinary approach to human-misinformation algorithmic interaction, one focused on building methods and tools that deal robustly with complex psychological and social phenomena, the book offers compelling insight into the future of an AI-based society.

Table of Contents

Frontmatter

The Cognitive Science of Misinformation: Why We Are Vulnerable, and How Misinformation Beliefs Are Formed/Maintained

Frontmatter
Chapter 1. Introduction: The Epistemology of Misinformation—How Do We Know What We Know
Abstract
The epidemic of misinformation has been identified as one of the most significant concerns in contemporary society. Misinformation is on the rise, and artificial intelligence (AI) appears to be its primary conduit. AI is a double-edged sword: it can help counter misinformation, but it can also make the problem worse. Advances in machine learning and algorithms have produced a highly effective method for conveying misinformation. In the midst of a "misinfodemic," the psychology of misinformation (the existing biases, mental shortcuts, illusions, and confusions that encourage us to believe information that is not true) can tell us what misinformation is, why we are vulnerable to it, what affects whether corrections work, and what we must do to combat it and prevent its detrimental effects. Misinformation concerns and potential methods for mitigating those threats can be discussed in terms of cognitive processes connected to perception, understanding, heuristics, sensemaking, cognitive processing, and decision-making.
Donghee Shin
Chapter 2. Misinformation and Algorithmic Bias
Abstract
What happens if the data fed to AI are biased? What happens if a chatbot's response spreads misinformation? Contrary to what many people hope, AI can be as biased as humans are. Bias can originate from many sources, including but not limited to the design and unintended or unanticipated use of an algorithm, or algorithmic decisions about the way data are coded, framed, filtered, or analyzed to train machine learning. Algorithmic bias has been widely observed in advertising, content recommendations, and search engine results. Algorithmic prejudice has been found in cases ranging from political campaign outcomes to the proliferation of fake news and misinformation. It has also surfaced in health care, education, and public services, aggravating existing social, socioeconomic, and political biases. These algorithm-induced biases can exert negative effects on a range of social interactions, from unintended privacy infringements to the solidification of societal biases of gender, race, ethnicity, and culture. The significance of the data used to train algorithms should not be underestimated. Humans should play a part in the datafication of algorithms, because preventing the spread of misinformation through technology alone is difficult, especially considering the rate at which information spreads online.
Donghee Shin
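
The "bias in, bias out" mechanism this abstract describes can be pictured with a minimal sketch. The dataset, group names, and labels below are fabricated for illustration and are not drawn from the book; a naive majority-label "model" trained on skewed human labels simply reproduces the skew:

from collections import Counter

# Fabricated training set: (topic, label) pairs in which human labelers
# flagged posts about one group as "suspect" far more often than the other.
training = [("group_a", "suspect")] * 80 + [("group_a", "ok")] * 20 \
         + [("group_b", "suspect")] * 20 + [("group_b", "ok")] * 80

def majority_label(topic):
    # A deliberately naive "model": predict whichever label the
    # training data most often attached to this topic.
    labels = Counter(lbl for t, lbl in training if t == topic)
    return labels.most_common(1)[0][0]

print(majority_label("group_a"))  # -> "suspect": the labelers' skew survives training
print(majority_label("group_b"))  # -> "ok"

Real systems fail the same way at larger scale: the model is faithful to its training data, including the data's prejudices.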
Chapter 3. Misinformation, Extremism, and Conspiracies: Amplification and Polarization by Algorithms
Abstract
Misinformation can be a direct cause of radicalization because of its tendency to trigger strong emotions. Aggressive messages that arouse anxiety can be highly persuasive: messages that point to a threat, particularly one that is sensitive and socially charged, create a cognitive drive for more content about that threat and generate support for responsive action. This chapter critically examines the role that social media algorithms play in recommending extreme content. TikTok's role in fostering radicalized content is examined by tracing how users become radicalized on the platform and how its recommendation algorithms drive this radicalization. The chapter identifies the social, technological, and psychological factors that contribute to the radicalization of ideological biases on social media and proposes a conceptual lens through which to analyze and predict such radicalization. The results reveal that the pathways by which users access far-right content are manifold and that a large part of this access can be ascribed to platform recommendations operating through a positive feedback loop. The results are consistent with the proposition that the generation and adoption of extreme content on TikTok largely reflect the user's input and interaction with the platform. It is argued that some features of misinformation are likely to promote radicalization among users. The chapter concludes by discussing how trends in artificial intelligence (AI)-based content systems are forged by an intricate combination of user interactions, platform intentions, and the dynamics of the broader AI ecosystem.
Donghee Shin
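
The positive feedback loop the abstract describes can be illustrated with a toy simulation. This is not the chapter's actual model: the extremity scores, engagement function, and update rule below are invented assumptions. A recommender that up-weights whatever users engage with drifts toward extreme content whenever engagement correlates with extremity:

import random

# Toy model of a recommendation feedback loop. All parameters
# (extremity scores, engagement bias, 1.01 learning rate) are
# illustrative assumptions, not values from the book.
items = [i / 10 for i in range(11)]    # content extremity: 0.0 (mild) .. 1.0 (extreme)
weights = {x: 1.0 for x in items}      # recommender preference weights, initially uniform

def recommend():
    # Sample an item with probability proportional to its weight.
    total = sum(weights.values())
    r = random.uniform(0, total)
    for item, w in weights.items():
        r -= w
        if r <= 0:
            return item
    return items[-1]

def engages(extremity, bias=2.0):
    # Assumption: emotionally charged (extreme) content draws more engagement.
    return random.random() < extremity ** (1 / bias)

random.seed(42)
for _ in range(5000):
    item = recommend()
    if engages(item):
        weights[item] *= 1.01          # positive feedback: engagement boosts future exposure

avg = sum(x * w for x, w in weights.items()) / sum(weights.values())
print(f"weight-averaged extremity after the loop: {avg:.2f}")  # drifts well above the 0.5 midpoint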

How People View and Process Misinformation: How People Respond to Corrections of Misinformation

Frontmatter
Chapter 4. Misinformation, Paradox, and Heuristics: An Algorithmic Nudge to Counter Misinformation
Abstract
No one is completely immune to misinformation, because of how human cognition is built and how misinformation takes advantage of it. Using nudges to steer users toward fact-checking information is often much more effective than merely detecting misinformation. This chapter presents empirical work on the design of nudge interventions in the context of misinformation. Applying the nudge principle to misinformation, it suggests that the cognitive biases humans are vulnerable to can be leveraged in the design of algorithmic interventions that reduce the consumption and spread of misinformation. The findings from an experiment reveal significant main and interaction effects, indicating that algorithmic source effects are present in the process of nudge sensemaking. Misinformation sharing intention was generally lower for nonalgorithmic news than for algorithm-based news, but the drop was greater for algorithmic news when nudging was employed. Moderation by algorithmic trust was also found: users' trust in algorithmic media amplified the nudge effect for news from algorithmic media but not for news from nonalgorithmic online media sources. These results confirm previous literature underlining the role of nudging in influencing news sharing. Source credibility affects misinformation sharing on social media, and nudge credibility encourages users to discern and acknowledge misinformation. The findings contribute design implications for nudging interventions in the context of misinformation, as well as prototypes of a range of nudging mechanisms aimed at evaluating their proximal effects on human behavior in AI contexts.
Donghee Shin
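
The reported pattern, a main effect of news source plus a source-by-nudge interaction, corresponds to a standard 2 x 2 design. Below is a minimal sketch of how such an interaction contrast is computed; the cell means are invented to mimic the qualitative pattern and are not the study's numbers:

# Hypothetical cell means for sharing intention (e.g., a 1-7 scale) in a
# 2x2 design: source (algorithmic vs. nonalgorithmic) x intervention
# (nudge vs. no nudge). Values are fabricated for illustration only.
means = {
    ("algorithmic", "no_nudge"): 5.2,
    ("algorithmic", "nudge"): 3.1,        # large drop under nudging
    ("nonalgorithmic", "no_nudge"): 4.0,
    ("nonalgorithmic", "nudge"): 3.4,     # smaller drop
}

drop_algo = means[("algorithmic", "no_nudge")] - means[("algorithmic", "nudge")]
drop_nonalgo = means[("nonalgorithmic", "no_nudge")] - means[("nonalgorithmic", "nudge")]
interaction = drop_algo - drop_nonalgo    # nonzero -> the nudge effect depends on the source

print(f"nudge effect (algorithmic source):    {drop_algo:.1f}")
print(f"nudge effect (nonalgorithmic source): {drop_nonalgo:.1f}")
print(f"interaction contrast:                 {interaction:.1f}")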
Chapter 5. Misinformation Processing Model: How Users Process Misinformation When Using Recommender Algorithms
Abstract
The diffusion of misinformation has garnered considerable attention in our society. Because algorithms are considered one of the major drivers behind the spread and amplification of misinformation, it is useful to understand how these algorithms affect misinformation sharing and the manner in which misinformation spreads. This chapter examines the psychological, cognitive, and social factors involved in how people process the misinformation they receive through algorithms and artificial intelligence. Modeling cognitive processes has long been of interest for understanding user reasoning, and many theories from different fields have been formalized into cognitive models. Drawing on theoretical insights from information processing theory and the concept of diagnosticity, the chapter examines how perceived normative values influence a user's perceived diagnosticity and likelihood of sharing information, and whether explainability further moderates this relationship. The findings show that users with high heuristic processing of normative values and positive diagnostic perceptions were more likely to proactively discern misinformation. Users with a high cognitive ability to understand information were more likely to discern it correctly and less likely to share misinformation online. When users are exposed to misinformation through algorithmic recommendations, their perceived diagnosticity of that misinformation can be predicted accurately from their understanding of normative values. This perceived diagnosticity, in turn, positively influences their assessments of the accuracy and credibility of the misinformation. With this focus on misinformation processing, the chapter provides theoretical insights and relevant recommendations for firms seeking to become more resilient in protecting themselves from the detrimental impact of misinformation.
Donghee Shin
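
The moderation structure sketched in the abstract (normative values predict perceived diagnosticity, and explainability moderates the diagnosticity-to-sharing link) can be written as an ordinary moderated regression. All coefficients and function names below are hypothetical placeholders, not estimates from the chapter:

# Hypothetical moderated-regression form of the processing model above.
# b0..b3 are placeholder coefficients; a real analysis would estimate them
# (e.g., with logistic regression to keep outputs bounded).

def perceived_diagnosticity(normative_values: float, b0=0.5, b1=0.8) -> float:
    # Assumption: users scoring higher on normative values perceive
    # content as more diagnostic.
    return b0 + b1 * normative_values

def sharing_score(diagnosticity: float, explainability: float,
                  b0=0.9, b1=-0.3, b2=-0.1, b3=-0.2) -> float:
    # Higher diagnosticity lowers misinformation sharing; explainability
    # strengthens that effect via the interaction term (b3).
    return b0 + b1 * diagnosticity + b2 * explainability + b3 * diagnosticity * explainability

d = perceived_diagnosticity(normative_values=0.9)
print(f"{sharing_score(d, explainability=0.0):.2f}")  # without explanations: 0.53
print(f"{sharing_score(d, explainability=1.0):.2f}")  # with explanations: 0.19 (larger reduction)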

How to Combat Misinformation Online Amid Growing Concerns and Build Trust

Frontmatter
Chapter 6. Misinformation and Diversity: Nudging Away from Misinformation, Nudging Toward Diversity
Abstract
This chapter introduces the principle of diversity-aware AI and discusses the need to develop recommendation models that embed diversity awareness in AI to mitigate misinformation. Free and plural ideas are key to addressing misinformation and informing users. A key indicator of a healthy online ecosystem is the presence of diverse ideas and perspectives. Exposure to diverse sources of news promotes tolerance, social cohesion, and the harmonious accord of different ideologies, perspectives, and cultures. Diversity in news recommender systems (NRS) is perceived as a major issue for the preservation of healthy democratic discourse. In light of this importance, the chapter proposes a conceptual framework for personalized recommendation nudges that can promote diverse news consumption on online platforms. It empirically tests the effects of algorithmic nudges by examining how users make sense of them and how nudges influence users' views on personalization and attitudes toward news diversity. The findings show that algorithmic nudges play a key role in users' understanding of normative values in NRS, which in turn influences their intention to consume diverse news. The findings also point to a personalization paradox: personalized news recommendations can both enhance and decrease user engagement with the systems. This paradox provides conceptual and operational bases for diversity-aware NRS design that enhances both the diversity and the personalization of news recommendations. The chapter proposes a conceptual framework linking algorithmic nudges and news diversity, and from there develops theoretically grounded paths for facilitating diversity and inclusion in NRS.
Donghee Shin
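
One common way to operationalize diversity-aware recommendation is maximal marginal relevance (MMR) re-ranking, which trades personalized relevance against topical diversity. This is a generic technique offered as a sketch, not necessarily the NRS design the chapter proposes; the articles, relevance scores, and topic-overlap measure are toy choices:

# Minimal MMR-style re-ranking sketch for a diversity-aware news recommender.
# Items, scores, and the Jaccard overlap measure are illustrative assumptions.

def topic_overlap(a: dict, b: dict) -> float:
    # Jaccard similarity over topic tags as a crude similarity measure.
    union = a["topics"] | b["topics"]
    return len(a["topics"] & b["topics"]) / len(union) if union else 0.0

def rerank(candidates: list, k: int = 3, lam: float = 0.6) -> list:
    # lam = 1.0 -> pure personalization; lam = 0.0 -> pure diversity.
    selected, pool = [], list(candidates)
    while pool and len(selected) < k:
        def mmr(item):
            max_sim = max((topic_overlap(item, s) for s in selected), default=0.0)
            return lam * item["relevance"] - (1 - lam) * max_sim
        best = max(pool, key=mmr)
        selected.append(best)
        pool.remove(best)
    return selected

articles = [
    {"id": "a", "relevance": 0.9, "topics": {"politics", "election"}},
    {"id": "b", "relevance": 0.8, "topics": {"politics", "election"}},
    {"id": "c", "relevance": 0.5, "topics": {"science", "climate"}},
    {"id": "d", "relevance": 0.4, "topics": {"culture", "arts"}},
]
print([x["id"] for x in rerank(articles)])  # ['a', 'c', 'd']: diversity pulls c and d ahead of near-duplicate b

The lam parameter is exactly the personalization-versus-diversity dial the paradox concerns: turning it down diversifies the feed at some cost to relevance.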
Chapter 7. Misinformation, Paradox, and Nudge: Combating Misinformation Through Nudging
Abstract
The rapid spread of misinformation online can be attributed to biases in human decision-making that are facilitated by algorithmic processes. The field of human-computer interaction has contributed an understanding of such biases and of how they can be addressed through the design of system interventions. For example, the principle of nudging refers to subtle modifications of the choice architecture that can change user behavior in desired or directed ways. This chapter discusses the design of nudging interventions in the context of misinformation, including a systematic review of the use of nudging in human-AI interaction that leads to a design framework. Because the underlying algorithms work invisibly, nudges can be delivered to individuals within misinformation contexts, and their effectiveness can be tracked and tuned as the algorithm improves from feedback based on each user's behavior. The chapter explores the potential of nudging to decrease the chances of users consuming and spreading misinformation. The key questions are how to ensure that algorithmic nudges are used effectively and whether nudges could also help achieve a sustainable way of life. The chapter discusses the principles and dimensions of the nudging effects of AI systems on user behavior in response to misinformation.
Donghee Shin
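
The abstract's idea that a nudge's effectiveness "can be tracked and tuned as the algorithm improves from feedback" maps naturally onto a bandit-style loop. Below is a minimal epsilon-greedy sketch under invented assumptions: the three nudge designs and their "true" effectiveness rates are hypothetical, and the book describes the feedback-tuning idea, not this specific algorithm:

import random

# Epsilon-greedy selection among hypothetical nudge designs.
NUDGES = ["accuracy_prompt", "source_label", "friction_delay"]
TRUE_RATE = {"accuracy_prompt": 0.30, "source_label": 0.20, "friction_delay": 0.25}

counts = {n: 0 for n in NUDGES}   # times each nudge was shown
wins = {n: 0 for n in NUDGES}     # times it led the user to fact-check (simulated)

def choose(eps=0.1):
    if random.random() < eps:     # explore occasionally
        return random.choice(NUDGES)
    # otherwise exploit the best observed rate so far
    return max(NUDGES, key=lambda n: wins[n] / counts[n] if counts[n] else 0.0)

random.seed(0)
for _ in range(2000):
    n = choose()
    counts[n] += 1
    if random.random() < TRUE_RATE[n]:   # simulated user feedback
        wins[n] += 1

for n in NUDGES:
    rate = wins[n] / counts[n] if counts[n] else 0.0
    print(f"{n:16s} shown {counts[n]:4d} times, observed rate {rate:.2f}")

Over time the loop concentrates impressions on the most effective design, which is the "traced and attuned" behavior the abstract gestures at.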

What Are the Implications of AI for Misinformation? The Challenges and Opportunities When Misinformation Meets AI

Frontmatter
Chapter 8. Misinformation and Inoculation: Algorithmic Inoculation Against Misinformation Resistance
Abstract
AI-enabled services, such as chatbots and generative systems, are often unable to generate correct information in response to user requests, creating user resistance and preventing the smooth diffusion of AI services. Previous research has mostly addressed how to improve AI responses but has failed to consider user resistance to misinformation from AI. Based on inoculation theory and the heuristic-systematic model, this chapter discusses the cognitive mechanisms of inoculation effects in the use of AI chatbots, addressing how users construe inoculation messages and how those messages influence users' resistance to misinformation. How inoculation confers resistance on users has important implications for theory and practice. The chapter finds that inoculation messages alleviate the negative effects of misinformation from AI chatbots on user interaction. It offers a critical perspective on how inoculation theory can be conceptually extended to misinformation and how the theoretical frame can be used in practice.
Donghee Shin
Chapter 9. Misinformation and Generative AI: How Users Construe Their Sense of Diagnostic Misinformation
Abstract
ChatGPT has opened a new front in the fake news wars. This chapter is motivated by the rapidly improving capabilities and accessibility of generative AI and by rapidly growing misinformation problems. Misinformation is by no means a new phenomenon, yet it has been amplified by the emergence of AI. It is useful to see misinformation in the context of a new and rapidly evolving AI landscape, which has facilitated the spread of unparalleled volumes of information at lightning speed. The chapter discusses the misinformation effect by examining how users process and respond to misinformation in generative artificial intelligence (GenAI) contexts. Drawing on the heuristic-systematic model, it examines the factors influencing a user's perceived diagnosticity and likelihood of sharing information, and whether explanatory heuristics moderate this relationship. The findings show that users with high heuristic processing of ethical values and positive diagnostic perceptions were more likely to proactively discern misinformation than users with low heuristic processing and low diagnostic perceptions. When users are exposed to misinformation from GenAI, their construed diagnosticity of the misinformation can be accurately predicted from their understanding of ethical values. With this focus on misinformation processing, the chapter provides theoretical insights and relevant recommendations for firms seeking to become more resilient in protecting users from the detrimental impact of misinformation.
Donghee Shin
Chapter 10. Conclusion: Misinformation and AI—How Algorithms Generate and Manipulate Misinformation
Abstract
The growing prominence of deepfakes over the last several years has triggered an ongoing discussion of authenticity online and of the distinction between fact and fiction. Deepfakes, which use AI-based deep learning to fabricate videos or fake events, are highly realistic synthetic media that can be abused to threaten an organization's brand, to impersonate leaders and financial officers, and to enable access to networks, communications, and sensitive information. The proliferation of deepfakes foreshadows a dubious, uncertain era defined by a fractured geopolitical landscape, ideological echo chambers, and mutual distrust. AI-based machine learning can amplify disinformation rather than dispel it. The future online environment should reflect how a healthy society naturally operates rather than being driven by algorithms that manipulate our attention to boost corporate profits. Although social media represents a legitimate ideal of democratizing information, that endeavor has been hijacked and subverted by algorithmic logic and the ad-driven business model. To fulfill the normative aspiration, AI systems should ensure transparency, provide fair results, establish accountability, and operate under a clearly defined data governance policy.
Donghee Shin
Backmatter
Metadata
Title
Artificial Misinformation
Author
Donghee Shin
Copyright Year
2024
Electronic ISBN
978-3-031-52569-8
Print ISBN
978-3-031-52568-1
DOI
https://doi.org/10.1007/978-3-031-52569-8