
About this Book

This book constitutes the refereed proceedings of the First Multidisciplinary International Symposium, MISDOOM 2019, held in Hamburg, Germany, in February/March 2019.
The 14 revised full papers were carefully reviewed and selected from 21 submissions. The papers are organized in topical sections: Human Computer Interaction and Disinformation, Automation and Disinformation, and Media and Disinformation.

Table of Contents

Frontmatter

Human Computer Interaction and Disinformation

Frontmatter

Human and Algorithmic Contributions to Misinformation Online – Identifying the Culprit

Abstract
In times of massive fake news campaigns in social media, one may ask who is to blame for the spread of misinformation online. Are humans, with their limited capacity for rational self-reflection and responsible information use, guilty because they are the ones falling for the misinformation? Or are the algorithms underlying filter bubble phenomena the cause of the rise of misinformation, particularly in public political discourse? In this paper, we look at both perspectives, examine how each side contributes to the problem of misinformation, and show how the underlying metrics shape the problem.
André Calero Valdez

Between Overload and Indifference: Detection of Fake Accounts and Social Bots by Community Managers

Abstract
In addition to increasing opportunities for citizens to participate in society, participative online journalistic platforms offer opportunities for the dissemination of online propaganda through fake accounts and social bots. Community managers are expected to separate real expressions of opinion from statements manipulated via fake accounts and social bots. However, little is known about the criteria by which managers distinguish between “real” and “fake” users. The present study addresses this gap with a series of expert interviews. The results show that community managers have widespread experience with fake accounts but have difficulty assessing the degree of automation. The criteria by which an account is classified as “fake” can be described along a micro-meso-macro structure: recourse to indicators at the macro level is rare and partly stereotyped, while impression-forming processes at the micro and meso levels predominate. We discuss the results with a view to possible long-term consequences for collective participation.
Svenja Boberg, Lena Frischlich, Tim Schatto-Eckrodt, Florian Wintterlin, Thorsten Quandt

Use and Assessment of Sources in Conspiracy Theorists’ Communities

Abstract
The endemic spread of misinformation online has become a subject of study for many academic disciplines. Part of the emerging literature on this topic has shown that conspiracy theories (CTs) are closely related to this phenomenon. One of the strategies deployed to combat such online misinformation is confronting users with corrective information, often drawn from mainstream media outlets. This study addresses the questions of (I) whether there are online communities that exclusively consume conspiracy theorist media and (II) how these communities use information sources from the mainstream. The results of our explorative, large-scale content analysis show that even in conspiracy theorist communities, mainstream media sources are used in much the same way as sources from the conspiracy theorist media spectrum, and thus do not realize their assumed corrective potential.
Tim Schatto-Eckrodt, Svenja Boberg, Florian Wintterlin, Thorsten Quandt
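
As a rough illustration of the domain-classification step in such a content analysis, the sketch below maps links shared in a community to curated “mainstream” and “conspiracy theorist” media lists; the lists, URLs, and posts are invented placeholders, not the study's material.

```python
# Sketch: classifying shared links against curated media lists.
# Lists, URLs, and posts are invented placeholders, not the study's data.
from collections import Counter
from urllib.parse import urlparse

MAINSTREAM = {"spiegel.de", "nytimes.com"}
CT_MEDIA = {"infowars.com"}

posts = ["https://spiegel.de/article1", "https://infowars.com/story",
         "https://www.nytimes.com/piece", "https://unknown-blog.net/post"]

def classify(url: str) -> str:
    domain = urlparse(url).netloc.removeprefix("www.")
    if domain in MAINSTREAM:
        return "mainstream"
    if domain in CT_MEDIA:
        return "conspiracy theorist"
    return "other"

print(Counter(classify(u) for u in posts))
# Counter({'mainstream': 2, 'conspiracy theorist': 1, 'other': 1})
```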

Credibility Development with Knowledge Graphs

Abstract
Detection of misinformation online requires understanding both the sources and content of information. While a variety of supervised learning methods have been proposed for automated fact checking with respect to the information content of media, the source is usually not taken into account. To address this gap in existing methods, we describe a novel framework for validating online content based on a knowledge graph of media content and an attribution graph of media sources. This approach enables decision makers to identify factual information and supports counter disinformation operations by tracing the spread of disinformation across reliable and unreliable outlets. We have found that tracking knowledge provenance is critical to assessing the credibility of that knowledge. In addition to building a knowledge graph of fact triples (subject, verb, object), we construct an attribution graph composed of links between all extracted facts and their sources on which we apply our main credibility reasoning mechanism, belief propagation. Analysis of credibility based on sources best captures reliable knowledge generation processes such as science, legal trials, and investigative reporting. In these domains there is a process for identifying experts and coming to consensus about the validity of claims to establish facts. Our method models these processes in news media by considering the relations between credible information and reliable sources.
James P. Fairbanks, Natalie Fitch, Franklin Bradfield, Erica Briscoe
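
To illustrate the idea of reasoning over an attribution graph, here is a minimal sketch in Python: facts and sources form a bipartite graph, and a simple fixed-point iteration stands in for the paper's belief propagation. The node names, trusted seed, and update rule are assumptions for illustration, not the authors' implementation.

```python
# Sketch: credibility reasoning over a fact-source attribution graph.
# A plain fixed-point iteration stands in for full belief propagation;
# node names and the trusted seed are assumptions for illustration.
import networkx as nx

G = nx.Graph()
# Fact triples (subject, verb, object) linked to the outlets asserting them
attribution = {"fact_1": ["blog_x"], "fact_2": ["reuters", "blog_x"],
               "fact_3": ["reuters"]}
for fact, sources in attribution.items():
    for src in sources:
        G.add_edge(fact, src)

belief = {n: 0.5 for n in G}   # 1.0 = credible, 0.0 = not
belief["reuters"] = 1.0        # assumed trusted seed source

for _ in range(20):
    # Each node's belief moves toward the mean belief of its neighbors.
    new = {n: sum(belief[m] for m in G[n]) / len(G[n]) for n in G}
    new["reuters"] = 1.0       # keep the seed clamped
    belief = new

print({n: round(b, 2) for n, b in belief.items()})
```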

Automated Detection of Nostalgic Text in the Context of Societal Pessimism

Abstract
In online media environments, nostalgia can be used as an important ingredient of propaganda strategies, specifically by fostering societal pessimism. This work addresses the automated detection of nostalgic text as a first step towards automatically identifying nostalgia-based manipulation strategies. We compare the performance of standard machine learning approaches on this challenge and demonstrate the successful transfer of the best-performing approach to real-world nostalgia detection in a case study.
Lena Clever, Lena Frischlich, Heike Trautmann, Christian Grimme
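
A minimal sketch of the kind of comparison of standard machine learning approaches the abstract describes, using TF-IDF features and two scikit-learn classifiers; the toy texts, labels, and model choices are placeholders, not the paper's corpus or feature set.

```python
# Sketch: comparing standard text classifiers for nostalgic-text detection.
# Toy texts, labels, and models are placeholders, not the paper's setup.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = ["back then everything was better", "new phone released today",
         "I miss the good old days of our town", "markets rose on Monday",
         "remember when summers felt endless", "train delayed by an hour"]
labels = [1, 0, 1, 0, 1, 0]  # 1 = nostalgic (placeholder annotations)

for clf in (LogisticRegression(max_iter=1000), LinearSVC()):
    pipeline = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), clf)
    scores = cross_val_score(pipeline, texts, labels, cv=3, scoring="f1")
    print(type(clf).__name__, round(scores.mean(), 2))
```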

What Is Abusive Language?

Integrating Different Views on Abusive Language for Machine Learning
Abstract
Abusive language has been corrupting online conversations since the inception of the internet. Substantial research effort has been put into the investigation and algorithmic resolution of the problem. Different aspects such as “cyberbullying”, “hate speech”, or “profanity” have undergone ample investigation; however, the vocabulary used is often inconsistent, with terms such as “offensive language” or “harassment” applied interchangeably. This has led to a state of confusion within the research community. The inconsistency can be considered an inhibitor for the domain: it increases the risk of unintentional redundant work and leads to undifferentiated machine learning classifiers that are hard to use and to justify. To remedy this, the paper introduces a novel configurable, multi-view approach to defining abusive language concepts.
Marco Niemann, Dennis M. Riehle, Jens Brunk, Jörg Becker
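
One way to picture a configurable, multi-view definition is as a label schema in code. The sketch below is an assumption about how such a definition could be expressed; the view names and matching rule are illustrative, not the taxonomy proposed in the paper.

```python
# Sketch: "abusive language" as a configurable set of views. View names
# and the matching rule are illustrative, not the paper's taxonomy.
from dataclasses import dataclass, field

@dataclass
class AbuseView:
    name: str              # e.g. "hate speech", "profanity", "cyberbullying"
    target_required: bool  # does this view require an explicit target?

@dataclass
class AbuseDefinition:
    views: list = field(default_factory=list)

    def label(self, annotations: dict) -> bool:
        # A comment is abusive under this definition if any configured
        # view matches the annotations, honoring each view's constraints.
        return any(
            annotations.get(view.name, False)
            and (not view.target_required
                 or annotations.get("has_target", False))
            for view in self.views
        )

definition = AbuseDefinition(views=[
    AbuseView("hate speech", target_required=True),
    AbuseView("profanity", target_required=False),
])
print(definition.label({"profanity": True}))                         # True
print(definition.label({"hate speech": True, "has_target": False}))  # False
```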

Automation and Disinformation

Frontmatter

Detecting Malicious Social Bots: Story of a Never-Ending Clash

Abstract
Recently, studies on the characterization and detection of social bots have been published at an impressive rate. Looking back at over ten years of research and experimentation on social bot detection, in this paper we aim to understand past, present, and future research trends in this crucial field. In doing so, we discuss one of the nastiest features of social bots: their evolutionary nature. We then highlight the shift from supervised bot detection techniques – focused on feature engineering and on the analysis of one account at a time – to unsupervised ones, where the focus is on proposing new detection algorithms and on analyzing groups of accounts that behave in a coordinated and synchronized fashion. These unsupervised, group-analysis techniques currently represent the state of the art in social bot detection. Going forward, we analyze the latest research trend in social bot detection in order to highlight a promising new development of this crucial field.
Stefano Cresci
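
A minimal sketch of the group-analysis paradigm the survey describes: accounts whose behavioral profiles are nearly identical are clustered without labeled training data. The hourly-activity features, synthetic data, and DBSCAN parameters are illustrative assumptions, not a method from the paper.

```python
# Sketch: unsupervised, group-based detection. Accounts with near-identical
# behavioural profiles cluster together; features and parameters are
# illustrative assumptions.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
humans = rng.poisson(2.0, size=(50, 24))                 # diverse hourly activity
bots = np.tile(rng.poisson(5.0, size=(1, 24)), (10, 1))  # synchronized accounts
profiles = np.vstack([humans, bots]).astype(float)
profiles /= profiles.sum(axis=1, keepdims=True)          # activity distributions

# Near-duplicate profiles form dense clusters; isolated humans become noise.
clusters = DBSCAN(eps=0.05, min_samples=5).fit_predict(profiles)
suspicious = [int((clusters == c).sum()) for c in set(clusters) if c != -1]
print("coordinated group sizes:", suspicious)
```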

The Markets of Manipulation: The Trading of Social Bots on Clearnet and Darknet Markets

Abstract
Since the Brexit vote and the 2016 U.S. election, there has been much speculation about the use of so-called social bots – (semi-)automated pseudo-users in online media – as political manipulation tools. Accumulating global evidence shows that pseudo-users are used for different purposes, such as the amplification of political topics or the simulation of large numbers of followers. Social bots, as a (semi-)automated pseudo-user type, are part of a larger infrastructure entailing, among other things, network access, fake accounts, and hosting services. Users and providers of social bots and their infrastructure can differ. Thus, it is plausible that a digital goods market has emerged for the exchange of social bots and infrastructure components. The present study used an ethnographic approach to examine the accessibility, availability, and prices of pseudo-users and social bots on markets in the (German- and English-language) Clearnet and Darknet. The results show that an infrastructure for digital manipulation is widely available online, and that the tools for artificially amplifying content or connectedness are easily accessible to lay users and cheap on both Clearnet and Darknet markets.
Lena Frischlich, Niels Göran Mede, Thorsten Quandt

Inside the Tool Set of Automation: Free Social Bot Code Revisited

Abstract
Social bots have recently gained attention in the context of public opinion manipulation on social media platforms. While a lot of research effort has been put into the classification and detection of such automated programs, it is still unclear how technically sophisticated those bots are, which platforms they target, and where they originate from. To answer these questions, we gathered repository data from open source collaboration platforms to identify the status quo of social bot development and to gain first insights into the overall sophistication of publicly available bot code.
Dennis Assenmacher, Lena Adam, Lena Frischlich, Heike Trautmann, Christian Grimme
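
Collecting such repository metadata could, for example, start from the public GitHub search API, as in the sketch below; the query string and inspected fields are assumptions about the collection step, not the authors' actual pipeline.

```python
# Sketch: gathering repository metadata on public bot code via the GitHub
# search API. The query and inspected fields are assumptions, not the
# authors' pipeline.
import requests

response = requests.get(
    "https://api.github.com/search/repositories",
    params={"q": "twitter bot", "sort": "stars", "per_page": 5},
    headers={"Accept": "application/vnd.github+json"},
    timeout=10,
)
response.raise_for_status()
for repo in response.json()["items"]:
    # Language and popularity hint at where and how bot code is developed.
    print(repo["full_name"], repo["language"], repo["stargazers_count"])
```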

Analysis of Account Engagement in Onsetting Twitter Message Cascades

Abstract
In this work we investigate the engagement of Twitter accounts in the starting phase of reaction cascades, i.e., in the follow-up stream of an original tweet. In a first case study, we focus on a selection of very popular Twitter users from politics and society. We find a small but constantly active set of seemingly automated accounts in the onset of cascades that may contribute to the multiplication of content, especially for well-known populist politicians.
Philipp Kessling, Christian Grimme
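
A minimal sketch of how onset engagement can be measured: count how often each account appears within the first minute of a cascade and flag accounts present in nearly every onset. The flat record layout, 60-second window, and 80% threshold are illustrative assumptions.

```python
# Sketch: flagging accounts that are constantly active in cascade onsets.
# The record layout, onset window, and threshold are assumptions.
from collections import Counter

# (cascade_id, account, seconds after the original tweet)
replies = [("c1", "bot_a", 3), ("c1", "user_1", 400), ("c2", "bot_a", 5),
           ("c2", "user_2", 90), ("c3", "bot_a", 2), ("c3", "user_3", 700)]
ONSET_SECONDS = 60

onset_counts = Counter(acc for _, acc, dt in replies if dt <= ONSET_SECONDS)
n_cascades = len({cid for cid, _, _ in replies})

for account, hits in onset_counts.items():
    if hits / n_cascades > 0.8:  # present in the onset of almost every cascade
        print(f"{account}: in the onset of {hits}/{n_cascades} cascades")
```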

Media and Disinformation

Frontmatter

How Facebook and Google Accidentally Created a Perfect Ecosystem for Targeted Disinformation

Abstract
Online platforms providing information and media content follow certain goals and optimize for certain metrics when deploying automated decision-making systems to recommend pieces of content from the vast amount of media items uploaded to or indexed by their platforms every day. These optimization metrics differ markedly from, for example, the so-called news factors journalists traditionally use to make editorial decisions. Social networks, video platforms, and search engines thus create content hierarchies that reflect not only user interest but also their own monetization goals. This sometimes has unintended and societally highly problematic effects: optimizing for metrics like dwell time, watch time, or “engagement” can promote disinformation and propaganda content. This chapter provides examples and discusses the relevant mechanisms and interactions.
Christian Stöcker
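
The core mechanism can be caricatured in a few lines: a ranker optimizing for watch time surfaces different content than one scoring editorial quality. The toy items and single-metric ranking below are, of course, a drastic simplification of real recommender systems.

```python
# Sketch: a single-metric "engagement" ranking versus an editorial one.
# Toy values; real recommender systems are vastly more complex.
items = [
    {"title": "sober policy explainer", "watch_time_s": 45, "news_value": 0.9},
    {"title": "outrage conspiracy clip", "watch_time_s": 310, "news_value": 0.1},
]

top_by_engagement = max(items, key=lambda item: item["watch_time_s"])
top_by_news_value = max(items, key=lambda item: item["news_value"])
print("engagement optimization surfaces:", top_by_engagement["title"])
print("editorial criteria surface:", top_by_news_value["title"])
```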

Between Mainstream and Alternative – Co-orientation in Right-Wing Populist Alternative News Media

Abstract
Alternative news media with a right-wing populist leaning are flourishing. They pitch themselves as an opposition to a mainstream news media system they interpret as hegemonic, yet at the same time they rely on the very media they criticize to justify their own existence. Using a co-orientation framework, the current study asked to what extent right-wing populist alternative news media orient themselves towards the mainstream. Based on a qualitative content analysis of all 658 websites referenced by a popular right-wing conspiracy-theoretical YouTuber in Germany, we demonstrate that distinct source types were quoted, ranging from mainstream news media to ultra-right-wing truther blogs. A quantitative examination of the content-analytical categories confirmed significant differences between mainstream news media and right-wing populist blogs, with special interest and alternative news media ranging between these poles. Overall, alternative news media were found to orient themselves strongly towards the mainstream in style, but less so in content selection. In particular, the top sources, accounting for over 76% of all references, were mostly rooted in the alternative ultra-right-wing ecosystem. In sum, our analyses show how stylistic co-orientation is used to build a bridge towards the mainstream, while content-related co-orientation towards other ultra-right-wing alternative sources allows for validating one’s own right-wing populist worldview.
Lena Frischlich, Johanna Klapproth, Felix Brinkschulte
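
The quantitative step, computing each source type's share of all references, amounts to a simple frequency analysis, sketched below with invented placeholder counts rather than the paper's data.

```python
# Sketch: shares of reference types. Counts are invented placeholders,
# not the paper's data.
from collections import Counter

references = (["mainstream"] * 80 + ["special interest"] * 100 +
              ["alternative right-wing"] * 400 + ["truther blog"] * 78)
counts = Counter(references)
shares = {source: count / len(references) for source, count in counts.items()}
print({source: f"{share:.0%}" for source, share in shares.items()})
```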

Maintaining Journalistic Authority

The Role of Nigerian Newsrooms in “Post-truth Era”
Abstract
This study provides insight into the practice of fact-checking in Nigerian newsrooms. Theoretically, it draws upon the ability of the Nigerian media to maintain their journalistic authority in this supposed “post-truth era”. Using the 2019 Nigerian presidential elections as a lens, the study applies qualitative thematic analysis to 28 fact-checked election stories by 15 Nigerian newsrooms under the aegis of CrossCheck Nigeria. It is guided by the overarching question: how do Nigerian newsrooms maintain their journalistic authority? Findings show that the Nigerian media maintain their journalistic authority through the following means: technological expertise, access to sources and spokespersons of real-life events, and mastery of knowledge. The study shows how fact-checking activities by the media can maintain journalistic authority.
Kelechi Okechukwu Amakoh

State Propaganda on Twitter

How Iranian Propaganda Accounts Have Tried to Influence the International Discourse on Saudi Arabia
Abstract
In recent years, a variety of studies have discussed the use of social media in the context of misinformation, fake news, and the manipulation of public opinion. Based on two data sets published by Twitter, comprising more than 1.7 million English-language tweets, this study focuses on the question of whether Iranian propaganda accounts tried to influence the international online debate on the country’s biggest rival, Saudi Arabia. The rivalry between the two countries is an ongoing struggle deeply rooted in a regional, geopolitical, ideological, and partly religious conflict. An analysis of the tweets published by accounts believed to be connected to Iranian state-backed information operations shows that they tried to establish an anti-Saudi narrative on Twitter. Different strategies, including the spread of biased hashtags and the retweeting of internal and external propaganda sources, were used to promote their agenda. The propaganda activity concerning Saudi Arabia was especially pronounced during specific time intervals, correlating with political events, but regularly failed to manipulate the international discourse. Although some content that mentioned Saudi Arabia negatively was actively retweeted, the vast majority did not influence the social media debate on the Gulf state.
Bastian Kießling, Jan Homburg, Tanja Drozdzynski, Steffen Burkhardt
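
Analyses of this kind, monthly activity bursts and hashtag frequencies in the released tweet sets, can be sketched with pandas as below; the column names and rows are placeholders shaped like Twitter's information-operations releases, not the actual data.

```python
# Sketch: activity bursts and hashtag frequencies in released tweet data.
# Column names and rows are placeholders, not the actual data set.
import pandas as pd

tweets = pd.DataFrame({
    "tweet_time": pd.to_datetime(["2017-06-05", "2017-06-07", "2018-10-15"]),
    "tweet_text": ["#SaudiArabia spreads chaos", "the weather is nice today",
                   "Khashoggi case shames the #Saudi regime"],
})
saudi = tweets[tweets["tweet_text"].str.contains("saudi", case=False)]

# Monthly volume reveals the bursts that correlate with political events.
print(saudi.groupby(saudi["tweet_time"].dt.to_period("M")).size())

# The most common hashtags in the anti-Saudi subset.
print(saudi["tweet_text"].str.findall(r"#\w+").explode().value_counts())
```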

Backmatter
