1 Introduction
Social media platforms face numerous societal, ethical, and political issues. Online extremism (Spiekermann et al.
2022), disinformation campaigns (Starbird et al.
2019), hate speech (Oksanen et al.
2020), and cyberbullying (Chan et al.
2019) are just a few examples of such social media-related problems. Platforms have responded by implementing various content moderation mechanisms to govern communication on their services (Grimmelmann
Content moderation is a rapidly growing US$9.8 billion market (Bloomberg
2022; Wankhede
2022). Algorithmic content moderation in particular is an increasingly popular approach (Katzenbach
2021). Algorithmic content moderation encompasses platform design decisions that dictate how community members interact with one another and determine who gets to see which content (Duffy and Meisner
2022; Zeng and Kaye
2022). It offers scalable, automated systems that classify user-generated content to inform governance decisions (e.g., removal, geoblocking, account takedown) (Gorwa et al.
2020; Grimmelmann
2015).
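To make this mechanism concrete, the following minimal Python sketch (with hypothetical names and purely illustrative thresholds) shows how a classifier score over user-generated content might be mapped to such governance actions; actual moderation pipelines are far more complex and largely undocumented.

```python
from dataclasses import dataclass

# Purely illustrative policy thresholds (not taken from any real platform).
REMOVE_THRESHOLD = 0.9
GEOBLOCK_THRESHOLD = 0.7
DEMOTE_THRESHOLD = 0.4

@dataclass
class Post:
    post_id: str
    author_id: str
    text: str

def classify(post: Post) -> float:
    """Placeholder for a trained classifier that returns the estimated
    probability that a post violates a given policy."""
    return 0.0  # stub; a real system would call a machine-learning model

def moderation_decision(post: Post) -> str:
    """Map a classifier score to a governance action."""
    score = classify(post)
    if score >= REMOVE_THRESHOLD:
        return "remove"       # delete the content outright
    if score >= GEOBLOCK_THRESHOLD:
        return "geoblock"     # hide the content in certain jurisdictions
    if score >= DEMOTE_THRESHOLD:
        return "demote"       # quietly reduce visibility (cf. shadowbanning)
    return "allow"
```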
The current discussion on content moderation pays little attention to shadowbanning (Gillespie
2022a; Gillespie et al.
2020). Shadowbanning secretly demotes or suppresses visibility of users, content, or groups without alerting the affected entity (Gillespie
2022a). A recent survey of 1,000 U.S. social media users found that about 10% of respondents – typically non-cisgendered, Hispanic, or Republican users report being shadowbanned across all major social media platforms like Facebook, Twitter, Instagram, Reddit, and TikTok (Nicholas
2022). Shadowbanning is the conceptual counterpart to platforms’ amplification of problematic content for the sake of boosting engagement through recommender algorithms (Gillespie
2022a). Social media platforms generally avoid the term shadowbanning when describing their content moderation mechanisms, referring instead to visibility reduction techniques (Gillespie
2022b). Social media platforms have good reasons to use shadowbanning: it allows them to contain unwanted content without releasing information that would help malicious actors (e.g., spam bots) adjust their tactics and avoid detection, to limit access to undesirable content (e.g., suicide-related or pro-eating-disorder material) (Nicholas
2022), or to avoid polarization and public outcry resulting from certain content moderation decisions (Gillespie
2022a).
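The defining property of shadowbanning described above can be illustrated with a minimal, purely hypothetical ranking sketch: posts are silently down-weighted, and the notification step that would accompany an explicit removal or ban never occurs. All names and values below are assumptions for illustration, not a description of any actual platform.

```python
# Hypothetical feed-ranking sketch: posts by shadowbanned authors are
# silently down-weighted, and no notification is ever sent to them.
SHADOWBANNED_AUTHORS = {"author_42"}  # assumed output of a moderation system
DEMOTION_FACTOR = 0.05                # illustrative value

def visible_score(base_score: float, author_id: str) -> float:
    """Return the ranking score that other users' feeds actually use."""
    if author_id in SHADOWBANNED_AUTHORS:
        return base_score * DEMOTION_FACTOR  # demoted, not deleted
    return base_score

def notify_author(author_id: str, action: str) -> None:
    """An explicit ban or removal would trigger a notification."""
    print(f"Dear {author_id}, your content was subject to: {action}")

# Overt moderation: notify_author("author_42", "removal") informs the user,
# who can then appeal. Under shadowbanning, only visible_score() is applied;
# notify_author() is never called, so the author keeps posting into a feed
# that few others ever see.
```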
Shadowbanning is also heavily scrutinized, predominantly for its opacity. It prevents users from correcting or disputing content moderation decisions (Nicholas
2022), leaving users to speculate about whether they have been shadowbanned (Delmonaco et al.
2024) and unclear about the criteria that trigger shadowbanning (Elmimouni et al.
2024). Shadowbanning has been accused of systematic bias against minorities (Duffy and Meisner
2022). In a recent content moderation survey that oversampled marginalized identities (i.e., racial and ethnic minorities, LGBTQ+ people, trans and/or nonbinary people), 21.78% of respondents reported experiencing shadowbans (Delmonaco et al.
2024). Subjects of shadowbanning report mental and emotional harm (Nicholas
2022) ranging from feelings of frustration, sadness (Delmonaco et al.
2024), marginalization, anxiety, and helplessness (Elmimouni et al.
2024), leading to self-censorship, withdrawal from social media, and financial losses (Delmonaco et al.
2024). The potential ramifications of shadowbanning are expected to extend far beyond the silenced individuals or minority groups directly affected. This mechanism is believed to erode trust and confidence in social media platforms, fostering an environment conducive to conspiracy theories (Chen and Zaman
2024). For instance, shadowbanning fuels beliefs that platforms pursue biased agendas, such as aligning with specific governments (e.g., “platforms align with the state of Israel”). Similarly, shadowbanning can exacerbate societal polarization by filtering certain opinions or individuals out of public discourse. This can bias the formation of public opinion, for example, when pro-Palestinian voices were restricted on Facebook during the Israel-Hamas war (Elmimouni et al.
2024) or when TikTok suppressed #BlackLivesMatter and LGBTQ+ content (Delmonaco et al.
2024). Finally, shadowbanning can be weaponized by malicious actors to silence dissenting voices (Nicholas
2022), undermining open dialogue and empowering those who seek to manipulate online discourse.
A balanced and informed discussion of shadowbanning is urgently needed. Related research is still nascent and – to the best of our knowledge – absent in the field of Information Systems (IS). The objective of this article is to introduce IS practitioners and researchers to shadowbanning. In doing so, we aim to make three key contributions. First, we contribute to the emergent literature that raises awareness of this opaque content moderation mechanism (Gillespie
2022a). Shadowbanning complements existing IS research on content moderation beyond the more commonly discussed forms of annotating (He et al.
2024; Kim and Dennis
2019; Kim et al.
2019), banning (Russo et al.
2023), blocking (McDonald
2022), and deplatforming (Keller
2019). But also other related IS research on platform governance (Halckenhaeusser et al.
2020), algorithmic audiencing (Riemer and Peter
2021), and algorithmic control (Benlian et al.
2022) ought to consider the role of algorithms in secretly demoting content – in addition to their augmenting, amplifying, and serendipitous effects (Milli et al.
2023). Second, we offer conceptual clarity on what constitutes shadowbanning. Building a common understanding of shadowbanning ought to help bridge the gap between social media platforms that avoid the term shadowbanning (Gillespie
2022b), users who speculate whether they have been subjected to this practice (Elmimouni et al.
2024), researchers who try to investigate this phenomenon (Jaidka et al.
2023), and lawmakers who need to understand this mechanism to devise meaningful regulation (Nicholas
2023). Third, we outline ways in which information systems research, with its focus on sociotechnical systems, can help inform the broader conversation around content moderation, censorship, and freedom of expression, thereby contributing to society and making the online environment safer for everyone (Sarker et al.
2019; Spiekermann et al.
2022).
4 Outlook
Social media platforms use algorithms to control user attention, a key resource in our increasingly digital world (Zeng and Kaye
2022). Much has been written on how these algorithms are designed to maximize user engagement by promoting controversial or provocative content on the fringes of mainstream discourse (Zuckerberg
2021), and on whether these algorithms induce societal polarization (Bakshy et al.
2015; Guess et al.
2023; Robertson et al.
2023), promote online extremism (Risius et al.
2024), or form filter bubbles and echo chambers (Bruns
2021). Meanwhile, the opposite use of algorithms to demote, hide, and reduce the visibility of content is mostly disregarded (Gillespie
2022a). Shadowbanning reduces or suppresses the visibility and reach of content, users, or groups without notifying the affected party. These algorithms enable platforms to covertly curate content by demoting and hiding content or users instead of overtly blocking or deleting them (Merrer et al.
2021). Shadowbanning has been found to disadvantage marginalized groups and has severe ramifications for individuals, communities, and society (Nicholas
2022). This article aims to change the outlook for shadowbanning in three respects.
First, given its opaque character, we aim to raise awareness of the issue of shadowbanning among the public, researchers, and regulators. We argue that shadowbanning should be part of the current conversations around (algorithmic) content moderation (Gorwa et al.
2020; Grimmelmann
2015), algorithmic audiencing and free speech (Riemer and Peter
2021), algorithmic biases (Spiekermann et al.
2022), and algorithmic control (Benlian et al.
2022).
Second, users who report shadowbanning are often met with “black box gaslighting” (Cotter
2021). This article compiles various forms of evidence for the prevalence of shadowbanning. We therefore join calls to move past the red herring question of whether shadowbanning exists (Gillespie
2022a,
2023; Nicholas
2023). While we recognize the importance of developing ways to detect shadowbanning, we need to expand the focus on its societal implications, ethical considerations of (non)acceptable shadowbanning, and its inherent trade-offs (e.g., between transparency vs. opacity, level of user activity vs. quality of content, or nurturing vs. punishing content moderation) (Jiang et al.
2023).
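Detection approaches in the literature typically compare the visibility of an account's content across different vantage points, for example, what the account owner sees versus what a logged-out or unrelated user can retrieve. The sketch below illustrates this idea at a high level; fetch_as_author and fetch_as_public are hypothetical stand-ins for platform-specific data collection (e.g., crawling public search results or gathering data donations), not real APIs.

```python
from typing import Set

def fetch_as_author(author_id: str) -> Set[str]:
    """Hypothetical: IDs of the author's recent posts as seen from
    their own, logged-in perspective."""
    raise NotImplementedError  # platform-specific data collection or donation

def fetch_as_public(author_id: str) -> Set[str]:
    """Hypothetical: IDs of the same author's posts retrievable by an
    unrelated or logged-out user, e.g., via public search or reply threads."""
    raise NotImplementedError

def shadowban_indicator(author_id: str) -> float:
    """Share of the author's own posts that are invisible to the public view.

    A value close to 1.0 is consistent with (but does not prove) a shadowban;
    deletions, rate limits, and caching can all produce false positives.
    """
    own = fetch_as_author(author_id)
    public = fetch_as_public(author_id)
    if not own:
        return 0.0
    return len(own - public) / len(own)
```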
Third, shadowbanning lacks conceptual clarity, which facilitates politicization and conspiratorial theorizing (Gillespie
2022b; Nicholas
2023). This allows platforms to misconstrue and then deny shadowbanning (Cotter
2021). It also allows lawmakers to use shadowbanning for weaponizing free speech regulation (e.g., current supreme court case
Moody v. NetChoice, LLC) (Nicholas
2023). We have witnessed the politicization and weaponization of other insufficiently defined issues such as fake news before (Kaye
2019). Accordingly, some experts argue for abandoning the term shadowbanning altogether in favor of a less contested term (e.g., visibility reduction or undisclosed content moderation) (Gillespie
2022b; Nicholas
2023). However, given the great public awareness of the issue, we believe that scientists ought to remain part of the conversation and offer scientific insights. Hence, we aim to offer conceptual clarity on what constitutes shadowbanning and hope to inspire more dedicated research to inform public debate.