DOI: 10.1145/3447535.3462637

Understanding the Effect of Deplatforming on Social Networks

Published: 22 June 2021

ABSTRACT

Aiming to enhance the safety of their users, social media platforms enforce their terms of service by performing active moderation, including removing content and suspending users. Nevertheless, we do not have a clear understanding of how effective it ultimately is to suspend users who engage in toxic behavior, as doing so might actually draw users to alternative platforms where moderation is laxer. Moreover, these deplatforming efforts might end up nudging abusive users towards more extreme ideologies and potential radicalization. In this paper, we set out to understand what happens when users get suspended on a social platform and move to an alternative one. We focus on accounts active on Gab that were suspended from Twitter and Reddit. We develop a method to identify accounts belonging to the same person on these platforms, and observe whether there is a measurable difference in the activity and toxicity of these accounts after suspension. We find that users who get banned on Twitter/Reddit exhibit an increased level of activity and toxicity on Gab, although the audience they potentially reach decreases. Overall, we argue that moderation efforts should go beyond ensuring the safety of users on a single platform, taking into account the potential adverse effects of banning users on major platforms.
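The abstract mentions a method for identifying accounts that belong to the same person across platforms, but does not describe it in detail. As a minimal illustrative sketch only (not the authors' actual method), one common approach matches usernames by Jaccard similarity over character bigrams; the function names and the 0.6 threshold below are hypothetical choices for illustration:

```python
# Illustrative sketch of cross-platform account matching by username
# similarity. This is NOT the paper's method, just one standard
# technique: Jaccard similarity over character bigrams.

def bigrams(s: str) -> set:
    """Return the set of character bigrams of a lowercased username."""
    s = s.lower()
    return {s[i:i + 2] for i in range(len(s) - 1)}

def jaccard(a: str, b: str) -> float:
    """Jaccard similarity of two usernames' bigram sets (0.0 to 1.0)."""
    ba, bb = bigrams(a), bigrams(b)
    if not ba and not bb:
        return 1.0  # two empty/one-character names are treated as identical
    if not ba or not bb:
        return 0.0
    return len(ba & bb) / len(ba | bb)

def match_candidates(users_a, users_b, threshold=0.6):
    """Return (name_a, name_b, score) pairs above a similarity threshold.

    Candidate pairs would still need manual or feature-based verification,
    since similar usernames alone do not prove a shared identity.
    """
    return [
        (a, b, jaccard(a, b))
        for a in users_a
        for b in users_b
        if jaccard(a, b) >= threshold
    ]
```

For example, `match_candidates(["freespeech_88"], ["freespeech88", "bob"])` pairs the two near-identical handles while discarding the unrelated one.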


Supplemental Material

PS5.1_ShizaAli_UnderstandingTheEffect_of_Deplatforming_onSocialNetworks_14_06_21.mp4 (mp4, 80.9 MB)


  • Published in

    WebSci '21: Proceedings of the 13th ACM Web Science Conference 2021
    June 2021, 328 pages
    ISBN: 978-1-4503-8330-1
    DOI: 10.1145/3447535

    Copyright © 2021 ACM

    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

    Publisher: Association for Computing Machinery, New York, NY, United States


    Qualifiers

    • research-article
    • Research
    • Refereed limited

    Acceptance Rates

    Overall Acceptance Rate: 218 of 875 submissions, 25%
