Modeling the Detection of Textual Cyberbullying

Authors

  • Karthik Dinakar, Massachusetts Institute of Technology
  • Roi Reichart, Hebrew University of Jerusalem
  • Henry Lieberman, Massachusetts Institute of Technology

DOI:

https://doi.org/10.1609/icwsm.v5i3.14209

Abstract

The scourge of cyberbullying has assumed alarming proportions, with an ever-increasing number of adolescents admitting to having dealt with it either as a victim or as a bystander. Anonymity and the lack of meaningful supervision in the electronic medium are two factors that have exacerbated this social menace. Comments or posts involving sensitive topics that are personal to an individual are more likely to be internalized by a victim, often with tragic outcomes. We decompose the overall detection problem into the detection of sensitive topics, each of which lends itself to a text classification sub-problem. We experiment with a corpus of 4,500 YouTube comments, applying a range of binary and multiclass classifiers. We find that binary classifiers for individual labels outperform multiclass classifiers. Our findings show that the detection of textual cyberbullying can be tackled by building individual topic-sensitive classifiers.
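The per-topic decomposition described in the abstract can be illustrated with a minimal sketch: one independent binary classifier per sensitive topic, each scoring a comment on its own. This is not the paper's implementation; the Naive Bayes model, topic names, and toy comments below are illustrative assumptions, chosen only to show the structure of label-wise binary classification.

```python
# Hypothetical sketch of per-topic binary classification: one bag-of-words
# Naive Bayes classifier per sensitive topic. Topic names ("intelligence",
# "race") and the toy training comments are invented for illustration and
# are not drawn from the paper's YouTube corpus.
import math
from collections import Counter


def tokenize(text):
    return text.lower().split()


class BinaryNB:
    """Binary multinomial Naive Bayes with add-one smoothing."""

    def fit(self, texts, labels):
        self.docs = Counter(labels)                     # class priors (counts)
        self.words = {0: Counter(), 1: Counter()}       # per-class word counts
        for t, y in zip(texts, labels):
            self.words[y].update(tokenize(t))
        self.vocab = set(self.words[0]) | set(self.words[1])
        return self

    def predict(self, text):
        total = sum(self.docs.values())
        scores = {}
        for y in (0, 1):
            score = math.log(self.docs[y] / total)      # log prior
            denom = sum(self.words[y].values()) + len(self.vocab)
            for w in tokenize(text):
                score += math.log((self.words[y][w] + 1) / denom)
            scores[y] = score
        return max(scores, key=scores.get)


# One classifier per topic; a comment is scored by each independently,
# so it may be flagged for several topics at once (or for none).
topics = {
    "intelligence": (
        ["you are so dumb", "such a stupid idiot",
         "great video thanks", "i love this song"],
        [1, 1, 0, 0],
    ),
    "race": (
        ["go back to your country", "your accent is terrible",
         "great video thanks", "i love this song"],
        [1, 1, 0, 0],
    ),
}
classifiers = {t: BinaryNB().fit(x, y) for t, (x, y) in topics.items()}
```

Because each topic gets its own binary decision boundary, a comment like "you stupid dumb idiot" is flagged only by the intelligence-related classifier, while the race-related classifier leaves it unflagged, mirroring the finding that label-wise binary classifiers can be more precise than a single multiclass model.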

Published

2021-08-03

How to Cite

Dinakar, K., Reichart, R., & Lieberman, H. (2021). Modeling the Detection of Textual Cyberbullying. Proceedings of the International AAAI Conference on Web and Social Media, 5(3), 11-17. https://doi.org/10.1609/icwsm.v5i3.14209