Published in: Journal of Cloud Computing 1/2024

Open Access 01.12.2024 | Research

Innovative deep learning techniques for monitoring aggressive behavior in social media posts

Authors: Huimin Han, Muhammad Asif, Emad Mahrous Awwad, Nadia Sarhan, Yazeed Yasid Ghadi, Bo Xu


Abstract

The study aims to evaluate and compare the performance of various machine learning (ML) classifiers in the context of detecting cyber-trolling behaviors. With the rising prevalence of online harassment, developing effective automated tools for aggression detection in digital communications has become imperative. This research assesses the efficacy of Random Forest, Light Gradient Boosting Machine (LightGBM), Logistic Regression, Support Vector Machine (SVM), and Naive Bayes classifiers in identifying cyber troll posts within a publicly available dataset. Each ML classifier was trained and tested on a dataset curated for the detection of cyber trolls. The performance of the classifiers was gauged using confusion matrices, which provide detailed counts of true positives, true negatives, false positives, and false negatives. These metrics were then utilized to calculate the accuracy, precision, recall, and F1 scores to better understand each model’s predictive capabilities. The Random Forest classifier outperformed other models, exhibiting the highest accuracy and balanced precision-recall trade-off, as indicated by the highest true positive and true negative rates, alongside the lowest false positive and false negative rates. LightGBM, while effective, showed a tendency towards higher false predictions. Logistic Regression, SVM, and Naive Bayes displayed identical confusion matrix results, an anomaly suggesting potential data handling or model application issues that warrant further investigation. The findings underscore the effectiveness of ensemble methods, with Random Forest leading in the cyber troll detection task. The study highlights the importance of selecting appropriate ML algorithms for text classification tasks in social media contexts and emphasizes the need for further scrutiny into the anomaly observed among the Logistic Regression, SVM, and Naive Bayes results. Future work will focus on exploring the reasons behind this occurrence and the potential of deep learning techniques in enhancing detection performance.
Notes

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Introduction

The digital era has ushered in unprecedented connectivity, transforming social media into a pivotal platform for global communication and public discourse. This virtual interconnectedness, while facilitating a plethora of meaningful interactions, has also given rise to a pernicious phenomenon: cyberbullying [1]. Cyberbullying, defined as the use of digital platforms to intimidate, belittle, or harass individuals, poses a unique challenge in the realm of online safety due to the anonymity afforded by these platforms. Its implications are far-reaching, often resulting in severe psychological and emotional distress, surpassing the impact of traditional, physical bullying in its potential for harm [2]. The spectrum of cyberbullying encompasses various manifestations, including but not limited to, racism, sexism, and cyber aggression. Cyber aggression, in particular, denotes behaviors that are hostile or hateful, often motivated by discriminatory beliefs based on race, nationality, religion, gender, and other such factors [3, 4]. These digital acts of aggression are not bound by age or demographic, making them a universal concern [5]. With the voluminous flow of content on social media platforms - millions of Facebook posts and tweets generated every minute - the task of monitoring and mitigating offensive content becomes a Herculean endeavor [6]. Notably, a significant portion of these posts contains elements of offensive language or sentiment, necessitating robust mechanisms for detection and intervention [7, 8]. Conventional approaches to tackling this issue have primarily relied on machine learning models, utilizing techniques like support vector machines (SVM), logistic regression (LR), and naïve Bayes (NB) for text classification. However, these methods, focusing largely on textual features through mechanisms like term frequency-inverse document frequency (TF-IDF) and Word2Vec, often fall short of capturing the nuanced emotional context of the communications.
The emergence of innovative deep-learning techniques for monitoring aggressive behavior in social media posts represents a significant advancement in the field of digital communication and online safety. The significance of this development lies in its potential to address a critical and growing concern in the virtual landscape: the prevalence of cyber aggression and its detrimental impact on individuals and communities. As social media platforms have become integral to daily communication, they have also unfortunately become venues for harmful behaviors like harassment, bullying, and the spread of hateful rhetoric. Traditional methods for identifying and mitigating such behavior often struggle to keep pace with the sheer volume and complexity of content generated on these platforms.
Deep learning techniques, with their ability to learn and adapt from vast amounts of unstructured data, offer a promising solution [9, 10]. By employing sophisticated algorithms and neural network architectures, these techniques can effectively analyze the nuances of language, context, and sentiment present in social media posts [11, 12]. This capability enables more accurate and comprehensive identification of aggressive behavior, going beyond mere keyword recognition to understand the subtleties of human communication, such as sarcasm, irony, and indirect speech [13]. Furthermore, the application of deep learning in this context is significant for its proactive approach to online safety. It not only aids in the immediate detection and removal of harmful content but also contributes to the larger goal of fostering healthier online environments. This can have far-reaching implications, from supporting individual mental health and well-being to promoting more respectful and constructive digital discourse [14]. By advancing these techniques, researchers and practitioners are taking critical steps toward mitigating the negative impacts of the digital age, thereby enhancing the overall quality and safety of online communication. To address this gap, our research introduces an innovative deep learning-based framework for the detection of cyber aggression. Our approach leverages a combination of novel emotional features extracted from textual data, alongside conventional Word2Vec features, to enhance the accuracy of aggression detection. The proposed deep neural network (DNN) model, characterized by its optimized architecture with a minimal number of layers, sets a new standard in both efficiency and effectiveness.
This paper delineates the following contributions:
  • Demonstrated the superior performance of the Random Forest algorithm over other conventional machine learning classifiers (LightGBM, Logistic Regression, SVM, and Naive Bayes) in the context of cyber troll detection, providing evidence for its robustness in handling both specificity and sensitivity within the dataset.
  • Revealed a unique outcome where Logistic Regression, SVM, and Naive Bayes classifiers yielded identical confusion matrices, prompting critical discussions on model validation and highlighting the necessity for meticulous experimental setup in machine learning workflows.
  • Contributed to the field of online behavior analysis by quantitatively comparing the efficacy of different machine-learning approaches, offering insights that can guide the development of more effective automated moderation tools to combat cyber trolling and enhance digital communication safety.
Following this introduction, the paper is structured as follows: Sect. 2 offers an in-depth review of existing literature on aggression detection. Section 3 elucidates the methodology and functionality of the proposed DNN algorithm. Section 4 presents the empirical findings derived from our model. Finally, Sect. 5 provides a thorough discussion of these results, alongside considerations for future research directions.

Literature review

The rapid expansion of the social web has catalyzed significant advancements in the field of Natural Language Processing (NLP), particularly in the context of analyzing and interpreting the diverse array of communications that take place on social media platforms. These platforms, including Twitter, Facebook, and various weblogs, serve as melting pots of global interaction, bringing together individuals of different languages, races, and cultural backgrounds [15]. This diversity, while enriching, also presents unique challenges, particularly in the form of cyberbullying, online aggression, and hate speech, compounded by the intricacies and complexities inherent in processing various foreign languages [16]. Researchers have used a range of terminologies to categorize and study these negative behaviors [17]. Terms such as cyberbullying, offensive language, hate speech, racism, and profanity have been extensively explored in literature. Studies have varied in their focus, with some examining the psychological profiles of cyber-aggressors versus non-aggressors, while others have utilized text, network, and user-based features for detecting aggression in social media datasets. Notably, patterns have emerged, such as bullying victims tending to write fewer posts and participate less in discussions, in contrast to aggressors who are often more active and propagate negativity online [18, 19].
The primary focus of computational linguistics has traditionally been on resource-rich languages like English, leaving resource-poor languages somewhat underexplored due to a lack of datasets and tools. Nevertheless, there have been significant efforts to detect offensive language in various languages using machine learning algorithms. These studies have applied techniques ranging from bag-of-words and basic classifiers like multinomial-naïve Bayes and logistic regression to more advanced deep learning methods. The exploration has not been limited to English, with studies extending to languages like Hindi, Marathi, Arabic, Indonesian, German, and Portuguese. In the realm of English language datasets, researchers have made notable strides in identifying cyberbullying and other forms of online aggression. Experiments have been conducted using a variety of features, including syntactic and semantic analysis, emoji usage, and sentiment lexicons. These studies have also delved into the complexities of detecting sarcasm and irony, which are particularly challenging due to their subtlety and context-dependent nature. The advent of deep learning has brought new dimensions to NLP research, proving to be more efficient in certain aspects than traditional machine learning techniques. Deep learning’s strength lies in its ability to process and learn from large sets of unstructured data, making it particularly suitable for analyzing the vast and varied content found on social media. Applications have ranged from distinguishing between hate speech and profanity to performing high-level classification of text data. Techniques like convolutional neural networks (CNN), long short-term memory (LSTM) networks, bidirectional LSTM (BiLSTM), gated recurrent units (GRU), and recurrent neural networks (RNN) have been employed to great effect [20–24].
The detection of abusive behavior on online social networks has emerged as a critical area of study due to the escalating prevalence of various forms of online abuse, including offensive language, hate speech, cyberbullying, aggression, and sexual exploitation. Research efforts have been diverse, with some focusing on the identification of potential offenders in online communities, such as YouTube comment sections [11], while others target the detection of hate speech, with a particular emphasis on identifying racist and sexist content [25]. A notable advancement in this domain is the proposal of methodologies that incorporate user profiles, content, and network dynamics to delineate aggressive behavior on platforms like Twitter [5, 26, 27]. Machine Learning (ML)-based approaches remain at the forefront of combating online abuse. Traditional ML classifiers, including logistic regression [8, 9, 12, 27], support vector machines [28], and ensemble classifiers [29], have been extensively deployed. For example, a study on Yahoo Finance and News data applied ML methods to discern hate speech [12], while another study used an ensemble of probabilistic, rule-based, and spatial classifiers to investigate the propagation of online hate speech on Twitter [29].
In pursuit of enhanced detection efficiency, deep learning architectures have been increasingly adopted. A spectrum of deep learning models, such as Convolutional Neural Networks (CNNs), Long Short-Term Memory Networks (LSTMs), and FastText [30], have undergone evaluation for their efficacy in this domain. Furthermore, a hybrid of CNN and Gated Recurrent Unit (GRU) networks, augmented with word embeddings, has been employed for hate speech detection on Twitter [25]. The use of CNNs for the same purpose has also been reported [31]. Interestingly, a comparative study indicated that traditional machine learning methods outperformed deep neural networks, specifically Recurrent Neural Networks (RNNs), in detecting abusive and aggressive behaviors [5].
Historically, research has concentrated on “batch mode” detection of abusive behaviors, optimizing ML classifiers to identify various types of abuse within a dataset. While some methods have achieved high accuracy, they often incur significant computational costs during the training and testing phases. However, given the dynamic nature of online content, there is an imperative need for systems capable of ongoing monitoring to detect abusive behavior promptly.
To address this, an “incremental computation” approach has been proposed, which utilizes data from preceding stages to enhance the efficiency of feature extraction and classification processes [24]. Additionally, an online framework designed for real-time cyberbullying detection on Instagram employs an online feature selection technique to maintain scalability by optimizing the feature set used for classification [14]. These methods, however, concentrate on media session-level analysis rather than individual content pieces, contrasting with approaches that target aggression detection on a per-item basis, such as individual tweets. In summary, the literature reflects a growing recognition of the complexity and multifaceted nature of online aggression and the need for sophisticated, nuanced approaches to detect and mitigate it. The evolution from basic machine learning to more advanced deep learning techniques underscores the ongoing efforts to effectively analyze and understand the rich tapestry of human communication in the digital sphere.

Methodology

Figure 1 shows the proposed model used in this study.

Models

Logistic regression

Application: Logistic Regression is a widely used classification algorithm. In the context of aggression detection, it can be applied to predict whether a social media post is cyber-aggressive or non-cyber-aggressive based on features extracted from the text. The chosen settings, including L2 regularization and lbfgs solver, help mitigate overfitting and enhance model stability.

Support Vector Machine (SVM)

Application: SVM is effective for binary classification tasks. In aggression detection, SVM with the RBF kernel can capture complex relationships between features. The chosen settings, such as the RBF kernel and probability estimation, enable the model to handle non-linear decision boundaries and provide probability scores, aiding in the confidence estimation of predictions.

Naive bayes

Application: Naive Bayes is a probabilistic algorithm suitable for text classification. In aggression detection, it can model the probability of a post being cyber-aggressive or non-cyber-aggressive based on the occurrence of words. The chosen settings, including additive smoothing (alpha) and fit_prior, contribute to a robust model, particularly in dealing with sparse data.

Random forest

Application: Random Forest is an ensemble learning method known for its robustness and ability to handle complex relationships. In aggression detection, it can be used to aggregate predictions from multiple decision trees. The settings, such as the number of estimators and minimum samples for splitting, influence the model’s capacity to generalize and capture patterns effectively.

LightGBM

Application: LightGBM is a gradient-boosting framework that excels in handling large datasets. In aggression detection, it can efficiently capture complex dependencies in the data. The specified settings, including binary classification as the objective and parameters controlling tree structure (num_leaves), learning rate, and feature/bagging fractions, contribute to model efficiency and accuracy.

Dataset

The dataset used in this research is the Cyber-Troll dataset, which is publicly available on Kaggle (https://www.kaggle.com/datasets/dataturks/dataset-for-detection-of-cybertrolls) and was accessed on February 9, 2022. This dataset was curated by DataTurks for aggression detection, specifically focusing on cyber-aggressive behavior in English-language tweets.
The dataset consists of a total of 20,001 tweets, each labeled into one of two classes: cyber-aggressive (CA) and non-cyber-aggressive (NCA). The labels were assigned by DataTurks annotators based on the content of the tweets. Cyber-aggressive tweets are those that contain messages intended to insult or harm someone online, while non-cyber-aggressive tweets are those that do not carry any negative meaning and are not directed toward causing harm to others.
The distribution of the dataset is as follows:
Non-cyber-aggressive (NCA) tweets: 12,179 tweets.
Cyber-aggressive (CA) tweets: 7,822 tweets.
This distribution indicates that approximately 39% of the dataset consists of cyber-aggressive tweets, while the remaining 61% comprises non-cyber-aggressive tweets. The dataset serves as a valuable resource for training and evaluating models aimed at the detection of cyber-aggressive behavior in social media contexts. The imbalanced nature of the dataset, with a higher proportion of non-cyber-aggressive tweets, should be taken into consideration when designing and evaluating models to ensure robust and accurate performance across both classes.
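For concreteness, a minimal loading and preprocessing sketch is given below. The file name, the JSON-lines layout of the DataTurks export, the 75/25 train-test split, and the TF-IDF features are all illustrative assumptions (the paper does not state them), although a 25% test split is consistent with the roughly 5,001 test instances implied by the confusion matrices reported later.

```python
import json
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer

def load_cyber_troll(path="Dataset for Detection of Cyber-Trolls.json"):
    """Read the DataTurks JSON-lines export into a DataFrame (assumed layout)."""
    rows = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            rows.append({
                "text": record["content"],
                # "1" = cyber-aggressive (CA), "0" = non-cyber-aggressive (NCA)
                "label": int(record["annotation"]["label"][0]),
            })
    return pd.DataFrame(rows)

df = load_cyber_troll()
print(df["label"].value_counts())  # expected: 12,179 NCA vs. 7,822 CA

# A stratified split preserves the 61/39 class imbalance in both partitions.
X_train_txt, X_test_txt, y_train, y_test = train_test_split(
    df["text"], df["label"], test_size=0.25, stratify=df["label"], random_state=42)

vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(X_train_txt)
X_test = vectorizer.transform(X_test_txt)
```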

Parameter settings

Logistic regression

C (Inverse of regularization strength): 1.0.
Penalty: L2 regularization.
Solver: lbfgs (Limited-memory Broyden–Fletcher–Goldfarb–Shanno).
Max Iterations: 100.
Random State: 42 (for reproducibility).

Support Vector Machine (SVM)

C (Regularization parameter): 1.0.
Kernel: RBF (Radial Basis Function).
Gamma: Scale (kernel coefficient).
Degree: 3 (degree of the polynomial kernel function).
Probability: True (to enable probability estimates).
Random State: 42 (for reproducibility).

Naive bayes

Alpha: 1.0 (Additive smoothing parameter).
Fit Prior: True (whether to learn class prior probabilities).

Random forest

N Estimators: 100 (Number of trees in the forest).
Max Depth: None (Maximum depth of the tree).
Min Samples Split: 2 (Minimum number of samples required to split an internal node).
Random State: 42 (for reproducibility).

LightGBM

Objective: Binary (binary classification).
Boosting Type: gbdt (Gradient Boosting Decision Tree).
Num Leaves: 31 (maximum number of leaves in one tree).
Learning Rate: 0.05 (shrinkage rate to prevent overfitting).
Feature Fraction: 0.9 (fraction of features to be used for each boosting round).
Bagging Fraction: 0.8 (fraction of data to be randomly sampled for bagging).
Bagging Freq: 5 (frequency for bagging).
Metric: Binary Logloss (logarithmic loss for binary classification).
Random State: 42 (for reproducibility).
These parameter settings provide a specific configuration for each algorithm, influencing their behavior during the training and prediction phases. Adjusting these parameters allows fine-tuning of the models to achieve optimal performance on the given task or dataset. The use of a consistent random state (42) helps in obtaining reproducible results across different runs.
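The sketch below shows one way these settings map onto concrete model objects. scikit-learn and the LightGBM Python package are assumed as the implementing libraries, since the paper lists parameter values but not its software stack.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.naive_bayes import MultinomialNB
from sklearn.ensemble import RandomForestClassifier
import lightgbm as lgb

# Each classifier is instantiated with the parameter values listed above.
models = {
    "Logistic Regression": LogisticRegression(
        C=1.0, penalty="l2", solver="lbfgs", max_iter=100, random_state=42),
    "SVM": SVC(
        C=1.0, kernel="rbf", gamma="scale", degree=3,
        probability=True, random_state=42),
    "Naive Bayes": MultinomialNB(alpha=1.0, fit_prior=True),
    "Random Forest": RandomForestClassifier(
        n_estimators=100, max_depth=None, min_samples_split=2, random_state=42),
    "LightGBM": lgb.LGBMClassifier(
        objective="binary", boosting_type="gbdt", num_leaves=31,
        learning_rate=0.05, feature_fraction=0.9, bagging_fraction=0.8,
        bagging_freq=5, metric="binary_logloss", random_state=42),
}
```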

Performance evaluation

The evaluation metrics employed in this study encompass average accuracy, recall, precision, and F1-score. The computation of these metrics relies on the enumeration of true positive (TP), false positive (FP), true negative (TN), and false negative (FN) instances. True positives (TP) signify accurately classified cyber-aggressive tweets, while false negatives (FN) represent tweets erroneously categorized as non-cyber-aggressive. True negatives (TN) denote correctly classified non-cyber-aggressive tweets, while false positives (FP) correspond to tweets inaccurately labeled as cyber-aggressive.
Accuracy, a fundamental metric, is determined by the ratio of correctly classified cyber-aggressive and non-aggressive tweets to the total dataset. It serves as a holistic indicator of overall model performance. The computation of recall, precision, and F1-score involves specific aspects of classification outcomes.
Recall, or sensitivity, quantifies the proportion of actual cyber-aggressive tweets correctly identified by the model, emphasizing the model’s ability to capture all instances of cyber-aggression. Precision gauges the accuracy of the model in correctly identifying cyber-aggressive tweets among those it categorizes as such, minimizing false positives. F1-score, a harmonic mean of precision and recall, offers a balanced assessment of a model’s performance by considering both false positives and false negatives.
Precision measures the proportion of correctly identified cyber-aggressive tweets among all tweets the model labels as cyber-aggressive.
Recall measures the proportion of actual cyber-aggressive tweets in the dataset that the model correctly identifies.
F1-score measures how well a classifier balances precision and recall.
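Expressed as a short worked example, the four metrics follow directly from the confusion-matrix counts. The numbers below are the Random Forest counts reported in the Results section (TP = 1802, TN = 2933, FP = 114, FN = 152):

```python
def classification_metrics(tp, tn, fp, fn):
    """Compute the four evaluation metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)           # correct CA predictions among predicted CA
    recall = tp / (tp + fn)              # correct CA predictions among actual CA
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return accuracy, precision, recall, f1

acc, prec, rec, f1 = classification_metrics(tp=1802, tn=2933, fp=114, fn=152)
print(f"accuracy={acc:.3f}, precision={prec:.3f}, recall={rec:.3f}, f1={f1:.3f}")
# accuracy=0.947, precision=0.941, recall=0.922, f1=0.931
```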

Results

Figure 2 shows the confusion matrices for a set of machine learning classifiers, namely Random Forest, LightGBM (Light Gradient Boosting Machine), Logistic Regression, SVM (Support Vector Machine), and Naive Bayes. Confusion matrices are critical in machine learning for quantifying the performance of classification algorithms, as they provide a detailed breakdown of correct and incorrect predictions concerning actual outcomes.
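Per-model matrices of this kind can be generated as sketched below, reusing the assumed data splits and model configurations from the Methodology sketches; this illustrates the evaluation procedure rather than reproducing the authors' exact pipeline.

```python
from sklearn.metrics import confusion_matrix

fitted = {}
for name, model in models.items():
    model.fit(X_train, y_train)          # train on the TF-IDF features
    fitted[name] = model
    tn, fp, fn, tp = confusion_matrix(y_test, model.predict(X_test)).ravel()
    print(f"{name}: TP={tp} TN={tn} FP={fp} FN={fn}")
```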
The Random Forest classifier exhibits a superior predictive performance, as evidenced by the highest number of true positives (TP = 1802) and true negatives (TN = 2933), coupled with the lowest numbers of false positives (FP = 114) and false negatives (FN = 152). This suggests a robust ability to discriminate between the classes with both high sensitivity (as indicated by the high TP rate) and high specificity (as indicated by the high TN rate).
In contrast, the LightGBM classifier demonstrates a higher number of both false positives (FP = 536) and false negatives (FN = 576), indicative of a lower specificity and sensitivity respectively compared to the Random Forest classifier. The higher FP rate might suggest a tendency towards over-predicting the positive class, while the higher FN rate might indicate a conservative stance on predicting the positive class, requiring a stronger signal or evidence.
Interestingly, the confusion matrices for Logistic Regression, SVM, and Naive Bayes are identical, which may raise questions about the experimental setup or data partitioning, as it is uncommon for distinct models to yield the exact same confusion matrix on non-trivial tasks. Nevertheless, taken at face value, these classifiers share the same error counts (FP = 557 and FN = 707), but they are outperformed by the Random Forest classifier in all aspects of the confusion matrix.
Figure 3 shows the comparison of performance metrics for four different machine learning models applied to the cyber troll detection dataset. The models evaluated are Logistic Regression, SVM (Support Vector Machine), Naive Bayes, and Random Forest.
Each model is evaluated on four different metrics:

Accuracy

This metric shows how often the model is correct when predicting whether a post is aggressive or not.

Precision

This indicates the proportion of posts that the model correctly identified as aggressive out of all the posts it labeled as aggressive.

Recall

This tells us what proportion of actual aggressive posts were correctly identified by the model.

F1 score

This is the harmonic mean of precision and recall, providing a single score that balances the two other metrics.
From the graph, we can see the performance of each model on these metrics:
The Random Forest model has the highest bars across all four metrics, suggesting it has the best overall performance for detecting aggression in posts in the dataset.
The SVM model appears to perform second best, with bars slightly lower than Random Forest in all metrics.
The Logistic Regression model has lower metrics in comparison to SVM and Random Forest, particularly noticeable in one of the metrics where it has the lowest bar among all models, indicating a weaker performance in that area.
The Naive Bayes model shows a mixed performance with one metric having a notably lower bar compared to the other models, suggesting it might be less reliable in that aspect of aggression detection.
The exact performance numbers for each metric are not visible in the chart, but the relative heights of the bars provide a visual comparison of the model performances. The graph helps to assess which model might be the most effective for implementing a cyber troll detection system, considering the balance between false positives, false negatives, and correctly identified instances. Based on this visual representation, the Random Forest model would likely be the first choice for further validation and potential deployment.
Figure 4 displays Receiver Operating Characteristic (ROC) curves for five different machine learning models: Random Forest, LightGBM (Light Gradient Boosting Machine), Logistic Regression, SVM (Support Vector Machine), and Naive Bayes. The ROC curve is a graphical plot that illustrates the diagnostic ability of a binary classifier system as its discrimination threshold is varied. It is created by plotting the true positive rate (TPR) against the false positive rate (FPR) at various threshold settings.
The area under the ROC curve (AUC) is a measure of the model’s ability to distinguish between the classes and is generally considered as one of the most important evaluation metrics for checking any classification model’s performance. A model with an AUC closer to 1 indicates better performance, whereas an AUC closer to 0.5 suggests no discriminative ability better than random chance.
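The sketch below shows how ROC curves and AUC values of this kind are commonly produced; it assumes the fitted models and test split from the earlier sketches, and that each model exposes class-probability scores (hence probability=True for the SVM).

```python
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc

plt.figure()
for name, model in fitted.items():
    scores = model.predict_proba(X_test)[:, 1]   # probability of the CA class
    fpr, tpr, _ = roc_curve(y_test, scores)
    plt.plot(fpr, tpr, label=f"{name} (AUC = {auc(fpr, tpr):.2f})")
plt.plot([0, 1], [0, 1], linestyle="--", label="Chance (AUC = 0.50)")
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.legend()
plt.show()
```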
From the provided image, we can infer the following about the performance of the models:

Random forest

The ROC curve is almost a 45-degree line, which is indicative of a model with no classification capability (AUC ≈ 0.49). This suggests that the Random Forest model is not performing well in distinguishing between the positive and negative classes for this specific task.

LightGBM

The curve hugs the top left corner, indicating a high true positive rate and a low false positive rate, which is desirable in a good classifier. The AUC is very high (AUC ≈ 0.95), showing excellent performance.

Logistic regression

The ROC curve shows a moderate performance with an AUC of around 0.73, suggesting it has a reasonable ability to distinguish between the classes, although not as effectively as LightGBM.

SVM

The ROC curve for the SVM is very close to the top left corner, similar to LightGBM, indicating a very high AUC (AUC ≈ 0.96), which means the SVM has an excellent discrimination capacity for the given classification task.

Naive bayes

This model’s ROC curve is above the line of no-discrimination, with an AUC of about 0.85, suggesting it has a good performance, although not as strong as LightGBM or SVM.
In summary, based on the ROC curves, SVM and LightGBM are the top-performing models for this particular classification problem, followed by Naive Bayes and Logistic Regression, with Random Forest performing poorly. It is important to note that these curves are useful for visualizing and comparing the performance of different models but should be complemented with other metrics and analyses to fully understand model performance in practical applications.
Deep learning finds diverse applications across the reviewed studies, showcasing its versatility and significance in various domains. In Yu et al.’s research (2021), deep learning can be applied for anomaly detection to enhance security in touch screen devices, helping identify and prevent indirect eavesdropping attacks [28]. In the field of LiDAR data processing, as presented by Zhou et al. (2021), deep learning can be leveraged for efficient signal decomposition, contributing to improved LiDAR data analysis and interpretation [29]. Qi et al.’s work (2022) on brightness correction offers opportunities for deep learning-based image enhancement and quality improvement, particularly in multi-region nonuniform scenarios [30]. Cao et al. (2021) propose reliable communication in wireless-powered NOMA systems, where deep learning can optimize resource allocation and enhance system performance [31].
Furthermore, Wu et al.’s study (2022) on dynamic spectrum allocation in cognitive radio networks suggests that deep learning can optimize pricing policies and resource allocation, improving spectrum utilization efficiency [32]. Li et al. (2022) introduce smartphone app usage analysis, where deep learning can be employed for behavior pattern recognition and user profiling, aiding app developers and marketers [33]. In the context of adaptive co-site interference cancellation, Jiang and Li (2022) indicate the potential of deep learning in interference mitigation and signal processing [34]. Deep learning’s applications extend to the educational domain, with Huang et al. (2021) proposing sentiment analysis and interaction level assessment using learning analytics, aiding in understanding and improving blended learning environments [35]. In spam detection, Wu et al.’s hybrid PU-learning-based model (2020) can benefit from deep learning techniques to enhance the accuracy and efficiency of spammer detection [36].
Li et al. (2023) explore public-key authenticated encryption with keyword search, which can leverage deep learning for fast and accurate search operations in encrypted data [37]. Sun et al.’s work (2020) on low-latency service function chaining orchestration in network function virtualization can employ deep learning for efficient decision-making and orchestration of network functions [38]. Similarly, Sun et al. (2019) and Sun et al. (2018) demonstrate cost-efficient and domain-spanning service function chain orchestration, where deep learning can optimize service placement and chaining decisions across multiple domains [39, 40]. Li et al. (2022) investigate daily activity patterns in smartphone app usage, presenting an opportunity for deep learning to identify and predict user behaviors, enhancing user experiences and app recommendations [41]. Furthermore, Liu et al. (2023) propose Sketch2Photo, which can benefit from deep learning techniques to improve the synthesis of photo-realistic images from sketches, enabling various creative applications [42]. In the context of developing multi-labeled corpora for Twitter short texts, Liu et al. (2023) illustrate how deep learning can assist in text analysis and classification [43]. Li et al. (2023) explore the computational effects of advanced deep neural networks on logical and activity learning, emphasizing the role of deep learning in enhancing cognitive skills and thinking processes [44]. Lastly, Zhang et al. (2023) present a security defense decision method for complex networks, where deep learning can be employed for anomaly detection and threat identification, contributing to network security [45].
The study’s practical implications are significant in the context of addressing cyber-trolling behaviors and enhancing online safety. Firstly, the finding that the Random Forest classifier outperformed other models in detecting cyber troll posts underscores the importance of employing ensemble methods and robust algorithms when developing automated tools for aggression detection in digital communications. Organizations and online platforms seeking to implement troll detection systems can benefit from adopting Random Forest-based approaches, as they demonstrate superior accuracy and a balanced trade-off between precision and recall, which is crucial for minimizing false positives and false negatives in identifying cyber trolls. Secondly, the observation that LightGBM tended toward higher false predictions suggests that while gradient boosting algorithms can be effective, careful parameter tuning and model evaluation are essential to mitigate false positives and ensure the reliability of detection systems. This insight guides practitioners in the selection and optimization of machine learning models tailored for cyber troll detection.
The anomaly identified among Logistic Regression, SVM, and Naive Bayes classifiers raises concerns about their suitability for this specific task [46–48]. The practical implication here is the need for meticulous data preprocessing and feature engineering, as well as a rigorous model assessment when using these algorithms for text classification in social media contexts. Future research and development efforts should focus on understanding the reasons behind this anomaly and refining the application of these classifiers for cyber troll detection. Furthermore, the study emphasizes the importance of transparency and interpretability in machine-learning models designed for online safety. Cyber troll detection systems must not only perform effectively but also provide interpretable results, enabling human moderators and administrators to understand and act upon the model’s predictions. This underscores the need for further research into explainable AI techniques and their integration into the development of troll detection tools. Lastly, the mention of future work involving deep learning techniques hints at the potential for further advancements in cyber troll detection. Deep learning models, such as recurrent neural networks (RNNs) and transformer-based architectures, have shown promise in natural language processing tasks and may offer improved performance in this domain. The study encourages future investigations into the applicability of these advanced techniques and their ability to enhance cyber troll detection accuracy.

Conclusion

The present study has provided valuable insights into the effectiveness of various machine learning classifiers in the context of detecting cyber-trolling behaviors in digital communications. Through a rigorous evaluation of Random Forest, Light Gradient Boosting Machine (LightGBM), Logistic Regression, Support Vector Machine (SVM), and Naive Bayes classifiers on a publicly available dataset, we have uncovered practical implications for enhancing online safety. In conclusion, the Random Forest classifier emerged as the top-performing model, showcasing the highest accuracy and achieving a balanced precision-recall trade-off. This finding underscores the significance of employing ensemble methods when developing automated tools for identifying cyber trolls. However, it is essential to emphasize that while Random Forest exhibited superior performance, other classifiers like LightGBM also demonstrated efficacy, albeit with some tendency towards higher false predictions. This suggests that gradient boosting algorithms can be effective but require careful parameter tuning and model evaluation. The anomaly observed among Logistic Regression, SVM, and Naive Bayes classifiers highlights the need for cautious data preprocessing and feature engineering when applying these algorithms in the realm of cyber troll detection. Further investigation is warranted to understand the reasons behind this anomaly and to optimize the application of these classifiers.

Future work

Building on the findings of this study, several avenues for future research and development in the field of cyber troll detection can be identified:

Anomaly investigation

Further exploration into the anomaly observed among Logistic Regression, SVM, and Naive Bayes classifiers is imperative. This entails a detailed examination of data characteristics, feature extraction methods, and potential limitations in the model application process. Identifying and addressing these issues can lead to improved performance and a better understanding of the suitability of these algorithms for cyber troll detection.
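One concrete first step, sketched below under the assumptions of the earlier snippets, is to test whether the three classifiers genuinely emit identical prediction vectors, or whether a pipeline fault (for example, scoring one model's outputs three times) produced the identical confusion matrices:

```python
import numpy as np
from itertools import combinations

# Compare prediction vectors pairwise; `fitted` and X_test are assumed
# from the earlier evaluation sketch.
suspects = ["Logistic Regression", "SVM", "Naive Bayes"]
preds = {name: fitted[name].predict(X_test) for name in suspects}

for a, b in combinations(suspects, 2):
    identical = np.array_equal(preds[a], preds[b])
    print(f"{a} vs {b}: identical predictions = {identical}")
```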

Deep learning approaches

As alluded to in the study, the potential of deep learning techniques, including recurrent neural networks (RNNs) and transformer-based models, should be explored. These advanced architectures have demonstrated remarkable capabilities in natural language processing tasks and may offer enhanced performance in detecting nuanced forms of cyber trolling.

Explainable AI

Ensuring transparency and interpretability in model predictions is crucial, particularly for online safety systems. Future work should delve into the integration of explainable AI techniques to enable human moderators and administrators to comprehend and trust the model’s decisions. This is especially important in a context where action needs to be taken based on the model’s output.

Real-time implementation

Developing real-time cyber troll detection systems that can be seamlessly integrated into various online platforms and social media networks is a pressing need. Future research should focus on the scalability and efficiency of detection algorithms to handle large volumes of digital communications in real-time.

Cross-domain generalization

Investigating the generalization of the developed models across different online platforms and linguistic domains is essential. The robustness and adaptability of the models should be assessed to ensure their effectiveness in diverse online environments.
In conclusion, this study lays the foundation for further advancements in the field of cyber troll detection. Future research endeavors should address the identified anomalies, explore deep learning approaches, prioritize explainable AI, work towards real-time implementation, and assess cross-domain generalization to continue the pursuit of a safer and more inclusive digital space.

Declarations

Competing interests

The authors declare no competing interests.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

References
2.
Selkie EM, Kota R, Moreno M (2016) Cyberbullying behaviors among female college students. Coll Stud J 50(2):278–287
9.
Nizamani AH, Chen Z, Nizamani AA, Bhatti UA (2023) Advance brain tumor segmentation using feature fusion methods with deep U-Net model with CNN for MRI data. J King Saud Univ Comput Inform Sci 35(9):101793
10.
Zhang Y, Chen J, Ma X, Wang G, Bhatti UA, Huang M (2024) Interactive medical image annotation using improved attention U-net with compound geodesic distance. Expert Syst Appl 237:121282
12.
Gaydhani A, Doma V, Kendre S, Bhagwat L (2018) Detecting hate speech and offensive language on Twitter using machine learning: an n-gram and TFIDF based approach
17.
Balayn A, Yang J, Szlávik Z, Bozzon A (2021) Automatic identification of harmful, aggressive, abusive, and offensive language on the web: a survey of technical biases informed by psychology literature. ACM Trans Soc Comput 4(3), Article 11. https://doi.org/10.1145/3479158
20.
Bhatti UA, Tang H, Wu G, Marjan S, Hussain A (2023) Deep learning with graph convolutional networks: an overview and latest applications in computational intelligence. Int J Intell Syst 2023:1–28
21.
Bhatti UA, Huang M, Neira-Molina H, Marjan S, Baryalai M, Tang H, Bazai SU (2023) MFFCG: multi feature fusion for hyperspectral image classification using graph attention network. Expert Syst Appl 229:120496
23.
Le Glaz A, Haralambous Y, Kim-Dufor DH, Lenca P, Billot R, Ryan TC, Marsh J, DeVylder J, Walter M, Berrouiguet S, Lemey C (2021) Machine learning and natural language processing in mental health: systematic review. J Med Internet Res 23(5):e15708. https://doi.org/10.2196/15708
24.
Pennacchiotti M, Popescu A (2011) A machine learning approach to Twitter user classification. In: Proceedings of the International AAAI Conference on Web and Social Media
25.
Sarwar SM, Murdock V (2021) Unsupervised domain adaptation for hate speech detection using a data augmentation approach
30.
35.
38.
41.
46.
Qasim M, Khan M, Mehmood W, Sobieczky F, Pichler M, Moser B (2022) A comparative analysis of anomaly detection methods for predictive maintenance in SME. In: Database and Expert Systems Applications - DEXA 2022 Workshops. Communications in Computer and Information Science, vol 1633. Springer, Cham. https://doi.org/10.1007/978-3-031-14343-4_3
48.
Rafique W, Khan M, Sarwar N, Sohail M, Irshad A (2019) A graph theory based method to extract social structure in the society. In: Bajwa I, Kamareddine F, Costa A (eds) Intelligent Technologies and Applications. INTAP 2018. Communications in Computer and Information Science, vol 932. Springer, Singapore. https://doi.org/10.1007/978-981-13-6052-7_38
Metadata
Title: Innovative deep learning techniques for monitoring aggressive behavior in social media posts
Authors: Huimin Han, Muhammad Asif, Emad Mahrous Awwad, Nadia Sarhan, Yazeed Yasid Ghadi, Bo Xu
Publication date: 01.12.2024
Publisher: Springer Berlin Heidelberg
Published in: Journal of Cloud Computing, Issue 1/2024
Electronic ISSN: 2192-113X
DOI: https://doi.org/10.1186/s13677-023-00577-6
