Complex Networks & Their Applications X
Volume 2, Proceedings of the Tenth International Conference on Complex Networks and Their Applications COMPLEX NETWORKS 2021
- 2022
- Book
- Editors
- Rosa Maria Benito
- Chantal Cherifi
- Hocine Cherifi
- Esteban Moro
- Luis M. Rocha
- Marta Sales-Pardo
- Book Series
- Studies in Computational Intelligence
- Publisher
- Springer International Publishing
About this book
This book highlights cutting-edge research in the field of network science, offering scientists, researchers, students, and practitioners a unique update on the latest advances in theory and a multitude of applications. It presents the peer-reviewed proceedings of the X International Conference on Complex Networks and their Applications (COMPLEX NETWORKS 2021). The carefully selected papers cover a wide range of theoretical topics, such as network models and measures; community structure; network dynamics; diffusion, epidemics, and spreading processes; and resilience and control, as well as all the main network applications, including social and political networks, networks in finance and economics, biological and neuroscience networks, and technological networks.
Table of Contents
Frontmatter
Information Spreading in Social Media
Frontmatter
Hate Speech Detection on Social Media Using Graph Convolutional Networks
Seema Nagar, Sameer Gupta, C. S. Bahushruth, Ferdous Ahmed Barbhuiya, Kuntal Dey
The chapter delves into the pressing issue of hate speech on Twitter and the significance of detecting such content to prevent real-life violence. It discusses the evolution of hate speech detection methods, from manually crafted textual features to deep learning-based automatic feature selection. The authors argue that combining user features and social network structure with textual features is crucial for accurate detection. The proposed framework uses Graph Convolutional Networks (GCNs) to learn unified user features, capturing social network structure, language usage, and meta-data. The Variational Graph Auto-Encoder (VGAE) is employed to encode these features, which are then combined with textual features for hate speech classification. The framework is empirically validated on two diverse datasets, demonstrating superior performance compared to state-of-the-art baselines. The chapter also explores the impact of tweet length and graph centrality metrics on classification accuracy, providing valuable insights into the factors influencing hate speech detection.
This summary of the content was generated with the help of AI.
Abstract: Detection of hateful content on Twitter has become the need of the hour. Hate detection is challenging primarily due to the subjective definition of “hateful”. In the absence of context, text content alone is often not sufficient to detect hate. In this paper, we propose a framework that combines content with context, to detect hate. The framework takes into account (a) textual features of the content and (b) unified features of the author to detect hateful content. We use a Variational Graph Auto-encoder (VGAE) to jointly learn the unified features of authors using a social network, content produced by the authors, and their profile information. To accommodate emerging and future language models, we develop a flexible framework that incorporates any text encoder as a plug-in to obtain the textual features of the content. We empirically demonstrate the performance and utility of the framework on two diverse datasets from Twitter.
Capturing the Spread of Hate on Twitter Using Spreading Activation Models
Seema Nagar, Sameer Gupta, Ferdous Ahmed Barbhuiya, Kuntal Dey
The chapter delves into the critical issue of hate speech on Twitter and its potential to incite real-life violence. Traditional methods for detecting hateful users are limited, often treating hatefulness as a static attribute. This work introduces a spreading activation model to dynamically capture the spread of hate, considering both single and multiple forms of hate. By employing the spreading activation (SPA) algorithm and modifying the TopSPA model, the authors effectively represent the nuanced dynamics of hate speech dissemination. The study utilizes an ensemble-based classification model to identify hateful tweets and users, and runs experiments on a large Twitter dataset to validate the proposed methods. The results demonstrate the superior performance of the SPA and TopSPA models in predicting hateful users and capturing the spread of different hate forms. This comprehensive approach sheds light on the complex nature of hate speech propagation on social media platforms.
Abstract: Hate speech is a prevalent and pervasive phenomenon on social media platforms. Detecting hate speech and modelling its spread is a significant research problem with practical implications. Though hate speech detection has been an active area of research, hate spread modelling is still in its nascent stage. Prior works have analyzed the hateful users’ social network embedding, hateful user detection using belief propagation models, and the spread velocity of hateful content. However, these prior works fail to factor in the multiple hateful forms (such as hate against gender, race and ethnicity) and the temporal evolution of hate spread, limiting their applicability. We take a holistic approach wherein we model the spread of hate as a single form and the fine-granular spread of hateful forms. We extend the traditional spread and activation (SPA) model to capture the spread of hate and its forms. We use SPA to model the spread of hate as a single form, while TopSPA captures the spread of multiple hate forms. We also propose ways to detect hateful forms by using latent topics present in hateful content. We empirically demonstrate our approach on a dataset from Twitter that contains ample hate speech instances along with users labelled as hateful or not.
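As a rough illustration of the spreading-activation mechanics this abstract refers to, the sketch below propagates activation from seed users along a directed graph with per-hop decay. The toy graph, decay factor, and firing threshold are illustrative assumptions, not the authors' parameters.

```python
# Minimal spreading-activation (SPA) sketch on a directed follower graph.
# Activation flows from initially activated "seed" users to their
# neighbours, attenuated by a decay factor at each hop.

def spread_activation(graph, seeds, decay=0.5, rounds=2, threshold=0.1):
    """graph: dict node -> list of out-neighbours (edges point along spread).
    seeds: dict node -> initial activation. Returns final activation per node."""
    activation = {n: 0.0 for n in graph}
    activation.update(seeds)
    for _ in range(rounds):
        new = dict(activation)
        for node, out in graph.items():
            if activation[node] > threshold and out:
                share = decay * activation[node] / len(out)  # split among neighbours
                for nb in out:
                    new[nb] += share
        activation = new
    return activation

graph = {"a": ["b", "c"], "b": ["c"], "c": []}
result = spread_activation(graph, {"a": 1.0})
```

Nodes reached over more (or shorter) paths accumulate more activation, which is the intuition behind ranking users by how exposed they are to hateful sources.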
Indirect Causal Influence of a Single Bot on Opinion Dynamics Through a Simple Recommendation Algorithm
Niccolo Pescetelli, Daniel Barkoczi, Manuel Cebrian
This chapter investigates the subtle yet pervasive influence of bots on public opinion through the mediation of recommendation algorithms. By simulating a network of agents, the study demonstrates how a single bot can significantly shift the mean opinion of a population, even in the absence of direct interactions. The research highlights the indirect causal pathway between bots and human agents, emphasizing the role of recommender systems in amplifying bot influence. This work contributes to the ongoing debate on social media regulation and the potential manipulation of public opinion by automated agents.
Abstract: The ability of social and political bots to influence public opinion is often difficult to estimate. Recent studies found that hyper-partisan accounts often directly interact with already highly polarised users on Twitter and are unlikely to influence the general population’s average opinion. In this study, we suggest that social bots, trolls and zealots may influence people’s views not only via direct interactions (e.g. retweets, at-mentions and likes) but also via indirect causal pathways mediated by platforms’ content recommendation systems. Using a simple agent-based opinion-dynamics simulation, we isolate the effect of a single bot – representing only 1% of the population – on the average opinion of Bayesian agents when we remove all direct connections between the bot and human agents. We compare this experimental condition with an identical baseline condition where such a bot is absent. We used the same random seed in both simulations so that all other conditions remained identical. Results show that, even in the absence of direct connections, the mere presence of the bot is sufficient to shift the average population opinion. Furthermore, we observe that the presence of the bot significantly affects the opinion of almost all agents in the population. Overall, these findings offer a proof of concept that bots and hyperpartisan accounts can influence average population opinions not only by directly interacting with human accounts but also by shifting platforms’ recommendation engines’ internal representations.
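The indirect pathway can be illustrated with a deliberately minimal toy: agents never connect to the bot, but a hypothetical mean-pooling recommender aggregates a post pool that includes one fixed bot post. This is far simpler than the paper's Bayesian agents and recommender, and serves only to show the mechanism; the seed, learning rate, and pool design are assumptions.

```python
import random

def simulate(with_bot, n_agents=100, steps=50, lr=0.1, seed=42):
    """Same seed in both conditions -> identical initial opinions."""
    rng = random.Random(seed)
    opinions = [rng.uniform(-1, 1) for _ in range(n_agents)]
    for _ in range(steps):
        # The bot contributes one extreme post to the recommender's pool;
        # it never interacts with any agent directly.
        pool = opinions + ([1.0] if with_bot else [])
        rec = sum(pool) / len(pool)      # recommender surfaces the aggregate view
        opinions = [o + lr * (rec - o) for o in opinions]
    return sum(opinions) / n_agents

baseline = simulate(with_bot=False)
shifted = simulate(with_bot=True)
```

Without the bot, the recommended item is the population mean, so the mean is conserved; with the bot, the pool mean is nudged toward +1 every step, so the population mean drifts even though no agent ever sees the bot.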
Modelling the Effects of Self-learning and Social Influence on the Diversity of Knowledge
Tuan Pham
The chapter investigates how individuals acquire and diversify their knowledge through self-learning and social interactions. It introduces a model that simulates knowledge acquisition within networks, considering both static and dynamic processes. The study compares the effects of self-learning, where individuals discover new topics related to their existing knowledge, and social influence, where topics are learned through social connections. The results show that self-learning tends to increase the overall diversity of knowledge in the population, while social influence benefits individual diversity. The chapter also explores the impact of network modularity and initialization strategies on knowledge diversity. By examining different metrics, such as topic entropy and robustness, the study provides insights into how different learning strategies affect the distribution and retention of knowledge within networks. The findings have implications for understanding the balance between specialization and generalization in knowledge acquisition processes.
Abstract: This paper presents a computational model of acquiring new knowledge through self-learning (e.g. a Wikipedia “rabbit hole”) or social influence (e.g. recommendations through friends). This is set up in a bipartite network between a static social network (agents) and a static knowledge network (topics). For simplicity, the learning process is singly parameterized by \(\alpha\) as the probability of self-learning, leaving \(1-\alpha\) as the socially-influenced discovery probability. Numerical simulations show a tradeoff of \(\alpha\) on the diversity of knowledge when examined at the population level (e.g. number of distinct topics) and at the individual level (e.g. the average distance between topics for an agent), consistent across different intralayer configurations. In particular, higher values of \(\alpha\), or increased self-learning tendency, lead to higher population diversity and robustness. However, lower values of \(\alpha\), where learning/discovery is more easily influenced by social inputs, expand individual knowledge diversity more readily. These numerical results might provide some basic insights into how social influences can affect the diversity of human knowledge, especially in the age of information and social media.
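A minimal sketch of the two discovery modes follows, using assumed toy intralayer networks (a ring of agents and a ring of topics) rather than the paper's configurations; only the \(\alpha\) vs. \(1-\alpha\) branching is taken from the abstract.

```python
import random

def knowledge_sim(alpha, steps=300, seed=3):
    rng = random.Random(seed)
    n_agents, n_topics = 10, 50
    # Toy static intralayer networks: a ring of agents, a ring of topics.
    friends = {a: [(a - 1) % n_agents, (a + 1) % n_agents] for a in range(n_agents)}
    related = {t: [(t - 1) % n_topics, (t + 1) % n_topics] for t in range(n_topics)}
    known = {a: {rng.randrange(n_topics)} for a in range(n_agents)}  # one seed topic each
    for _ in range(steps):
        a = rng.randrange(n_agents)
        if rng.random() < alpha:          # self-learning: follow the topic network
            t = rng.choice(sorted(known[a]))
            known[a].add(rng.choice(related[t]))
        else:                             # social influence: adopt a friend's topic
            f = rng.choice(friends[a])
            known[a].add(rng.choice(sorted(known[f])))
    # Population-level diversity: number of distinct topics known by anyone.
    return len(set().union(*known.values()))

high = knowledge_sim(alpha=0.9)   # mostly self-learning
low = knowledge_sim(alpha=0.1)    # mostly social influence
```

Social copying only redistributes topics that someone already knows, while self-learning is the only move that adds new topics to the population, which is why high \(\alpha\) yields higher population-level diversity in this toy.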
Activator-Inhibitor Model for Describing Interactions Between Fake News and Their Corrections
Masaki Aida, Ayako Hashizume
The chapter delves into the intricate relationship between fake news and their corrections, using an activator-inhibitor model to describe how corrections can inadvertently amplify the spread of fake news. It begins by discussing the widespread impact of fake news on social media and the challenges of effectively countering it. The activator-inhibitor model is introduced as a novel approach to understand these dynamics, showing how corrections can create clusters of users strongly influenced by fake news even in unbiased network structures. Numerical experiments demonstrate the model's ability to replicate real-world phenomena, offering valuable insights for developing more effective strategies against fake news.
Abstract: The problem of fake news continues to worsen in today’s online social networks. Intuitively, it seems effective to send corrections as a countermeasure. In actuality, however, corrections can, ironically, strengthen attention to fake news, which worsens the situation. In this paper, we model the interaction between fake news and the corrections as a reaction-diffusion system and propose a framework that describes the mechanism that can make corrections increase attention to fake news. In this framework, the emergence of groups of users who believe in fake news is understood as a Turing pattern that appears in the activator-inhibitor model. Numerical calculations show that even if the network structure has no spatial bias, the interaction between fake news and the corrections creates a group that is strongly interested in discussing fake news.
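On a network, the reaction-diffusion setup amounts to replacing the spatial diffusion operator with a graph Laplacian. The sketch below integrates one such system with explicit Euler steps; the FitzHugh-Nagumo-style kinetics and all parameter values are illustrative stand-ins, not the authors' equations.

```python
import random

# Explicit-Euler sketch of an activator-inhibitor system on a ring network,
# with the graph Laplacian standing in for the spatial diffusion operator.

def laplacian_ring(n):
    """Combinatorial Laplacian of an n-node cycle, as a list of rows."""
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 2.0
        L[i][(i - 1) % n] = -1.0
        L[i][(i + 1) % n] = -1.0
    return L

def step(u, v, L, d_u=0.05, d_v=1.0, dt=0.01):
    """u: activator (attention to fake news), v: inhibitor (corrections).
    Turing patterns require the inhibitor to diffuse faster (d_v >> d_u)."""
    n = len(u)
    lap = lambda x, i: sum(L[i][j] * x[j] for j in range(n))
    u_new = [u[i] + dt * (u[i] - u[i] ** 3 - v[i] - d_u * lap(u, i)) for i in range(n)]
    v_new = [v[i] + dt * (u[i] - v[i] - d_v * lap(v, i)) for i in range(n)]
    return u_new, v_new

rng = random.Random(0)
n = 20
L = laplacian_ring(n)
u = [rng.uniform(-0.1, 0.1) for _ in range(n)]   # small random perturbation
v = [0.0] * n
for _ in range(200):
    u, v = step(u, v, L)
```

The point of the construction is that the network itself is perfectly homogeneous; any clustering that emerges comes from the activator-inhibitor interaction, mirroring the paper's claim about unbiased network structures.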
Love and Hate During Political Campaigns in Social Networks
Juan Carlos Losada, José Manuel Robles, Rosa María Benito, Rafael Caballero
The chapter delves into the analysis of sentiment polarization during political campaigns on social networks, with a focus on the 2016 USA elections on Twitter. It introduces a methodological framework for studying the level of polarization by examining user opinions towards candidates. The authors propose a novel approach to visualize and quantify the distribution of sentiments using love-hate diagrams, which offer insights into the relationship between supporters and detractors of competing candidates. The case study reveals intricate dynamics of public opinion, showing how sentiments towards one candidate do not necessarily translate into support for the other. This research highlights the importance of considering the interplay between sentiments towards multiple candidates to gain a deeper understanding of political debates on social media platforms.
Abstract: Social networks have become central for public debate. However, this debate occurs, in many cases, in a polarized and socially fragmented way. The importance of a specific type of polarization marked by the expression of emotions (affective polarization) has recently been pointed out. This work seeks to analyze affective polarization through the study of opinions shared by users of Twitter during competitive events such as a political election (United States electoral campaign, 2016). To operationalize affective polarization we propose, first, to consider the relationship between the user opinions about each contender. This approach allows us to describe the opinion of the supporters of one contender about the rest of them. Secondly, we combine sentiment analysis techniques, new diagrams that graphically describe the tension between positive and negative opinions, and numerical measurements obtained using techniques borrowed from physics, such as gravity centers, to identify and measure affective dispositions of users participating in this discussion. In summary, a way of measuring an emergent and central type of polarization in competitive contexts is proposed here.
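The "gravity center" measurement borrowed from physics amounts to a weighted centroid of user sentiment points. A minimal sketch, with made-up sentiment values (each point is a hypothetical user's mean sentiment toward the two candidates, weighted by activity):

```python
def gravity_center(points, weights=None):
    """Weighted centroid of a cloud of sentiment points.
    points: list of equal-length tuples; weights: one weight per point."""
    if weights is None:
        weights = [1.0] * len(points)
    total = sum(weights)
    return tuple(sum(w * p[i] for p, w in zip(points, weights)) / total
                 for i in range(len(points[0])))

# Three hypothetical users: (sentiment toward candidate A, toward candidate B),
# weighted by how many opinions each user posted.
center = gravity_center([(0.8, -0.2), (0.6, -0.6), (-0.4, 0.9)], [3, 1, 1])
```

Where the centroid falls in the love-hate plane summarizes whether the discussion leans toward one candidate, against both, or is torn between them.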
Homophily - a Driving Factor for Hate Speech on Twitter
Seema Nagar, Sameer Gupta, C. S. Bahushruth, Ferdous Ahmed Barbhuiya, Kuntal Dey
The chapter delves into the phenomenon of homophily, the tendency of like-minded individuals to connect, and its impact on the spread of hate speech on Twitter. It introduces a novel method for computing familiarity using graph embeddings, which captures the position of users in the social network more effectively than existing metrics. The study empirically demonstrates that homophily plays a crucial role in the generation and dissemination of hate speech. Moreover, it examines variations in homophily across different forms of hate, revealing stronger homophilic behavior among users associated with certain hateful forms, such as racism and xenophobia. The chapter concludes by emphasizing the importance of understanding homophily in enhancing research on hate speech propagation.
Abstract: In this paper, we hypothesize that homophily is a powerful driver in the generation of hate speech on Twitter. Prior works have shown the role of homophily in information diffusion, sustenance of online guilds and contagion in product adoption. We observe that familiarity computation techniques used in the literature, such as the presence of an edge and the number of mutual friends between a pair of users, have ample scope for improvement. We address the limitations of existing familiarity computation methods by employing a variational-graph-auto-encoder to capture a user’s position in the network and utilizing this representation to compute familiarity. We empirically demonstrate the presence of homophily on a dataset from Twitter. Further, we investigate how homophily varies with different hateful forms, such as hate manifesting in topics of gender, race, colour etc. Our results indicate higher homophily in users associating with topics of racism, sexism and nationalism.
Influencing the Influencers: Evaluating Person-to-Person Influence on Social Networks Using Granger Causality
Richard Kuzma, Iain J. Cruickshank, Kathleen M. Carley
The chapter delves into the analysis of social influence on Twitter, specifically focusing on the impact of key accounts (Alters) on former U.S. President Donald Trump's tweets. By employing Granger causality, the study measures the predictive power of Alters' tweets on Trump's, revealing the time delays and topics most affected. The methodology involves gathering tweets, identifying topics using machine learning, and constructing time series to test for causality. The results highlight significant variations in influence, with some Alters having immediate effects while others show delayed impacts. This work not only contributes to understanding misinformation spread but also opens avenues for marketing and influence analysis in social media.
Abstract: We introduce a novel method for analyzing person-to-person content influence on Twitter. Using an Ego-Alter framework and Granger Causality, we examine President Donald Trump (the Ego) and the people he retweets (Alters) as a case study. We find that each Alter has a different scope of influence across multiple topics, different magnitude of influence on a given topic, and the magnitude of a single Alter’s influence can vary across topics. This work is novel in its focus on person-to-person influence and content-based influence. Its impact is two-fold: (1) identifying “canaries in the coal mine” who could be observed by misinformation researchers or platforms to identify misinformation narratives before super-influencers spread them to large audiences, and (2) enabling digital marketing targeted toward upstream Alters of super-influencers.
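The core Granger test can be sketched in miniature: fit a restricted autoregression of the Ego series on its own past, then an unrestricted one that adds the Alter's past, and compare fits with an F-statistic. The synthetic data, lag-1 setup, and coefficients below are illustrative assumptions, not the paper's estimates.

```python
import random

def ols_rss(X, y):
    """Residual sum of squares of a least-squares fit via normal equations
    and Gaussian elimination; fine for a handful of predictors."""
    k = len(X[0])
    A = [[sum(X[i][a] * X[i][b] for i in range(len(X))) for b in range(k)]
         for a in range(k)]
    c = [sum(X[i][a] * y[i] for i in range(len(X))) for a in range(k)]
    for p in range(k):                     # forward elimination
        for r in range(p + 1, k):
            f = A[r][p] / A[p][p]
            for q in range(k):
                A[r][q] -= f * A[p][q]
            c[r] -= f * c[p]
    beta = [0.0] * k
    for p in reversed(range(k)):           # back-substitution
        beta[p] = (c[p] - sum(A[p][q] * beta[q] for q in range(p + 1, k))) / A[p][p]
    return sum((y[i] - sum(X[i][q] * beta[q] for q in range(k))) ** 2
               for i in range(len(X)))

rng = random.Random(0)
n = 300
alter = [rng.gauss(0, 1) for _ in range(n)]
ego = [0.0]
for t in range(1, n):                      # Ego depends on the Alter's past
    ego.append(0.5 * ego[t - 1] + 0.8 * alter[t - 1] + rng.gauss(0, 0.3))

y = ego[1:]
X_restricted = [[1.0, ego[t - 1]] for t in range(1, n)]          # Ego's past only
X_full = [[1.0, ego[t - 1], alter[t - 1]] for t in range(1, n)]  # + Alter's past
rss_r, rss_f = ols_rss(X_restricted, y), ols_rss(X_full, y)
# F-statistic for the one added regressor (q = 1, n - 1 observations, 3 params)
F = (rss_r - rss_f) / (rss_f / (n - 1 - 3))
```

A large F means the Alter's past meaningfully improves prediction of the Ego, i.e. the Alter "Granger-causes" the Ego's activity on that topic; repeating the test at multiple lags recovers the time delays the chapter discusses.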
How the Far-Right Polarises Twitter: ‘Hashjacking’ as a Disinformation Strategy in Times of COVID-19
Philipp Darius, Fabian Stephany
The chapter delves into the strategic use of 'hashjacking' by far-right actors, particularly the German party AfD, to polarize Twitter debates during the COVID-19 pandemic. It examines how these actors hijack hashtags to influence public opinion and leverage their content. The study compares polarization strategies between 2018 and 2020, revealing a stable and high level of polarization around AfD hashtags. Additionally, it highlights the significant activity of far-right partisans in debates related to COVID-19 hashtags, driven by a small set of very active users. The analysis underscores the importance of understanding these dynamics for both users and platform providers in combating the spread of misinformation.
Abstract: Twitter influences political debates. Phenomena like fake news and hate speech show that political discourses on social platforms can become strongly polarised by algorithmic enforcement of selective perception. Some political actors actively employ strategies to facilitate polarisation on Twitter, as past contributions show, via strategies of ‘hashjacking’ (the use of someone else’s hashtag in order to promote one’s own social media agenda). For the example of COVID-19 related hashtags and their retweet networks, we examine the case of partisan accounts of the German far-right party Alternative für Deutschland (AfD) and their potential use of ‘hashjacking’ in May 2020. Our findings indicate that polarisation of political party hashtags has not changed significantly in the last two years. We see that right-wing partisans are actively and effectively polarising the discourse by ‘hashjacking’ COVID-19 related hashtags, like #CoronaVirusDE or #FlattenTheCurve. This polarisation strategy is dominated by the activity of a limited set of heavy users. The results underline the necessity of understanding the dynamics of discourse polarisation as an active political communication strategy of the far-right, driven by only a handful of very active accounts.
Models of Influence Spreading on Social Networks
Vesa Kuikka, Minh An Antti Pham
The chapter delves into the intricate models of influence spreading on social networks, categorizing them into simple and complex contagion processes. It introduces analytical and simulation methods to implement these models, highlighting the flexibility and computational requirements of each approach. The study also explores the empirical application of these models on well-known social networks, such as Zachary's Karate Club and a Facebook network, showcasing the distinct outcomes and insights derived from different models. The chapter concludes by emphasizing the importance of network structures in influencing spreading processes and the need for further research to accurately model various social interactions.
Abstract: Complex contagion (CC) and simple contagion (SC) models have been used to describe influence and information spreading on social networks. We investigate four different models, two from each category, and discuss the interplay between the characteristics of social interactions and different variants of the models. Different properties of social networks are discovered by calculating centrality measures with the proposed spreading models. We demonstrate the results with Zachary’s Karate Club social network and a larger Facebook social media network.
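Two canonical mechanics are often used as simple- and complex-contagion building blocks; the chapter's four models may differ in detail, so the sketch below is only meant to show the SC/CC distinction: independent cascade needs one successful exposure, while a threshold rule needs a critical fraction of active neighbours.

```python
import random

def independent_cascade(graph, seeds, p, rng):
    """Simple contagion: each newly active node gets one chance, with
    probability p, to activate each inactive neighbour."""
    active, frontier = set(seeds), set(seeds)
    while frontier:
        nxt = set()
        for u in frontier:
            for v in graph[u]:
                if v not in active and rng.random() < p:
                    nxt.add(v)
        active |= nxt
        frontier = nxt
    return active

def threshold_spread(graph, seeds, theta):
    """Complex contagion: a node activates once at least a fraction theta
    of its neighbours are active (deterministic linear threshold)."""
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for v in graph:
            if v not in active and graph[v]:
                frac = sum(u in active for u in graph[v]) / len(graph[v])
                if frac >= theta:
                    active.add(v)
                    changed = True
    return active

graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}
cc_one_seed = threshold_spread(graph, {0}, theta=0.5)   # stalls: no reinforcement
cc_two_seeds = threshold_spread(graph, {0, 1}, theta=0.5)
```

On the same small graph a single seed saturates under simple contagion with high p, but the threshold model stalls until reinforcement from a second seed arrives, which is exactly the qualitative SC/CC difference the abstract points to.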
Bubble Effect Induced by Recommendation Systems in a Simple Social Media Model
Franco Bagnoli, Guido de Bonfioli Cavalcabo, Benedetto Casu, Andrea Guazzini
The chapter delves into the consequences of recommendation systems in social media, particularly the 'bubble effect' and the creation of artificial communities. By employing a simple social media model, the authors simulate user interactions under the influence of a recommendation system. They explore how these systems can shape user opinions and form communities that may not reflect genuine affinities. The study highlights the potential for recommendation systems to create echo chambers and intellectual isolation, even from initial random interactions. The findings offer valuable insights into the dynamics of social media and the impact of recommendation algorithms on user behavior and community formation.
Abstract: We investigate the influence of a recommendation system on the formation of communities in a simulated social media system. When the recommendation system is initialized with a limited number of random posts and there is strong selection, correlated users, and therefore communities, arise that are not present when examining the internal factors of the participating people. This happens even when people are immutable but are exposed only to messages coming from people who are correlated to them, according to past messages. When people are free to evolve, reducing their cognitive dissonance, truly isolated communities appear, causing the filter bubble effect.
Maximum Entropy Networks Applied on Twitter Disinformation Datasets
Bart De Clerck, Filip Van Utterbeeck, Julien Petit, Ben Lauwens, Wim Mees, Luis E. C. Rocha
The chapter delves into the use of maximum entropy networks to analyze disinformation spread on Twitter, focusing on datasets from the Twitter information operations report. It introduces the concept of disinformation and misinformation, emphasizing their impact on democratic elections and public health crises like COVID-19. The study applies a maximum entropy network model to identify statistically significant interactions between users, validating the technique against existing datasets. The method is shown to effectively reduce noise in interaction networks, leading to more accurate community detection. The chapter also discusses the challenges of repeatability in social media research due to the dynamic nature of platforms like Twitter. Despite limitations, the study underscores the value of the Twitter information operations report datasets in understanding disinformation operations. The work concludes by suggesting future directions, including the integration of content analysis and the need for consistent data sharing practices to advance research in this field.
Abstract: Identifying and detecting disinformation is a major challenge. Twitter provides datasets of disinformation campaigns through their information operations report. We compare the results of community detection using a classical network representation with a maximum entropy network model. We conclude that the latter method is useful to identify the most significant interactions in the disinformation network over multiple datasets. We also apply the method to a disinformation dataset related to COVID-19, which allows us to assess the repeatability of studies on disinformation datasets.
Community Deception in Networks: Where We Are and Where We Should Go
Valeria Fionda, Giuseppe Pirrò
The chapter delves into the intricate world of community deception in networks, where the goal is to design algorithms that can hide community structures to evade detection by community detection algorithms. It begins by introducing the concept of community detection and the need for community deception to protect user privacy. The chapter then systematically reviews existing community deception techniques, categorizing them into modularity-based, safeness-based, and permanence-based approaches. Each technique is evaluated on a variety of real-world networks to assess its effectiveness in hiding communities. The chapter also highlights the need for future research in areas such as attribute-based networks and embedding-based community detection. Additionally, it provides a Python library containing state-of-the-art community deception techniques, making it a valuable resource for practitioners and researchers alike.
Abstract: Community deception tackles the following problem: given a target community \(\mathcal{C}\) inside a network \(G\) and a budget of updates \(\beta\) (e.g., edge removals and additions), what is the best way (i.e., optimization of some function \(\phi_{G}(\mathcal{C})\)) to perform such updates so that \(\mathcal{C}\) can evade a detector D (i.e., a community detection algorithm)? This paper aims at: (i) presenting an analysis of the state-of-the-art deception techniques; (ii) evaluating state-of-the-art deception techniques; (iii) making available a library of techniques to practitioners and researchers.
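The problem statement can be made concrete with a naive budget-\(\beta\) heuristic (a hypothetical illustration, not one of the surveyed modularity-, safeness-, or permanence-based algorithms): spend updates deleting edges internal to the target community, then adding edges that point outward.

```python
def deceive(edges, nodes, community, beta):
    """edges: set of frozenset node pairs; returns an updated edge set
    after at most beta edge updates aimed at hiding `community`."""
    edges = set(edges)
    budget = beta
    # 1) Delete edges internal to the target community (weakens its cohesion).
    for e in sorted(edges, key=sorted):
        if budget == 0:
            break
        if all(v in community for v in e):
            edges.discard(e)
            budget -= 1
    # 2) Spend any leftover budget adding edges to outside nodes
    #    (blends members into the rest of the network).
    outside = sorted(set(nodes) - community)
    for u in sorted(community):
        for w in outside:
            if budget == 0:
                break
            e = frozenset((u, w))
            if e not in edges:
                edges.add(e)
                budget -= 1
    return edges

edges = {frozenset(e) for e in [(1, 2), (1, 3), (2, 3), (3, 4)]}
hidden = deceive(edges, nodes={1, 2, 3, 4}, community={1, 2, 3}, beta=2)
```

Real deception techniques replace the greedy choices above with updates that explicitly optimize a function \(\phi_{G}(\mathcal{C})\) such as modularity loss, but the budgeted edit structure is the same.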
Mitigating the Backfire Effect Using Pacing and Leading
Qi Yang, Khizar Qureshi, Tauhid Zaman
The chapter delves into the challenges of persuasion in online social networks, particularly the backfire effect and echo-chambers. It introduces a field experiment conducted on Twitter to test two persuasion methods: pacing and leading, and social contact. Pacing and leading involves the arguer initially agreeing with the audience's opinion before gradually shifting to the target position. Social contact is established through liking the audience's posts. The experiment reveals that combining pacing and leading with social contact is most effective in mitigating the backfire effect, especially in the moderation phase. This approach offers promising strategies to overcome the obstacles posed by echo-chambers and the backfire effect in online persuasion.
Abstract: Online social networks create echo-chambers where people are infrequently exposed to opposing opinions. Even if such exposure occurs, the persuasive effect may be minimal or nonexistent. Recent studies have shown that exposure to opposing opinions causes a backfire effect, where people become more steadfast in their original beliefs. We conducted a longitudinal field experiment on Twitter to test methods that mitigate the backfire effect while exposing people to opposing opinions. Our subjects were Twitter users with anti-immigration sentiment. The backfire effect was defined as an increase in the usage frequency of extreme anti-immigration language in the subjects’ posts. We used automated Twitter accounts, or bots, to apply different treatments to the subjects. One bot posted only pro-immigration content, which we refer to as arguing. Another bot initially posted anti-immigration content, then gradually posted more pro-immigration content, which we refer to as pacing and leading. We also applied a contact treatment in conjunction with the messaging based methods, where the bots liked the subjects’ posts. We found that the most effective treatment was a combination of pacing and leading with contact. The least effective treatment was arguing with contact. In fact, arguing with contact consistently showed a backfire effect relative to a control group. These findings have many limitations, but they still have important implications for the study of political polarization, the backfire effect, and persuasion in online social networks.
Exploring Bias and Information Bubbles in YouTube’s Video Recommendation Networks
Baris Kirdemir, Nitin Agarwal
The chapter delves into the detection, understanding, and mitigation of bias in YouTube’s video recommendation networks. It explores how these biases can lead to echo chambers, polarization, and information bubbles, impacting societal discourse. The study uses network analysis and stochastic methods to uncover the structural properties of these recommendation networks, highlighting the influence of certain channels and the formation of content communities. The findings reveal a long-tail distribution of recommended videos, with a few channels dominating the recommendations. The analysis also demonstrates how these recommendations can create closely-knit clusters, reinforcing extremist and polarizing content. The chapter concludes by emphasizing the need for further research and the potential of network analysis in auditing and mitigating biased recommendations.
Abstract: This study audits the structural and emergent properties of YouTube’s video recommendations, with an emphasis on the black-box evaluation of recommender bias, confinement, and formation of information bubbles in the form of content communities. Adopting complex networks and graphical probabilistic approaches, the results of our analysis of 6,068,057 video recommendations made by the YouTube algorithm reveal strong indicators of recommendation bias leading to the formation of closely-knit and confined content communities. Also, our qualitative and quantitative exploration of the prominent content and discourse in each community further uncovered the formation of narrative-specific clusters made by the recommender system we examined.
- Title
- Complex Networks & Their Applications X
- Editors
Rosa Maria Benito
Chantal Cherifi
Hocine Cherifi
Esteban Moro
Luis M. Rocha
Marta Sales-Pardo
- Copyright Year
- 2022
- Publisher
- Springer International Publishing
- Electronic ISBN
- 978-3-030-93413-2
- Print ISBN
- 978-3-030-93412-5
- DOI
- https://doi.org/10.1007/978-3-030-93413-2