Introduction
Background and literature review
Concepts of information, misinformation and disinformation
How do they differ?
- Misinformation is false or inaccurate information that is spread regardless of whether there is an intent to deceive.
- Disinformation is false information that is intended to mislead, especially propaganda issued by a government organization to a rival power or the media.
- Propaganda is defined as information, especially of a biased or misleading nature, used to promote a political cause or point of view.
Conceptual explanation of the distinguishing features
- Disinformation is often the product of a carefully planned and technically sophisticated deceit process.
- Disinformation may not come directly from the source that intends to deceive.
- Disinformation is often written or verbal communication, and may include doctored photographs, fake videos, etc.
- Disinformation may be distributed very widely or targeted at specific people or organizations.
- The intended targets are often a person or a group of people.
Misinformation
Countering the spread of misinformation
- The strategies proposed in [6] for effective countermeasures include:
- Providing a credible alternative explanation for the misinformation.
- Repeating retractions to reduce the effect of the misinformation without repeating the misinformation itself.
- Issuing explicit warnings before mentioning the misinformation, so that the misinformation is not reinforced.
- Framing countermeasures so that they affirm the worldview of the receiver.
- Keeping retractions simple and brief, so that they are cognitively more attractive than the corresponding misinformation.
Analysis of work done so far
A generic framework for detection of spread of misinformation
Identifying cues to deception using cognitive psychology
Research design and methodology
Generic framework for detection of spread of misinformation
Credibility analysis of Twitter
Twitter as a social filter
Twitter during critical events
Spread of rumours and influence in Twitter
Orchestrated semantic attacks in Twitter
Analysis of measuring credibility of tweets
Criteria | Metrics | Authors | Accuracy | Complexity | Usefulness for fast detection | Remarks |
---|---|---|---|---|---|---|
Consistency of message | Retweets, mentions | | Retweets are better than mentions | No | Yes | |
Coherency of message | Questions, affirms, denials, number of words, pronouns, hashtags, URLs, exclamation marks, negative and positive sentiments, NLP techniques | | Decision tree algorithms with a combination of various factors are accurate | Yes | Computationally intensive, requires ground truth | Content analysis required. Metrics are an indirect measure |
Credibility of source | Tweets, retweets, mentions, in-degree, user name, image, followers, followees, age | | Retweets are more accurate | No | Yes | |
General acceptability | Retweets | | Good | No | Yes | |
- Automated means of detecting misinformation in tweets are accurate, but computationally intensive, and manual inputs are required.
- Retweets form a unique mechanism in Twitter for studying information propagation and segregating misinformation.
- Analysing information propagation using models from Computer Science and concepts from Cognitive Psychology would provide efficient solutions for detecting and countering the spread of misinformation.
Methods
Data sets
- Egypt. Heavy political unrest and massive protests spread in Egypt during Aug-Sep 2013. News related to these events was captured using the keyword 'egypt' for the period from 13 Aug 2013 to 23 Sep 2013.
- Syria. The use of chemical weapons in Syria in Aug 2013 attracted worldwide criticism. The reflection of these events in Twitter was tracked using the keyword 'syria' for the period from 25 Aug 2013 to 21 Sep 2013.
Data set | #Tweets | #Retweets | #Senders | #Re-tweeters | Period |
---|---|---|---|---|---|
Egypt | 141682 | 51723 | 10850 | 27532 | 13 Aug 2013 to 23 Sep 2013 |
Syria | 104867 | 44708 | 11452 | 25415 | 25 Aug 2013 to 21 Sep 2013 |
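Both collections above were built by keyword filtering over a fixed capture window. A minimal sketch of that filtering step, assuming the tweets are available locally as dicts with `text` and `created_at` fields (the field names and toy records are illustrative, not the exact Twitter API schema):

```python
from datetime import datetime

def filter_tweets(tweets, keyword, start, end):
    """Keep tweets that mention the keyword inside the capture window."""
    return [t for t in tweets
            if keyword in t["text"].lower() and start <= t["created_at"] <= end]

# Toy records standing in for the 'egypt' capture (13 Aug to 23 Sep 2013).
tweets = [
    {"text": "Protests continue in Egypt", "created_at": datetime(2013, 8, 20)},
    {"text": "Unrelated post", "created_at": datetime(2013, 8, 21)},
    {"text": "Egypt unrest update", "created_at": datetime(2013, 10, 1)},  # outside window
]
egypt = filter_tweets(tweets, "egypt", datetime(2013, 8, 13), datetime(2013, 9, 23))
print(len(egypt))  # 1 (only the in-window tweet mentioning the keyword)
```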
Methodology
Step 1: Consider only the retweets
Step 2: Evaluate the source of retweets
Step 3: Construct a retweet graph
Step 4: Evaluate the general acceptability of the tweet
Step 5: Content analysis of the finally filtered items
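The first three steps can be sketched in pure Python, assuming the retweet records have already been reduced to (retweeter, original author) pairs; the user names below are toy values:

```python
from collections import defaultdict

# Toy retweet records: only retweets are considered (Step 1).
# Each pair is (retweeter, original_author).
retweets = [("u2", "u1"), ("u3", "u1"), ("u4", "u2"), ("u5", "u1")]

# Retweet graph as adjacency lists, edge from retweeter to author (Step 3).
graph = defaultdict(list)
for retweeter, author in retweets:
    graph[retweeter].append(author)

# A crude source score: how often each user's tweets are retweeted (Step 2).
times_retweeted = defaultdict(int)
for _, author in retweets:
    times_retweeted[author] += 1

print(times_retweeted["u1"])  # 3
```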
Proposed framework for speedy detection of misinformation in Twitter
Summary of the steps involved
- Identify the original source of information (tweets) in the network.
- Evolve a methodology to rate the credibility of the source based on the acceptance of the tweets by the receivers.
- Construct a retweet graph to evaluate and measure the 'misinformation content' of a tweet, and determine its credibility by the level of its acceptance among all the affected users using the Gini coefficient.
- Segregate the possible sources of misinformation as non-credible users, along with the corresponding tweets.
- Evaluate the general acceptance of tweets from credible users using the PageRank algorithm on the retweet graph.
- Present the credibility of the source and the general acceptance of the tweet to the user, to help them evaluate the information content of the tweet.
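The two measures used above, the Gini coefficient and PageRank, can be sketched in pure Python. The retweet graph below is a toy example, and the simple power-iteration PageRank is an illustrative stand-in for a production implementation:

```python
def gini(values):
    """Gini coefficient of non-negative values (0 = acceptance evenly spread)."""
    xs = sorted(values)
    n, total = len(xs), sum(xs)
    if n == 0 or total == 0:
        return 0.0
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * total) - (n + 1) / n

def pagerank(edges, d=0.85, iters=50):
    """Plain power-iteration PageRank over (src, dst) edges."""
    nodes = {u for e in edges for u in e}
    out = {u: [v for s, v in edges if s == u] for u in nodes}
    rank = {u: 1 / len(nodes) for u in nodes}
    for _ in range(iters):
        new = {u: (1 - d) / len(nodes) for u in nodes}
        for u in nodes:
            if out[u]:
                for v in out[u]:
                    new[v] += d * rank[u] / len(out[u])
            else:  # dangling node: spread its rank evenly
                for v in nodes:
                    new[v] += d * rank[u] / len(nodes)
        rank = new
    return rank

# Toy retweet graph: edge from retweeter to the tweet's author.
edges = [("u2", "u1"), ("u3", "u1"), ("u5", "u1"), ("u4", "u2")]
ranks = pagerank(edges)  # u1, the most retweeted author, ranks highest

# How unevenly retweet attention is spread across users (by in-degree).
indeg = {u: 0 for e in edges for u in e}
for _, v in edges:
    indeg[v] += 1
print(round(gini(list(indeg.values())), 2))  # 0.7
```

A high Gini here means retweet attention concentrates on few authors; combined with the PageRank scores, this gives the acceptance-based credibility signals the framework presents to the user.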