Video Verification in the Fake News Era
- 2019
- Book
- Edited by
- Vasileios Mezaris
- Assist. Prof. Lyndon Nixon
- Symeon Papadopoulos
- Denis Teyssou
- Publisher
- Springer International Publishing
About this book
This book presents the latest technological advances and practical tools for discovering, verifying and visualizing social media video content, and for managing related rights. The digital media revolution is bringing breaking news to online video platforms, and news organizations often rely on user-generated recordings of new and developing events shared on social media to illustrate a story. However, video can also deceive. In today's "fake news" era, access to increasingly sophisticated editing and content management tools, and the ease with which false information spreads through electronic networks, require the entire news and media industry to carefully verify third-party content before publishing it. As such, this book is of interest to computer scientists and researchers, news and media professionals, as well as policymakers and data-savvy media consumers.
Table of contents
Frontmatter

Problem Statement

Frontmatter
Chapter 1. Video Verification: Motivation and Requirements
Denis Teyssou, Jochen Spangenberg
Abstract: The production and spreading of manipulated videos have been on the rise over the past years and are expected to increase further. Manipulating videos has become easier from a technological perspective and can be done with freely available tools that require less expert knowledge and fewer resources than in the past. All this poses new challenges for those who aim to tackle the spreading of false, manipulated or misleading video content. This chapter covers many of the aspects raised above. It deals with the motivations of those involved in video verification, showcases the respective requirements, and highlights the importance and relevance of tackling disinformation on social networks. Furthermore, an overview of the state of the art of available techniques and technologies is provided. The chapter then describes the emergence of new threats such as so-called ‘deep fakes’ created with the help of artificial intelligence. Finally, we formulate an empirical typology of false videos spreading online.
Technologies

Frontmatter
Chapter 2. Real-Time Story Detection and Video Retrieval from Social Media Streams
Lyndon Nixon, Daniel Fischl, Arno Scharl
Abstract: This chapter introduces two key tools for journalists. Before being able to initiate the verification of an online video, they need to determine the news story that is the subject of the video, and they need to find candidate online videos around that story. To do this, we have assessed prior research in the area of topic detection and developed a keyword graph-based method for news story discovery from Twitter streams. We have then developed a technique for selecting online videos that are candidates for news stories, using the detected stories to form queries against social networks. This enables relevant information retrieval at Web scale for videos associated with a news story. We present these techniques and the results of their evaluations, based on observation of the detected stories and of the news videos presented for those stories, demonstrating state-of-the-art precision and recall for journalists to quickly identify videos for verification and reuse.
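The keyword graph idea can be illustrated with a minimal sketch. All details here are assumptions for exposition, not the chapter's actual algorithm: keywords are taken as already extracted per tweet, co-occurrence counts form graph edges, and connected components of the pruned graph stand in for detected stories.

```python
from collections import defaultdict
from itertools import combinations

def detect_stories(tweet_keywords, min_cooccurrence=2):
    """Group keywords into candidate news stories via a co-occurrence graph.

    tweet_keywords: list of keyword lists, one per tweet (assumed
    pre-extracted; real systems would also handle entities, time decay, etc.).
    """
    # Count how many tweets each keyword pair co-occurs in.
    edge_counts = defaultdict(int)
    for keywords in tweet_keywords:
        for a, b in combinations(sorted(set(keywords)), 2):
            edge_counts[(a, b)] += 1

    # Keep only edges observed in at least `min_cooccurrence` tweets.
    graph = defaultdict(set)
    for (a, b), n in edge_counts.items():
        if n >= min_cooccurrence:
            graph[a].add(b)
            graph[b].add(a)

    # Each connected component of the pruned graph is a candidate story.
    seen, stories = set(), []
    for node in list(graph):
        if node in seen:
            continue
        stack, component = [node], set()
        while stack:
            k = stack.pop()
            if k in component:
                continue
            component.add(k)
            stack.extend(graph[k] - component)
        seen |= component
        stories.append(component)
    return stories
```

The keyword sets returned per story could then be turned into search queries against social video platforms, in the spirit of the retrieval step described above.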
Chapter 3. Video Fragmentation and Reverse Search on the Web
Evlampios Apostolidis, Konstantinos Apostolidis, Ioannis Patras, Vasileios Mezaris
Abstract: This chapter focuses on methods and tools for video fragmentation and reverse search on the web. These technologies can assist journalists when they are dealing with fake news—which nowadays spreads rapidly via social media platforms—that relies on the reuse of a previously posted video from a past event with the intention to mislead viewers about a contemporary event. The fragmentation of a video into visually and temporally coherent parts and the extraction of a representative keyframe for each defined fragment enable the provision of a complete and concise keyframe-based summary of the video. Contrary to straightforward approaches that sample video frames with a constant step, the summary generated through video fragmentation and keyframe extraction is considerably more effective for discovering the video content and performing a fragment-level search for the video on the web. The chapter starts by explaining the nature and characteristics of this type of reuse-based fake news in its introductory part, and continues with an overview of existing approaches for the temporal fragmentation of single-shot videos into sub-shots (the most appropriate level of temporal granularity when dealing with user-generated videos) and of tools for performing reverse search of a video on the web. Subsequently, it describes two state-of-the-art methods for video sub-shot fragmentation—one relying on the assessment of visual coherence over sequences of frames, and another based on the identification of camera activity during the video recording—and presents the InVID web application that enables fine-grained (fragment-level) reverse search for near-duplicates of a given video on the web. The chapter then reports the findings of a series of experimental evaluations of the efficiency of the above-mentioned technologies, which indicate their competence to generate a concise and complete keyframe-based summary of the video content, and the usefulness of this fragment-level representation for fine-grained reverse video search on the web. Finally, it draws conclusions about the effectiveness of the presented technologies and outlines our future plans for further advancing them.
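The visual-coherence idea behind sub-shot fragmentation can be sketched under heavily simplifying assumptions (frames reduced to flat lists of grey-level pixel values, coherence measured by histogram intersection between consecutive frames, a fixed threshold starting a new sub-shot). The chapter's actual methods are considerably more sophisticated; this only conveys the principle:

```python
def histogram(frame, bins=4, max_val=256):
    """Normalized grey-level histogram of a frame (a list of pixel values)."""
    h = [0] * bins
    step = max_val // bins
    for px in frame:
        h[min(px // step, bins - 1)] += 1
    total = len(frame)
    return [c / total for c in h]

def intersection(h1, h2):
    """Histogram intersection: 1.0 for identical, 0.0 for disjoint histograms."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def fragment(frames, threshold=0.5):
    """Return the start indices of sub-shots.

    A new sub-shot begins whenever visual coherence (histogram
    intersection) with the previous frame drops below `threshold`.
    """
    boundaries = [0]
    prev = histogram(frames[0])
    for i in range(1, len(frames)):
        cur = histogram(frames[i])
        if intersection(prev, cur) < threshold:
            boundaries.append(i)
        prev = cur
    return boundaries
```

A representative keyframe per sub-shot (e.g. the middle frame of each fragment) would then be the unit submitted to reverse image search.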
Chapter 4. Finding Near-Duplicate Videos in Large-Scale Collections
Giorgos Kordopatis-Zilos, Symeon Papadopoulos, Ioannis Patras, Ioannis Kompatsiaris
Abstract: This chapter discusses the problem of Near-Duplicate Video Retrieval (NDVR). The main objective of a typical NDVR approach is: given a query video, retrieve all near-duplicate videos in a video repository and rank them based on their similarity to the query. Several approaches have been introduced in the literature, which can be roughly classified into three categories based on the level of video matching: (i) video-level, (ii) frame-level, and (iii) filter-and-refine matching. Two methods based on video-level matching are presented in this chapter. The first is an unsupervised scheme that relies on a modified Bag-of-Words (BoW) video representation. The second is a supervised method based on Deep Metric Learning (DML). For both methods, features are extracted from the intermediate layers of Convolutional Neural Networks and leveraged as frame descriptors, since they offer a compact and informative image representation and lead to increased system efficiency. Extensive evaluation has been conducted on publicly available benchmark datasets, and the presented methods are compared with state-of-the-art approaches, achieving the best results in all evaluation setups.
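A minimal illustration of video-level matching, assuming frame descriptors are already available (e.g. from a CNN's intermediate layers): average them into one vector per video and rank repository videos by cosine similarity to the query. This is a generic sketch of the video-level scheme, not the chapter's BoW or DML methods:

```python
import math

def video_vector(frame_descriptors):
    """Aggregate per-frame descriptors into one video-level vector (mean pooling)."""
    n = len(frame_descriptors)
    dim = len(frame_descriptors[0])
    return [sum(f[d] for f in frame_descriptors) / n for d in range(dim)]

def cosine(u, v):
    """Cosine similarity between two vectors; 0.0 if either has zero norm."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def rank_near_duplicates(query_frames, repository):
    """Rank repository videos by similarity to the query video.

    repository: dict mapping video name -> list of frame descriptors.
    Returns (name, similarity) pairs, most similar first.
    """
    q = video_vector(query_frames)
    scored = [(name, cosine(q, video_vector(frames)))
              for name, frames in repository.items()]
    return sorted(scored, key=lambda t: t[1], reverse=True)
```

In practice, the video vectors for the repository would be precomputed and indexed, so a query only costs one aggregation plus similarity lookups.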
Chapter 5. Finding Semantically Related Videos in Closed Collections
Foteini Markatopoulou, Markos Zampoglou, Evlampios Apostolidis, Symeon Papadopoulos, Vasileios Mezaris, Ioannis Patras, Ioannis Kompatsiaris
Abstract: Modern newsroom tools offer advanced functionality for automatic and semi-automatic content collection from the web and social media sources to accompany news stories. However, the content collected in this way often tends to be unstructured and may include irrelevant items. An important step in the verification process is to organize this content, both with respect to what it shows and with respect to its origin. This chapter presents our efforts in this direction, which resulted in two components. One aims to detect semantic concepts in video shots, to help annotate and organize content collections. We implement a system based on deep learning, featuring a number of advances and adaptations of existing algorithms to increase performance for the task. The other component aims to detect logos in videos in order to identify their provenance. We present our progress from a keypoint-based detection system to a system based on deep learning.
Chapter 6. Detecting Manipulations in Video
Grégoire Mercier, Foteini Markatopoulou, Roger Cozien, Markos Zampoglou, Evlampios Apostolidis, Alexandros I. Metsai, Symeon Papadopoulos, Vasileios Mezaris, Ioannis Patras, Ioannis Kompatsiaris
Abstract: This chapter presents the techniques researched and developed within InVID for the forensic analysis of videos and the detection and localization of forgeries within User-Generated Videos (UGVs). Following an overview of state-of-the-art video tampering detection techniques, we observed that the bulk of current research is mainly dedicated to frame-based tampering analysis or encoding-based inconsistency characterization. We built upon this existing research by designing forensic filters aimed at highlighting any traces left behind by video tampering, with a focus on identifying disruptions in the temporal aspects of a video. As in many other data analysis domains, deep neural networks show very promising results in tampering detection as well. Thus, following the development of a number of analysis filters aimed at helping human users highlight inconsistencies in video content, we proceeded to develop a deep learning approach that analyzes the outputs of these forensic filters to automatically detect tampered videos. In this chapter, we present our survey of the state of the art with respect to its relevance to the goals of InVID, the forensic filters we developed and their potential role in localizing video forgeries, as well as our deep learning approach for automatic tampering detection. We present and analyze experimental results on benchmark and real-world data. We observe that the proposed method yields promising results compared to the state of the art, especially with respect to the algorithm's ability to generalize to unknown data taken from the real world. We conclude with the research directions that our work in InVID has opened for the future.
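As a toy example of a temporal filter in the spirit described above (an assumption for exposition, not one of the chapter's actual forensic filters): flag positions where the inter-frame difference deviates strongly from the video's typical motion level, which can hint at frame insertion or deletion.

```python
def frame_diffs(frames):
    """Mean absolute pixel difference between each pair of consecutive frames."""
    diffs = []
    for a, b in zip(frames, frames[1:]):
        diffs.append(sum(abs(x - y) for x, y in zip(a, b)) / len(a))
    return diffs

def flag_temporal_anomalies(frames, factor=3.0):
    """Return frame indices where motion jumps well above the video's average.

    A difference exceeding `factor` times the mean difference is treated
    as a temporal discontinuity worth inspecting (a deliberately crude
    stand-in for a real tampering-detection filter).
    """
    diffs = frame_diffs(frames)
    if not diffs:
        return []
    mean = sum(diffs) / len(diffs)
    return [i + 1 for i, d in enumerate(diffs) if mean > 0 and d > factor * mean]
```

A learned detector, as described in the chapter, would consume the outputs of many such filters rather than a single hand-set threshold.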
Chapter 7. Verification of Web Videos Through Analysis of Their Online Context
Olga Papadopoulou, Markos Zampoglou, Symeon Papadopoulos, Ioannis Kompatsiaris
Abstract: This chapter discusses the problem of analyzing the online ‘context’ of User-Generated Videos (UGVs) with the goal of extracting clues that help analysts with the video verification process. By video context, we refer to information surrounding the video, i.e. information about the video itself, user comments below the video, information about the video publisher, and any dissemination of the video through other video platforms or social media. As a starting point, we present the Fake Video Corpus, a dataset of debunked and verified UGVs that aims to serve as a reference for qualitative and quantitative analysis and evaluation. Next, we present a web-based service, called Context Aggregation and Analysis, which supports the collection, filtering and mining of contextual pieces of information that can serve as verification signals. This service aims to assist Internet users in their video verification efforts.
Chapter 8. Copyright Management of User-Generated Video for Journalistic Reuse
Roberto García, Maria Teixidor, Paloma de Barrón, Denis Teyssou, Rosa Gil, Albert Berga, Gerard Rovira
Abstract: To review the copyright scope of reusing User-Generated Videos, usually found on social media, for journalistic purposes, our starting point is an analysis of current practices in the news industry. Based on this analysis, we provide a set of recommendations for social media reuse under copyright law and social networks' terms of use. Moreover, we describe how these recommendations have been used to guide the development of the InVID Rights Management module, focusing on EU copyright law given the context of the project and the involved partners.
Applications

Frontmatter
Chapter 9. Applying Design Thinking Methodology: The InVID Verification Plugin
Denis Teyssou
Abstract: This chapter describes the methodology used to develop and release, in less than 18 months, a browser extension that has become one of the major tools to debunk disinformation and verify videos and images. It has attracted more than 12,000 users from media newsrooms, fact-checkers, the media literacy community, human rights defenders, and emergency response workers dealing with false rumors and content.
Chapter 10. Multimodal Analytics Dashboard for Story Detection and Visualization
Arno Scharl, Alexander Hubmann-Haidvogel, Max Göbel, Tobi Schäfer, Daniel Fischl, Lyndon Nixon
Abstract: The InVID Multimodal Analytics Dashboard is a visual content exploration and retrieval system to analyze user-generated video content from social media platforms including YouTube, Twitter, Facebook, Reddit, Vimeo, and Dailymotion. It uses automated knowledge extraction methods to analyze each of the collected postings and stores the extracted metadata for later analyses. The real-time synchronization mechanisms of the dashboard help to track information flows within the resulting information space. Cluster analysis is used to group related postings and detect evolving stories, which can then be analyzed along multiple semantic dimensions such as sentiment and geographic location. Data journalists can not only visualize the latest trends across communication channels, but also identify opinion leaders (persons or organizations) as well as the relations among these opinion leaders.
Chapter 11. Video Verification in the Newsroom
Rolf Fricke, Jan Thomsen
Abstract: This chapter describes the integration of a video verification process into the newsrooms of TV broadcasters or news agencies, which enables journalists to analyze and assess user-generated videos (UGVs) from platforms such as YouTube, Facebook, or Twitter. We consider the organizational integration concerning workflow, responsibilities, and preparations, as well as the inclusion of innovative verification tools and services into an existing IT environment. This includes the technical prerequisites required to connect the newsroom to video verification services in the cloud, with the combined employment of third-party Web services for retrieval, analysis, or geolocation. We describe the different features for verifying the source, time, place, content, and rights of a video that the InVID Video Verification Application, or Verification App for short, offers journalists; these can serve as a blueprint for realizing a video verification process in professional newsroom systems. In the outlook, we discuss further potential to improve the current verification process through additional services, such as speech-to-text, OCR, translation, or deep fake detection.
Concluding Remarks

Frontmatter
Chapter 12. Disinformation: The Force of Falsity
Denis Teyssou
Abstract: This final chapter borrows the concept of the force of falsity from the famous Italian semiotician and novelist Umberto Eco to describe how manipulated information remains visible and accessible despite efforts to debunk it. In particular, search engine indexes get confused by disinformation and too often fail to retrieve the authentic piece of content, the one that is neither manipulated nor decontextualized.
Backmatter
- Title
- Video Verification in the Fake News Era
- Edited by
- Vasileios Mezaris
- Assist. Prof. Lyndon Nixon
- Symeon Papadopoulos
- Denis Teyssou
- Copyright year
- 2019
- Electronic ISBN
- 978-3-030-26752-0
- Print ISBN
- 978-3-030-26751-3
- DOI
- https://doi.org/10.1007/978-3-030-26752-0