2012 | OriginalPaper | Chapter
Multi-sensored Vision for Autonomous Production of Personalized Video Summaries
Authors: Fan Chen, Damien Delannay, Christophe De Vleeschouwer
Published in: User Centric Media
Publisher: Springer Berlin Heidelberg
Democratic and personalized production of multimedia content is a challenge for content providers. In this paper, members of the FP7 APIDIS consortium explain how this challenge can be addressed by building on computer vision tools to automate the collection and distribution of audiovisual content. In a typical application scenario, a network of cameras covers the scene of interest, and distributed analysis and interpretation of the scene are exploited to decide what to show or not to show about the event, so as to edit a video from a valuable subset of the streams provided by the individual cameras. Generation of personalized summaries through automatic organization of stories is also considered. Finally, the proposed technology provides practical solutions to a wide range of applications, such as personalized access to local sport events through a web portal, cost-effective and fully automated production of content for small audiences, or automatic logging of annotations.
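The camera-selection step described in the abstract — deciding at each moment which view to show — can be sketched as a simple greedy policy. The snippet below is an illustrative assumption, not the APIDIS implementation: it supposes that scene analysis has already produced a per-camera "interest" score for each time step, and it picks the highest-scoring camera while penalizing switches so the edited video stays smooth. The function name, score format, and `switch_penalty` value are all hypothetical.

```python
def select_cameras(scores, switch_penalty=0.2):
    """Greedy camera selection.

    scores[t][c] is a hypothetical interest score (0..1) for camera c
    at time step t, assumed to come from upstream scene analysis.
    Returns the index of the camera selected at each time step.
    """
    selected = []
    current = None
    for frame_scores in scores:
        best_cam, best_val = None, float("-inf")
        for cam, val in enumerate(frame_scores):
            # Discourage cutting away from the current camera unless
            # another view is clearly more interesting.
            if current is not None and cam != current:
                val -= switch_penalty
            if val > best_val:
                best_cam, best_val = cam, val
        current = best_cam
        selected.append(best_cam)
    return selected

# Example: camera 1 only takes over once its score clearly dominates.
plan = select_cameras([[0.9, 0.5], [0.6, 0.7], [0.4, 0.8]])
```

In this toy run, camera 0 is kept at the second step even though camera 1 scores slightly higher, because the switch penalty outweighs the difference; a real production system would combine many such cues (completeness, smoothness, closeness) as described in the paper.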