2008 | Original Paper | Book Chapter
A Novel Video Classification Method Based on Hybrid Generative/Discriminative Models
Authors: Zhi Zeng, Wei Liang, Heping Li, Shuwu Zhang
Published in: Structural, Syntactic, and Statistical Pattern Recognition
Publisher: Springer Berlin Heidelberg
We consider the problem of automatically classifying videos into predefined categories based on an analysis of their audio content. Specifically, given a set of labeled videos (news, sitcoms, sports, etc.), our objective is to classify a new video into one of these categories. To solve this problem, we propose a novel audio-feature-based video classification method that combines an unsupervised generative model, probabilistic Latent Semantic Analysis (pLSA), with a multi-class discriminative classifier. Since general audio signals usually exhibit a complicated distribution in the feature space, the k-means clustering method is first used to group temporal signal segments with similar low-level features into natural clusters, which are adopted as "audio words". The audio stream of each video is then decomposed into a bag of "audio words". To classify these bags of "audio words", latent "topics" are discovered by pLSA, and a multi-class discriminative classifier is subsequently trained on the "topic" distribution vector of each video. Encouraging classification results have been achieved in our experiments.
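The pipeline described above (k-means audio words, a bag-of-words histogram per video, pLSA topic discovery, then a discriminative classifier on the topic vectors) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the synthetic "frame features", the vocabulary size V=32, the topic count K=5, and the use of logistic regression as the multi-class discriminative classifier are all assumed choices for the sketch.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

def plsa(N, K, n_iter=30, seed=0):
    """Fit pLSA to a document-word count matrix N (docs x words) via EM.

    Returns P(z|d) of shape (D, K) and P(w|z) of shape (K, W).
    """
    rng = np.random.default_rng(seed)
    D, W = N.shape
    Pz_d = rng.random((D, K)); Pz_d /= Pz_d.sum(1, keepdims=True)
    Pw_z = rng.random((K, W)); Pw_z /= Pw_z.sum(1, keepdims=True)
    for _ in range(n_iter):
        # E-step: responsibilities P(z|d,w), shape (D, K, W)
        R = Pz_d[:, :, None] * Pw_z[None, :, :]
        R /= R.sum(1, keepdims=True) + 1e-12
        # M-step: re-estimate P(w|z) and P(z|d) from weighted counts
        NW = N[:, None, :] * R
        Pw_z = NW.sum(0); Pw_z /= Pw_z.sum(1, keepdims=True) + 1e-12
        Pz_d = NW.sum(2); Pz_d /= Pz_d.sum(1, keepdims=True) + 1e-12
    return Pz_d, Pw_z

rng = np.random.default_rng(0)

# Synthetic stand-in for low-level audio frame features (e.g. MFCCs):
# 3 categories x 20 videos, each video = 100 frames of 12-dim features,
# with category-dependent means so the pipeline has signal to exploit.
n_classes, vids_per_class, frames, dim = 3, 20, 100, 12
means = rng.normal(0.0, 3.0, (n_classes, dim))
videos, labels = [], []
for c in range(n_classes):
    for _ in range(vids_per_class):
        videos.append(means[c] + rng.normal(size=(frames, dim)))
        labels.append(c)
labels = np.array(labels)

# Step 1: k-means over all frames yields the "audio word" codebook.
V = 32  # vocabulary size (an assumed value)
km = KMeans(n_clusters=V, n_init=4, random_state=0).fit(np.vstack(videos))

# Step 2: each video becomes a bag of audio words (a word-count histogram).
N = np.zeros((len(videos), V))
for i, v in enumerate(videos):
    np.add.at(N[i], km.predict(v), 1)

# Step 3: pLSA discovers latent topics; P(z|d) is the per-video topic vector.
Pz_d, Pw_z = plsa(N, K=5)

# Step 4: a multi-class discriminative classifier on the topic vectors
# (logistic regression here as an illustrative stand-in).
clf = LogisticRegression(max_iter=1000).fit(Pz_d, labels)
acc = clf.score(Pz_d, labels)
```

Because the topic vectors P(z|d) are low-dimensional and probabilistic, the downstream classifier operates on a much more compact representation than the raw word histograms, which is the main appeal of the hybrid generative/discriminative combination.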