2009 | Original Paper | Book Chapter
Human Activity Recognition Using the 4D Spatiotemporal Shape Context Descriptor
Authors: Natasha Kholgade, Andreas Savakis
Published in: Advances in Visual Computing
Publisher: Springer Berlin Heidelberg
In this paper, a four-dimensional spatiotemporal shape context descriptor is introduced and used for human activity recognition in video. The spatiotemporal shape context is computed on silhouette points by binning the magnitude and direction of motion at every point with respect to a given vertex, in addition to binning the radial displacement and angular offset used by the standard 2D shape context. Human activity recognition at each video frame is performed by matching the spatiotemporal shape context against a library of known activities via k-nearest-neighbor classification. Activity recognition for a video sequence is then based on majority voting over the per-frame results. Experiments on the Weizmann set of ten activities indicate that the proposed descriptor achieves better activity recognition than the original 2D shape context, with overall recognition rates of 90% for individual frames and 97.9% for video sequences.
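The construction described above can be sketched as follows. For each silhouette point taken as a reference vertex, the displacements to all other points are binned by radial distance and angular offset (as in the 2D shape context), and each point's motion vector is additionally binned by magnitude and direction, yielding a flattened 4D histogram per vertex. This is a minimal illustrative sketch, not the authors' implementation: the bin counts, the log-radial binning, and the normalization by maximum distance/magnitude are assumptions chosen for clarity.

```python
import numpy as np

def spatiotemporal_shape_context(points, motion, n_r=5, n_theta=12, n_m=4, n_phi=8):
    """Sketch of a 4D spatiotemporal shape context (hypothetical bin counts).

    points : (N, 2) silhouette point coordinates for one frame
    motion : (N, 2) per-point motion vectors (e.g. from optical flow)
    Returns an (N, n_r*n_theta*n_m*n_phi) array: one flattened 4D
    histogram per reference vertex.
    """
    n = len(points)
    descriptors = np.zeros((n, n_r * n_theta * n_m * n_phi))
    # Motion magnitude and direction are properties of each point,
    # shared across all reference vertices.
    mag = np.linalg.norm(motion, axis=1)
    phi = np.arctan2(motion[:, 1], motion[:, 0])
    for i, p in enumerate(points):
        d = points - p                      # displacements to all other points
        r = np.linalg.norm(d, axis=1)
        theta = np.arctan2(d[:, 1], d[:, 0])
        mask = r > 0                        # exclude the reference vertex itself
        # Log-spaced radial bins, as in the standard 2D shape context.
        r_bin = np.clip((np.log1p(r[mask]) / np.log1p(r.max() + 1e-9)
                         * n_r).astype(int), 0, n_r - 1)
        t_bin = ((theta[mask] + np.pi) / (2 * np.pi) * n_theta).astype(int) % n_theta
        m_bin = np.clip((mag[mask] / (mag.max() + 1e-9) * n_m).astype(int),
                        0, n_m - 1)
        p_bin = ((phi[mask] + np.pi) / (2 * np.pi) * n_phi).astype(int) % n_phi
        # Flatten the 4D bin index (r, theta, magnitude, direction).
        idx = ((r_bin * n_theta + t_bin) * n_m + m_bin) * n_phi + p_bin
        np.add.at(descriptors[i], idx, 1)
    return descriptors
```

Per-frame classification would then compare such descriptors to a labeled library with k-nearest neighbors, and a sequence label would be obtained by a majority vote over its frames.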