DMMs-Based Multiple Features Fusion for Human Action Recognition

Mohammad Farhad Bulbul, Yunsheng Jiang, Jinwen Ma
Copyright: © 2015 | Volume: 6 | Issue: 4 | Pages: 17
ISSN: 1947-8534 | EISSN: 1947-8542 | EISBN13: 9781466677166 | DOI: 10.4018/IJMDEM.2015100102
Cite Article

MLA

Bulbul, Mohammad Farhad, et al. "DMMs-Based Multiple Features Fusion for Human Action Recognition." IJMDEM, vol. 6, no. 4, 2015, pp. 23-39. http://doi.org/10.4018/IJMDEM.2015100102

APA

Bulbul, M. F., Jiang, Y., & Ma, J. (2015). DMMs-Based Multiple Features Fusion for Human Action Recognition. International Journal of Multimedia Data Engineering and Management (IJMDEM), 6(4), 23-39. http://doi.org/10.4018/IJMDEM.2015100102

Chicago

Bulbul, Mohammad Farhad, Yunsheng Jiang, and Jinwen Ma. "DMMs-Based Multiple Features Fusion for Human Action Recognition." International Journal of Multimedia Data Engineering and Management (IJMDEM) 6, no. 4 (2015): 23-39. http://doi.org/10.4018/IJMDEM.2015100102

Abstract

Emerging cost-effective depth sensors have significantly facilitated the action recognition task. In this paper, the authors address the action recognition problem using depth video sequences by combining three discriminative features. More specifically, the authors generate three Depth Motion Maps (DMMs) over the entire video sequence, corresponding to the front, side, and top projection views. Contourlet-based Histogram of Oriented Gradients (CT-HOG), Local Binary Patterns (LBP), and Edge Oriented Histograms (EOH) features are then computed from the DMMs. To merge these features, the authors adopt decision-level fusion, in which a soft decision-fusion rule, the Logarithmic Opinion Pool (LOGP), combines the classification outcomes of multiple classifiers, each trained on an individual feature set. Experimental results on two datasets show that the fusion scheme achieves superior action recognition performance compared with using any single feature alone.
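The abstract describes two computational steps that a short sketch can make concrete: accumulating a per-view Depth Motion Map from a projected depth sequence, and fusing the class posteriors of several feature-specific classifiers with a Logarithmic Opinion Pool. The NumPy sketch below is a minimal illustration under stated assumptions; the function names, the optional motion threshold, and the uniform classifier weights are illustrative choices and not necessarily the paper's exact formulation.

```python
import numpy as np

def depth_motion_map(projected_frames, threshold=0.0):
    """Accumulate absolute frame-to-frame motion for one projection view
    (front, side, or top). With threshold=0 this is a plain sum of
    absolute differences; the threshold is an optional variant."""
    frames = np.asarray(projected_frames, dtype=np.float32)   # shape (T, H, W)
    diffs = np.abs(np.diff(frames, axis=0))                   # |map_{i+1} - map_i|
    return np.where(diffs > threshold, diffs, 0.0).sum(axis=0)

def logp_fusion(posteriors, weights=None):
    """Logarithmic Opinion Pool: fuse per-classifier class posteriors by a
    weighted geometric mean, then renormalize. Argmax gives the fused label."""
    P = np.asarray(posteriors, dtype=np.float64)               # (n_classifiers, n_classes)
    w = np.full(len(P), 1.0 / len(P)) if weights is None else np.asarray(weights)
    log_pool = (w[:, None] * np.log(P + 1e-12)).sum(axis=0)
    fused = np.exp(log_pool - log_pool.max())                  # stabilized exponentiation
    return fused / fused.sum()

# Example: three classifiers (e.g., fed by CT-HOG, LBP, and EOH features)
# each output posteriors over four hypothetical action classes.
posteriors = [[0.6, 0.2, 0.1, 0.1],
              [0.5, 0.3, 0.1, 0.1],
              [0.3, 0.4, 0.2, 0.1]]
print(np.argmax(logp_fusion(posteriors)))   # index of the fused action label
```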
