A Machine Learning Method with Threshold Based Parallel Feature Fusion and Feature Selection for Automated Gait Recognition

Muhammad Sharif, Muhammad Attique, Muhammad Zeeshan Tahir, Mussarat Yasmin, Tanzila Saba, Urcun John Tanik
Copyright: © 2020 | Volume: 32 | Issue: 2 | Pages: 26
ISSN: 1546-2234 | EISSN: 1546-5012 | EISBN13: 9781522583691 | DOI: 10.4018/JOEUC.2020040104
Cite Article

MLA

Sharif, Muhammad, et al. "A Machine Learning Method with Threshold Based Parallel Feature Fusion and Feature Selection for Automated Gait Recognition." Journal of Organizational and End User Computing (JOEUC), vol. 32, no. 2, 2020, pp. 67-92. http://doi.org/10.4018/JOEUC.2020040104

APA

Sharif, M., Attique, M., Tahir, M. Z., Yasmin, M., Saba, T., & Tanik, U. J. (2020). A Machine Learning Method with Threshold Based Parallel Feature Fusion and Feature Selection for Automated Gait Recognition. Journal of Organizational and End User Computing (JOEUC), 32(2), 67-92. http://doi.org/10.4018/JOEUC.2020040104

Chicago

Sharif, Muhammad, et al. "A Machine Learning Method with Threshold Based Parallel Feature Fusion and Feature Selection for Automated Gait Recognition." Journal of Organizational and End User Computing (JOEUC) 32, no. 2 (2020): 67-92. http://doi.org/10.4018/JOEUC.2020040104

Abstract

Gait is a vital biometric trait for human identification and is increasingly analyzed with machine learning. In this article, a new method for human gait recognition is presented, based on accurate segmentation and multi-level feature extraction. Four major steps are performed: (a) enhancement of the motion region in each frame by applying a linear transformation in the HSI color space; (b) Region of Interest (ROI) detection based on the parallel implementation of optical flow and background subtraction; (c) extraction and parallel fusion of shape and geometric features; and (d) recognition with a multi-class support vector machine (MSVM). The presented approach reduces the error rate and increases the correct classification rate (CCR). Extensive experiments are conducted on three datasets, CASIA-A, CASIA-B, and CASIA-C, which present different variations in clothing and carrying conditions. The proposed method achieves maximum recognition rates of 98.6% on CASIA-A, 93.5% on CASIA-B, and 97.3% on CASIA-C.
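The following is a minimal, illustrative sketch of the four-step pipeline outlined in the abstract, assuming Python with OpenCV and scikit-learn. It is not the authors' implementation: HSV is used in place of HSI (OpenCV does not expose HSI directly), the thresholds and function names are hypothetical, and simple feature concatenation stands in for the paper's threshold-based parallel fusion.

import cv2
import numpy as np
from sklearn.svm import SVC

def enhance_motion_region(frame, alpha=1.3, beta=10):
    # Step (a): linear transformation (alpha * I + beta) on the intensity channel.
    # OpenCV provides HSV rather than HSI; the V channel stands in for intensity here.
    h, s, v = cv2.split(cv2.cvtColor(frame, cv2.COLOR_BGR2HSV))
    v = cv2.convertScaleAbs(v, alpha=alpha, beta=beta)
    return cv2.cvtColor(cv2.merge((h, s, v)), cv2.COLOR_HSV2BGR)

def detect_roi(prev_gray, gray, bg_subtractor, flow_thresh=30):
    # Step (b): run dense optical flow and background subtraction in parallel
    # and keep the pixels where both motion cues agree.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)
    mag = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, flow_mask = cv2.threshold(mag, flow_thresh, 255, cv2.THRESH_BINARY)
    fg_mask = bg_subtractor.apply(gray)
    return cv2.bitwise_and(flow_mask, fg_mask)

def extract_fused_features(mask):
    # Step (c): shape features (Hu moments) and geometric features (aspect ratio,
    # area, perimeter) of the largest silhouette contour, fused here by simple
    # concatenation as a stand-in for threshold-based parallel fusion.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    c = max(contours, key=cv2.contourArea)
    hu = cv2.HuMoments(cv2.moments(c)).flatten()
    x, y, w, h = cv2.boundingRect(c)
    geo = np.array([w / max(h, 1), cv2.contourArea(c), cv2.arcLength(c, True)])
    return np.concatenate([hu, geo])

def train_msvm(X, y):
    # Step (d): multi-class SVM (one-vs-one) over the fused feature vectors,
    # with labels taken from the gallery walking sequences.
    return SVC(kernel='rbf', decision_function_shape='ovo').fit(X, y)

A driver loop would create a background model (e.g., cv2.createBackgroundSubtractorMOG2()), read consecutive frames of a CASIA sequence, call these four functions in order, and collect one fused vector per frame (or per gait cycle) for training and testing.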