2015 | Original Paper | Book Chapter
Spatial-Temporal Feature Fusion for Human Fall Detection
Authors: Xin Ma, Haibo Wang, Bingxia Xue, Yibin Li
Published in: Computer Vision
Publisher: Springer Berlin Heidelberg
A sudden fall to the ground can seriously injure elderly people. This paper presents a vision-based fall detection approach using a low-cost depth camera. The approach is based on a novel combination of three feature types: curvature scale space (CSS), morphological, and temporal features. CSS and morphological features capture different properties of the human silhouette during a fall. The two collected feature vectors are clustered to generate occurrence histograms as fall representations. Meanwhile, the trajectory of a skeleton point, which depicts the temporal property of the fall action, serves as a complementary representation. For each individual feature, an ELM (extreme learning machine) classifier is trained separately for fall prediction. Finally, the prediction scores are fused to decide whether a fall has occurred. To evaluate the approach, we built a depth dataset by capturing 6 daily actions (falling, bending, sitting, squatting, walking, and lying) from 20 subjects. Extensive experiments show that the proposed approach achieves an average fall detection accuracy of 85.89%, clearly outperforming each feature type used individually.
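The per-feature classification and late score fusion described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it uses a textbook ELM (random hidden layer plus least-squares readout) and simple score averaging; the hidden-layer size, activation, fusion rule, and threshold are all assumptions for the sketch.

```python
import numpy as np

def train_elm(X, y, n_hidden=50, seed=0):
    """Minimal extreme learning machine: a randomly weighted tanh hidden
    layer with a least-squares readout. A sketch, not the paper's exact model."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (fixed)
    b = rng.normal(size=n_hidden)                 # random hidden biases (fixed)
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # closed-form readout weights
    return W, b, beta

def elm_score(model, X):
    """Continuous prediction score for each sample."""
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

def fused_prediction(models, feature_sets, threshold=0.5):
    """Late fusion: average the per-feature classifier scores, then threshold.
    One (model, feature matrix) pair per feature type (e.g. CSS, morphological,
    temporal)."""
    scores = np.mean(
        [elm_score(m, X) for m, X in zip(models, feature_sets)], axis=0
    )
    return scores > threshold
```

A classifier is trained per feature type on its own representation; at test time only the scalar scores are combined, so the feature types never need to share a common vector space.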