
About this book

This book focuses on novel implementations of sensor technologies, artificial intelligence, machine learning, computer vision and statistics for automated, human fall recognition systems and related topics using data fusion.
It includes theory and coding implementations to help readers quickly grasp the concepts and to highlight the applicability of this technology. For convenience, it is divided into two parts. The first part reviews the state of the art in human fall and activity recognition systems, while the second part describes a public dataset especially curated for multimodal fall detection. It also gathers contributions demonstrating the use of this dataset and showing examples.
This book is useful for anyone interested in fall detection systems, as well as for those interested in solving challenging signal recognition, vision, and machine learning problems. Potential applications include health care, robotics, sports, and human–machine interaction, among others.



Challenges and Solutions on Human Fall Detection and Classification


Open Source Implementation for Fall Classification and Fall Detection Systems

Distributed social coding has created many benefits for software developers. Open source code and publicly available datasets can leverage the development of fall detection and fall classification systems, which can shorten the time in which a person receives help after a fall occurs. Many simulated-fall datasets consider different types of falls; however, very few fall detection systems actually identify and discriminate between each category of fall. In this chapter, we present an open source implementation of fall classification and detection systems using the public UP-Fall Detection dataset. The implementation comprises a set of open codes stored in a GitHub repository for full access, and provides a tutorial for using the codes together with a concise example of their application.
Hiram Ponce, Lourdes Martínez-Villaseñor, José Núñez-Martínez, Ernesto Moya-Albor, Jorge Brieva

Detecting Human Activities Based on a Multimodal Sensor Data Set Using a Bidirectional Long Short-Term Memory Model: A Case Study

Human falls are one of the leading causes of fatal unintentional injuries worldwide. Falls impose a direct financial cost on health systems and, indirectly, on society's productivity. Unsurprisingly, human fall detection and prevention is a major focus of health research. In this chapter, we present and evaluate several bidirectional long short-term memory (Bi-LSTM) models using a data set provided by the Challenge UP competition. The main goal of this study is to detect 12 human daily activities (six daily activities, five falls, and one post-fall activity) from multimodal data sources: wearable sensors, ambient sensors, and vision devices. Our proposed Bi-LSTM model leverages data from accelerometer and gyroscope sensors located at the subject's ankle, right pocket, belt, and neck. We use a grid search to evaluate variations of the Bi-LSTM model and identify the configuration with the best results, which achieved a precision of 43.30% and an F1-score of 38.50%.
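A common preprocessing step for feeding wearable-sensor streams to a (Bi-)LSTM, as in the pipeline this abstract describes, is segmenting the multichannel signal into fixed-size, overlapping windows. The sketch below illustrates that step only; the window length, overlap, and sampling rate are illustrative assumptions, not values from the chapter.

```python
import numpy as np

def segment_windows(signal, window_size, step):
    """Split a (timesteps, channels) sensor stream into fixed-size
    overlapping windows, the usual input shape for a (Bi-)LSTM."""
    windows = []
    for start in range(0, signal.shape[0] - window_size + 1, step):
        windows.append(signal[start:start + window_size])
    return np.stack(windows)  # (n_windows, window_size, channels)

# Example: 4 body-worn sensors x 6 channels (3-axis accelerometer + gyroscope)
stream = np.random.randn(1000, 24)                     # ~10 s at an assumed 100 Hz
X = segment_windows(stream, window_size=100, step=50)  # 1 s windows, 50% overlap
print(X.shape)  # (19, 100, 24)
```

Each window in `X` would then be one sequence sample for the recurrent model.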
Silvano Ramos de Assis Neto, Guto Leoni Santos, Elisson da Silva Rocha, Malika Bendechache, Pierangelo Rosati, Theo Lynn, Patricia Takako Endo

Intelligent Real-Time Multimodal Fall Detection in Fog Infrastructure Using Ensemble Learning

As data volumes and computing power increase with the surge in sensors and IoT devices, research is moving toward helping the elderly as part of the smart home infrastructure. People over 60 years of age are at high risk of adverse effects when a fall occurs. There has been extensive research on automatic fall detection using sensors, video surveillance, wearable devices, and similar means. Detection and analysis must happen in near real time to handle a fall efficiently, and decision making based on data from multiple sources tends to be more effective than decisions from a single source. Hence, multimodal fall detection, which exploits data from multiple sources for optimal detection, has become an active research topic. Our proposed methodology supports fall detection and the associated decision making in near real time by reducing processing latency. Low-latency decision making is realized through intermediate mist and fog layers that filter data and process it immediately. Video data is analyzed, and falls detected, at the edge. Multimodal processing increases prediction accuracy, so data from wearable sensors is considered alongside the video data. Detection uses ensemble learning, while image processing uses compressed deep neural networks. The blend of the fog infrastructure and the proposed algorithm improves accuracy and reduces detection and decision-making time.
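The decision-combination idea in this abstract, fusing per-modality detectors into one verdict via ensemble learning, can be sketched minimally as a majority vote. The detector names and labels below are hypothetical placeholders, not the chapter's actual components.

```python
from collections import Counter

def ensemble_vote(predictions):
    """Majority vote over per-modality fall/no-fall decisions.
    `predictions` maps a detector name to its label for one time window."""
    votes = Counter(predictions.values())
    return votes.most_common(1)[0][0]

# Hypothetical per-window outputs from an edge video model and wearable models
window_preds = {"camera_edge": "fall", "accelerometer": "fall", "gyroscope": "adl"}
print(ensemble_vote(window_preds))  # fall
```

Real ensembles would typically weight detectors by reliability or combine class probabilities rather than hard labels; the majority vote is just the simplest instance of the idea.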
V. Divya, R. Leena Sri

Wearable Sensors Data-Fusion and Machine-Learning Method for Fall Detection and Activity Recognition

Human falls are a common source of injury among the elderly, not least because an injured elderly person often cannot call for help. The literature addresses this with various fall-detection systems, the most common of which use wearable sensors. This paper describes the winning method developed for the Challenge Up: Multimodal Fall Detection competition. It is a multi-sensor, data-fusion, machine-learning method that recognizes human activities and falls using five wearable inertial sensors: accelerometers and gyroscopes. The method was evaluated on a dataset collected from 12 subjects, 3 of whom were used as test data for the challenge. To adapt the method optimally to the 3 test subjects, we performed an unsupervised similarity search that finds the three training users most similar to the three users in the test data, and tuned the method and its parameters on those most-similar users. Internal evaluation on the 9 training users showed that this optimized configuration achieves 98% accuracy. In the final evaluation for the challenge, our method achieved the highest results (82.5% F1-score and 98% accuracy) and won the competition.
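The unsupervised similarity search described here can be illustrated by summarizing each user as a feature vector (for example, per-channel means and standard deviations) and ranking training users by Euclidean distance to each test user. The feature vectors and user IDs below are made up for illustration; the chapter's actual features may differ.

```python
import numpy as np

def most_similar_users(train_feats, test_feats, k=3):
    """For each test user, return the k training users ranked by Euclidean
    distance between per-user summary feature vectors."""
    ranking = {}
    for test_id, tf in test_feats.items():
        dists = {uid: float(np.linalg.norm(tf - f)) for uid, f in train_feats.items()}
        ranking[test_id] = sorted(dists, key=dists.get)[:k]
    return ranking

# Hypothetical 2-D summary features per user
train = {"u1": np.array([0.1, 1.0]), "u2": np.array([0.9, 0.2]), "u3": np.array([0.5, 0.5])}
test = {"t1": np.array([0.8, 0.3])}
print(most_similar_users(train, test, k=2)["t1"])  # ['u2', 'u3']
```

Tuning hyperparameters only on the training users nearest to the test users is what lets the method specialize without ever touching test labels.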
Hristijan Gjoreski, Simon Stankoski, Ivana Kiprijanovska, Anastasija Nikolovska, Natasha Mladenovska, Marija Trajanoska, Bojana Velichkovska, Martin Gjoreski, Mitja Luštrek, Matjaž Gams

Application of Convolutional Neural Networks for Fall Detection Using Multiple Cameras

One of the most important current research issues in artificial intelligence and computer vision is the recognition of human falls. With the exponential increase in the use of cameras, vision-based approaches to fall detection and classification have become common. Meanwhile, deep learning algorithms have transformed the way vision-based problems are approached: as a deep learning technique, the convolutional neural network (CNN) offers reliable and robust solutions to detection and classification problems. Focusing on a purely vision-based approach, this work uses images from a new public multimodal fall detection data set (the UP-Fall Detection dataset) published by our research team. In this chapter we present a fall detection system that analyzes information from multiple cameras with a 2D CNN. The method analyzes images in fixed time windows, extracting features with an optical-flow method that captures the relative motion between two consecutive images. In our experiments on the UP-Fall Detection dataset, the proposed multi-vision-based approach detected human falls with 95.64% accuracy using a simple CNN architecture, compared with other state-of-the-art methods.
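The core preprocessing idea, turning a fixed window of consecutive frames into motion features for a 2D CNN, can be sketched with plain frame differencing. Note this is a deliberately crude stand-in: real optical flow (e.g., Farnebäck or Lucas–Kanade, as OpenCV provides) estimates per-pixel displacement vectors, while the code below only measures per-pixel intensity change.

```python
import numpy as np

def motion_magnitude(prev_frame, next_frame):
    """Per-pixel absolute temporal difference between two grayscale frames,
    a simplified stand-in for an optical-flow magnitude map."""
    return np.abs(next_frame.astype(float) - prev_frame.astype(float))

def window_motion_stack(frames):
    """Stack motion maps for a fixed window of frames, yielding a
    (window-1, H, W) tensor usable as CNN input channels."""
    return np.stack([motion_magnitude(a, b) for a, b in zip(frames, frames[1:])])

# Toy 3-frame window: motion between frames 0->1, none between 1->2
frames = [np.zeros((4, 4)), np.ones((4, 4)), np.ones((4, 4))]
stack = window_motion_stack(frames)
print(stack.shape)  # (2, 4, 4)
```

Stacking motion maps from several cameras along the channel axis is one natural way to realize the multi-camera fusion the abstract describes.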
Ricardo Espinosa, Hiram Ponce, Sebastián Gutiérrez, Lourdes Martínez-Villaseñor, Jorge Brieva, Ernesto Moya-Albor

Approaching Fall Classification Using the UP-Fall Detection Dataset: Analysis and Results from an International Competition

This chapter presents the results of the Challenge UP – Multimodal Fall Detection competition, held during the 2019 International Joint Conference on Neural Networks (IJCNN 2019). The competition centers on the fall classification problem: classifying eleven human activities (five types of falls and six simple daily activities) using the joint information from different wearables, ambient sensors, and video recordings stored in a given dataset. After five months of competition, three winners and one honorable mention were awarded during the conference. The first-place machine learning model scored 82.47% in F1-score, outperforming the baseline of 70.44%. After analyzing the participants' implementations, we summarize the insights and trends of fall classification.
Hiram Ponce, Lourdes Martínez-Villaseñor

Reviews and Trends on Multimodal Healthcare


Classification of Daily Life Activities for Human Fall Detection: A Systematic Review of the Techniques and Approaches

Human fall detection systems are an important part of assistive technology, since many people in today's aging population require assistance with daily living. Falls are the main obstacle to elderly people living independently and a major health concern in an aging population. Several studies have used a variety of sensors to develop systems that accurately distinguish unintentional human falls from other activities of daily life. The three basic approaches used to develop human fall detection systems rely on wearable devices, ambient devices, or non-invasive vision-based devices using live cameras. This study reviews the techniques and approaches employed to devise systems that detect unintentional falls, and classifies them by the approaches employed and the sensors used.
Yoosuf Nizam, M. Mahadi Abdul Jamil

An Interpretable Machine Learning Model for Human Fall Detection Systems Using Hybrid Intelligent Models

This chapter presents an assessment of falls and everyday activities based on a sensor dataset collected in fall simulations. The evaluation employs intelligent techniques and models based on feature-selection techniques and fuzzy neural networks. The work can thus be seen as an auxiliary approach to knowledge extraction that supports actions, prevention, and training for professionals working in areas related to the health impacts on people who may suffer difficulties or injuries from a fall. The results were compared with the state of the art, and the version of the hybrid model that acts on the dataset dimensions most relevant to identifying falls surpassed the other models submitted to the test. The models successfully extracted varied information from a highly sophisticated, high-dimensional dataset, helping professionals from various areas expand their investigations in the field of human falls.
Paulo Vitor C. Souza, Augusto J. Guimaraes, Vanessa S. Araujo, Lucas O. Batista, Thiago S. Rezende

Multi-sensor System, Gamification, and Artificial Intelligence for Benefit Elderly People

In the coming years, the elderly population will outnumber the child population; indeed, since 2018 people over 65 years old have outnumbered children under 5. Growing older involves biological, physical, social, and psychological changes that may lead to social isolation, the loss of loved ones, or even a sense of lost value, purpose, or confidence. Moreover, as people age, they usually spend more time at home. Social inclusion through mobile devices and the smart home can help counter that loss of purpose, confidence, or value. Smart homes collect and analyze data from household appliances and devices to promote independence, prevent emergencies, and increase elderly people's quality of life. In that regard, a multi-sensor system allows the expert to learn more about elderly people's needs and to propose actions that improve their quality of life, since sensors can read and analyze their facial expressions, voice, and other cues. Gamification may motivate elderly people to socialize with their peers through social interaction and activities such as exercising. Thus, this chapter proposes an adaptive neural-network fuzzy inference system that evaluates camera and voice data from the multi-sensor system to propose an interactive, tailored human–machine interface for elderly people at home.
Juana Isabel Méndez, Omar Mata, Pedro Ponce, Alan Meier, Therese Peffer, Arturo Molina

A Novel Approach for Human Fall Detection and Fall Risk Assessment

Several studies are concerned with providing quality health care services to all people. Among them, human fall detection systems play an important role, because falls are the main obstacle to elderly people living independently and a major health concern in an aging population. The three basic approaches used to develop fall detection systems rely on wearable, ambient, or non-invasive devices. Many such systems are rejected by users because of high false-alarm rates and the difficulty of carrying devices during daily life activities. This study therefore proposes a non-invasive fall detection system based on the height, velocity, statistical analysis, fall risk factors, and position of the subject derived from depth information. The proposed algorithm also uses the user's fall risk factors during the detection process to classify the subject's chance of falling as high or low fall risk, making the system adaptable to the user's physical condition. The system additionally performs a fall injury assessment after a fall event to call for appropriate assistance, and includes a fall risk assessment tool that can work independently to predict falls or to classify people by fall risk level. In experiments, the proposed system achieved an average accuracy of 98.3%, with a sensitivity of 100% and a specificity of 97.7%.
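The height-and-velocity idea in this abstract, deciding a fall has occurred when the subject's depth-derived height drops quickly below a threshold, can be sketched as a simple rule. Both thresholds and the sampling interval below are illustrative assumptions, not the chapter's calibrated values, and the real system additionally uses statistical analysis, position, and fall risk factors.

```python
def detect_fall(heights, dt, height_thresh=0.5, velocity_thresh=1.2):
    """Flag a fall when the subject's tracked height (metres, e.g. from a
    depth camera) drops below `height_thresh` while the downward velocity
    exceeds `velocity_thresh` m/s. Thresholds are illustrative only."""
    for i in range(1, len(heights)):
        velocity = (heights[i - 1] - heights[i]) / dt  # downward positive
        if heights[i] < height_thresh and velocity > velocity_thresh:
            return True
    return False

standing = [1.7, 1.7, 1.69, 1.7]      # normal standing at 10 Hz sampling
falling = [1.7, 1.2, 0.6, 0.3]        # rapid descent toward the floor
print(detect_fall(standing, dt=0.1))  # False
print(detect_fall(falling, dt=0.1))   # True
```

Combining the velocity condition with the height condition is what keeps deliberate actions such as sitting or lying down, which lower height slowly, from triggering false alarms.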
Yoosuf Nizam, M. Mahadi Abdul Jamil