
04.10.2022 | Original Article

Deep-learning-based human activity recognition for Alzheimer’s patients’ daily life activities assistance

Authors: Ahmed Snoun, Tahani Bouchrika, Olfa Jemai

Published in: Neural Computing and Applications | Issue 2/2023


Abstract

Alzheimer’s disease is one of the most well-known illnesses among the elderly. It is a neurodegenerative and irreversible brain disorder that slowly destroys memory, thinking ability, and ultimately the ability to perform even basic daily tasks. People suffering from this disorder have difficulty remembering events, recognizing objects and faces, remembering the meaning of words, and exercising judgment. As a result, their cognitive abilities are impaired and they are unable to perform activities of daily living independently. Patients therefore need constant support to carry out their daily activities. In this study, we propose a new assistance system that helps patients with Alzheimer’s disease carry out their daily tasks independently. The proposed assistance system is composed of two parts. The first is a human activity recognition (HAR) module that monitors the patient’s behaviour. Here, we propose two HAR systems: the first is based on 2D skeleton data and a convolutional neural network, and the second is based on 3D skeleton data and transformers. The second part is a support module that recognizes the patient’s behavioural abnormalities and issues appropriate warnings. Here, we also propose two methods: the first is based on a simple conditional structure, and the second is based on a reinforcement learning technique. Combining these components yields four different assistance systems for Alzheimer’s patients. Finally, a comparative study of the four systems was carried out in terms of performance and time complexity using the DemCare dataset.
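The abstract outlines a pipeline in which a HAR module feeds a support module. As a rough illustration of the simpler, condition-based variant, the sketch below shows how a rule-based support module might react to a stream of recognized activity labels. Everything in it is an assumption for illustration only: the class name RuleBasedAssistant, the activity labels, and the stalling heuristic are hypothetical and are not taken from the paper.

```python
# Illustrative sketch only: the class, labels, and heuristic below are
# assumptions, not the authors' implementation. It shows how a conditional
# support module could sit on top of a HAR module that emits one discrete
# activity label per time step.

from collections import deque
from typing import Deque, List, Optional


class RuleBasedAssistant:
    """Compares recognized activities against an expected task sequence
    and issues a prompt when the patient deviates or stalls."""

    def __init__(self, expected_steps: List[str], stall_limit: int = 30):
        self.expected_steps = expected_steps      # e.g. steps of a kitchen task
        self.current_index = 0                    # next step the patient should do
        self.stall_limit = stall_limit            # identical predictions before prompting
        self.recent: Deque[str] = deque(maxlen=stall_limit)

    def update(self, recognized_activity: str) -> Optional[str]:
        """Feed one HAR prediction; return a warning/prompt string or None."""
        self.recent.append(recognized_activity)

        if self.current_index >= len(self.expected_steps):
            return None  # task already completed

        expected = self.expected_steps[self.current_index]
        if recognized_activity == expected:
            self.current_index += 1
            return None

        # The patient performs a later step before the expected one.
        if recognized_activity in self.expected_steps[self.current_index + 1:]:
            return f"Reminder: please '{expected}' before '{recognized_activity}'."

        # The patient appears stuck on the same off-task activity for too long.
        if len(self.recent) == self.stall_limit and all(
            a == recognized_activity for a in self.recent
        ):
            return f"Prompt: the next step is '{expected}'."

        return None


# Example usage with hypothetical activity labels from a HAR module.
assistant = RuleBasedAssistant(["fill kettle", "boil water", "pour water", "add tea bag"])
for prediction in ["fill kettle", "add tea bag", "boil water"]:
    message = assistant.update(prediction)
    if message:
        print(message)
```

The reinforcement-learning-based support module mentioned in the abstract would replace such hand-written rules with a learned policy that selects prompts from observed activity sequences.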


Metadata
Title
Deep-learning-based human activity recognition for Alzheimer’s patients’ daily life activities assistance
Authors
Ahmed Snoun
Tahani Bouchrika
Olfa Jemai
Publication date
04.10.2022
Publisher
Springer London
Published in
Neural Computing and Applications / Issue 2/2023
Print ISSN: 0941-0643
Electronic ISSN: 1433-3058
DOI
https://doi.org/10.1007/s00521-022-07883-1
