
2021 | Original Paper | Book Chapter

Fusion of Multimodal Sensor Data for Effective Human Action Recognition in the Service of Medical Platforms

Authors: Panagiotis Giannakeris, Athina Tsanousa, Thanasis Mavropoulos, Georgios Meditskos, Konstantinos Ioannidis, Stefanos Vrochidis, Ioannis Kompatsiaris

Published in: MultiMedia Modeling

Publisher: Springer International Publishing


Abstract

In what has arguably been one of the most troubling periods of recent medical history, with a global pandemic emphasising the importance of staying healthy, innovative tools that safeguard patient well-being are gaining momentum. To that end, we propose a framework that leverages multimodal data, namely inertial and depth sensor data, that can be integrated into healthcare-oriented platforms and tackles the crucial task of human action recognition (HAR). To analyse a person's movement and consequently assess the patient's condition, a two-fold methodology is presented: first, Kinect-based action representations are constructed from handcrafted 3DHOG depth features combined with the descriptive power of a Fisher encoding scheme. This is complemented by wearable sensor data analysis using time-domain features, further improved by exploring low-cost fusion strategies. Finally, an extensive experimental evaluation yields competitive results on a well-known benchmark dataset and demonstrates the applicability of our methodology for HAR.
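The inertial branch and the fusion step described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; the specific feature set (mean, standard deviation, RMS, range per axis) and the weighted-average late-fusion rule are common choices assumed here for illustration:

```python
import numpy as np

def time_domain_features(window):
    """Extract common time-domain statistics from one inertial-sensor
    window of shape (samples, axes), e.g. accelerometer x/y/z."""
    feats = []
    for axis in window.T:
        feats.extend([
            axis.mean(),                  # mean
            axis.std(),                   # standard deviation
            np.sqrt(np.mean(axis ** 2)),  # root mean square
            axis.max() - axis.min(),      # signal range
        ])
    return np.array(feats)

def late_fusion(prob_depth, prob_inertial, w=0.5):
    """Weighted-average late fusion of per-class probability vectors
    from the depth and inertial classifiers; w weights the depth stream."""
    fused = w * prob_depth + (1.0 - w) * prob_inertial
    return fused / fused.sum()  # renormalise to a probability vector
```

In a late-fusion setup like this, each modality is classified independently and only the class scores are combined, which keeps the fusion cost minimal compared with joint feature-level training.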


Metadata
Title
Fusion of Multimodal Sensor Data for Effective Human Action Recognition in the Service of Medical Platforms
Authors
Panagiotis Giannakeris
Athina Tsanousa
Thanasis Mavropoulos
Georgios Meditskos
Konstantinos Ioannidis
Stefanos Vrochidis
Ioannis Kompatsiaris
Copyright year
2021
DOI
https://doi.org/10.1007/978-3-030-67835-7_31
