Published in: Machine Vision and Applications 5/2018

03.05.2018 | Original Paper

Action detection fusing multiple Kinects and a WIMU: an application to in-home assistive technology for the elderly

Authors: Albert Clapés, Àlex Pardo, Oriol Pujol Vila, Sergio Escalera


Abstract

We present a vision-inertial system that combines two RGB-Depth devices with a wearable inertial movement unit in order to detect activities of daily living. From the multi-view videos, we extract dense trajectories enriched with a histogram-of-normals description computed from the depth cue and bag them into multi-view codebooks. In the subsequent classification step, a multi-class support vector machine with an RBF-\(\chi^2\) kernel combines the descriptions at the kernel level, and a sliding-window approach is used to perform action detection on the videos. In parallel, we extract acceleration, rotation-angle, and jerk features from the inertial data collected by the wearable placed on the user's dominant wrist. For gesture spotting, dynamic time warping is applied and the alignment costs to a set of pre-selected gesture sub-classes are thresholded to determine possible detections. The outputs of the two modules are combined in a late-fusion fashion. The system is validated in a real-case scenario with elderly people from a care home. The learning-based fusion results improve on those of the single modalities, demonstrating the success of such a multimodal approach.
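
To make the kernel-level fusion concrete, the following sketch (not the authors' code; the gamma value, the averaging of per-view kernels, and the bow_view1/bow_view2 arrays are illustrative assumptions) shows how bag-of-words histograms from two camera views could be combined through an RBF-chi-square kernel and fed to a multi-class SVM with a precomputed kernel.

```python
import numpy as np
from sklearn.svm import SVC

def rbf_chi2_kernel(X, Y, gamma=1.0, eps=1e-10):
    """RBF-chi^2 kernel between rows of X and Y (L1-normalized BoW histograms)."""
    # chi^2 distance: sum_i (x_i - y_i)^2 / (x_i + y_i)
    d = ((X[:, None, :] - Y[None, :, :]) ** 2 /
         (X[:, None, :] + Y[None, :, :] + eps)).sum(axis=2)
    return np.exp(-gamma * d)

# Toy data: BoW histograms per view (rows = video segments), plus class labels.
rng = np.random.default_rng(0)
n_train, n_codewords = 40, 64
bow_view1 = rng.random((n_train, n_codewords)); bow_view1 /= bow_view1.sum(1, keepdims=True)
bow_view2 = rng.random((n_train, n_codewords)); bow_view2 /= bow_view2.sum(1, keepdims=True)
labels = rng.integers(0, 3, n_train)  # 3 hypothetical action classes

# Kernel-level fusion: average the per-view RBF-chi^2 kernels.
K_train = 0.5 * (rbf_chi2_kernel(bow_view1, bow_view1) +
                 rbf_chi2_kernel(bow_view2, bow_view2))

clf = SVC(kernel="precomputed", C=10.0)
clf.fit(K_train, labels)

# At test time, the fused kernel is computed between test and training segments.
K_test = 0.5 * (rbf_chi2_kernel(bow_view1[:5], bow_view1) +
                rbf_chi2_kernel(bow_view2[:5], bow_view2))
print(clf.predict(K_test))
```

In a sliding-window detection setting, each window of the video would contribute one row of such bag-of-words histograms, and the classifier scores per window would then be thresholded and merged into detections.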


Footnotes
1
This is not stated in the published manuscript, but in an errata document. Check [46] and the errata document for more detail: http://jhmdb.is.tue.mpg.de/show_file?filename=Errata_JHMDB_ICCV_2013.pdf.
 
2
The concatenation of the Viewpoint Feature Histogram (VFH) and Camera Roll Histogram (CRH).
 
3
Models are class-representative instances against which test gestures are compared in order to compute the DTW matrices; see Sect. 4.2.4 for more details.
 
4
Dense optical flow is computed using [32].
 
5
These two quantities define the size of the dynamic time warping matrix, i.e., \(l_g \times l_M\).
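
As a concrete illustration of footnotes 3 and 5, the sketch below (a toy under stated assumptions, not the authors' implementation; the feature dimensionality, the threshold value, and the unconstrained DTW recursion are illustrative) builds the \(l_g \times l_M\) dynamic time warping matrix between a test gesture and a class-representative model and thresholds the normalized alignment cost to decide a detection.

```python
import numpy as np

def dtw_cost(gesture, model):
    """Alignment cost between a test gesture (l_g x d) and a model (l_M x d)."""
    l_g, l_M = len(gesture), len(model)
    D = np.full((l_g + 1, l_M + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, l_g + 1):
        for j in range(1, l_M + 1):
            cost = np.linalg.norm(gesture[i - 1] - model[j - 1])  # local distance
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Normalize by path length so costs are comparable across gesture lengths.
    return D[l_g, l_M] / (l_g + l_M)

# Toy inertial feature sequences (e.g., acceleration, rotation angles, jerk per frame).
rng = np.random.default_rng(1)
model = rng.standard_normal((30, 6))                              # class model, l_M = 30
test_gesture = model[::2] + 0.05 * rng.standard_normal((15, 6))   # noisy, shorter instance

THRESHOLD = 0.5  # illustrative value; in practice tuned per gesture sub-class
cost = dtw_cost(test_gesture, model)
print(f"alignment cost = {cost:.3f} -> detection: {cost < THRESHOLD}")
```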
 
References
1. Adlam, T., Faulkner, R., Orpwood, R., Jones, K., Macijauskiene, J., Budraitiene, A.: The installation and support of internationally distributed equipment for people with dementia. IEEE Trans. Inf. Technol. Biomed. 8(3), 253–257 (2004)
2. Akl, A., Valaee, S.: Accelerometer-based gesture recognition via dynamic-time warping, affinity propagation, & compressive sensing. In: 2010 IEEE International Conference on Acoustics Speech and Signal Processing (ICASSP), pp. 2270–2273. IEEE (2010)
3. Alexandre, L.A.: 3D descriptors for object and category recognition: a comparative evaluation. In: Workshop on Color-Depth Camera Fusion in Robotics at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vilamoura, Portugal, vol. 1. Citeseer (2012)
4. Amft, O., Junker, H., Troster, G.: Detection of eating and drinking arm gestures using inertial body-worn sensors. In: Proceedings of the 9th IEEE International Symposium on Wearable Computers, 2005, pp. 160–163 (2005)
5. Avci, A., Bosch, S., Marin-Perianu, M., Marin-Perianu, R., Havinga, P.: Activity recognition using inertial sensing for healthcare, wellbeing and sports applications: a survey. In: 2010 23rd International Conference on Architecture of Computing Systems (ARCS), pp. 1–10. VDE (2010)
6. Bagalà, F., Becker, C., Cappello, A., Chiari, L., Aminian, K., Hausdorff, J.M., Zijlstra, W., Klenk, J.: Evaluation of accelerometer-based fall detection algorithms on real-world falls. PLoS One 7(5), e37,062 (2012)
7. Bagheri, M., Gao, Q., Escalera, S., Clapes, A., Nasrollahi, K., Holte, M.B., Moeslund, T.B.: Keep it accurate and diverse: enhancing action recognition performance by ensemble learning. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 22–29 (2015)
8. Banerjee, T., Keller, J.M., Skubic, M., Stone, E.: Day or night activity recognition from video using fuzzy clustering techniques. IEEE Trans. Fuzzy Syst. 22(3), 483–493 (2014)
9. Bao, L., Intille, S.S.: Activity recognition from user-annotated acceleration data. In: Pervasive Computing, pp. 1–17. Springer (2004)
10. Barbosa, I.B., Cristani, M., Del Bue, A., Bazzani, L., Murino, V.: Re-identification with RGB-D sensors. In: Computer Vision–ECCV 2012. Workshops and Demonstrations, pp. 433–442. Springer (2012)
11. Bautista, M.A., Hernández-Vela, A., Ponce, V., Perez-Sala, X., Baró, X., Pujol, O., Angulo, C., Escalera, S.: Probability-based dynamic time warping for gesture recognition on RGB-D data. In: Advances in Depth Image Analysis and Applications, pp. 126–135. Springer (2013)
12. Ben Hadj Mohamed, A., Val, T., Andrieux, L., Kachouri, A.: Assisting people with disabilities through Kinect sensors into a smart house. In: 2013 International Conference on Computer Medical Applications (ICCMA), pp. 1–5. IEEE (2013)
13. Blank, M., Gorelick, L., Shechtman, E., Irani, M., Basri, R.: Actions as space-time shapes. In: Tenth IEEE International Conference on Computer Vision, 2005. ICCV 2005, vol. 2, pp. 1395–1402. IEEE (2005)
14. Bo, A., Hayashibe, M., Poignet, P.: Joint angle estimation in rehabilitation with inertial sensors and its integration with Kinect. In: EMBC'11: 33rd Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 3479–3483. IEEE (2011)
15. Bobick, A.F., Davis, J.W.: The recognition of human movement using temporal templates. IEEE Trans. Pattern Anal. Mach. Intell. 23(3), 257–267 (2001)
16. Booranrom, Y., Watanapa, B., Mongkolnam, P.: Smart bedroom for elderly using Kinect. In: Computer Science and Engineering Conference (ICSEC), 2014 International, pp. 427–432. IEEE (2014)
17. Botia, J.A., Villa, A., Palma, J.: Ambient assisted living system for in-home monitoring of healthy independent elders. Expert Syst. Appl. 39(9), 8136–8148 (2012)
18. Bouchard, K., Bilodeau, J.S., Fortin-Simard, D., Gaboury, S., Bouchard, B., Bouzouane, A.: Human activity recognition in smart homes based on passive RFID localization. In: Proceedings of the 7th International Conference on Pervasive Technologies Related to Assistive Environments, p. 1 (2014)
19. Brendel, W., Todorovic, S.: Activities as time series of human postures. In: Computer Vision–ECCV 2010, pp. 721–734. Springer (2010)
20. Bulling, A., Blanke, U., Schiele, B.: A tutorial on human activity recognition using body-worn inertial sensors. ACM Comput. Surv. 46(3), 1–33 (2014)
21. Casale, P.: Approximate ensemble methods for physical activity recognition applications. ELCVIA 13(2), 22–23 (2014)
22. Chang, S.F., Ellis, D., Jiang, W., Lee, K., Yanagawa, A., Loui, A.C., Luo, J.: Large-scale multimodal semantic concept detection for consumer video. In: Proceedings of the International Workshop on Workshop on Multimedia Information Retrieval, pp. 255–264. ACM (2007)
23. Chattopadhyay, P., Roy, A., Sural, S., Mukhopadhyay, J.: Pose depth volume extraction from RGB-D streams for frontal gait recognition. J. Vis. Commun. Image Represent. 25(1), 53–63 (2014)
24. Chen, C.C., Aggarwal, J.: Modeling human activities as speech. In: 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3425–3432. IEEE (2011)
25. Clapés, A., Reyes, M., Escalera, S.: Multi-modal user identification and object recognition surveillance system. Pattern Recognit. Lett. 34(7), 799–808 (2013)
26. Crispim, C.F., Bathrinarayanan, V., Fosty, B., Konig, A., Romdhane, R., Thonnat, M., Bremond, F.: Evaluation of a monitoring system for event recognition of older people. In: 10th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), pp. 165–170. IEEE (2013)
28. Delachaux, B., Rebetez, J., Perez-Uribe, A., Mejia, H.F.S.: Indoor activity recognition by combining one-vs.-all neural network classifiers exploiting wearable and depth sensors. In: Advances in Computational Intelligence, pp. 216–223. Springer (2013)
29. Dell'Acqua, P., Klompstra, L.V., Jaarsma, T., Samini, A.: An assistive tool for monitoring physical activities in older adults. In: 2013 IEEE 2nd International Conference on Serious Games and Applications for Health (SeGAH), pp. 1–6. IEEE (2013)
30. Dubois, A., Charpillet, F.: Human activities recognition with RGB-depth camera using HMM. In: 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 4666–4669. IEEE (2013)
31. Escalera, S., Baró, X., González, J., Bautista, M.A., Madadi, M., Reyes, M., Ponce, V., Escalante, H.J., Shotton, J., Guyon, I.: ChaLearn looking at people challenge 2014: dataset and results. In: ECCV Workshops (2014)
32. Farnebäck, G.: Two-frame motion estimation based on polynomial expansion. In: Image Analysis, pp. 363–370. Springer (2003)
34. Fernandez-Sanchez, E.J., Diaz, J., Ros, E.: Background subtraction based on color and depth using active sensors. Sensors 13(7), 8895–8915 (2013)
35. Gaidon, A., Harchaoui, Z., Schmid, C.: Temporal localization of actions with actoms. IEEE Trans. Pattern Anal. Mach. Intell. 35(11), 2782–2795 (2013)
38. Golby, C., Raja, V., Hundt, G.L., Badiyani, S.: A low cost 'activities of daily living' assessment system for the continual assessment of post-stroke patients, from inpatient/outpatient rehabilitation through to telerehabilitation. Successes and Failures in Telehealth. http://wrap.warwick.ac.uk/42341/ (2011)
40. Helten, T., Muller, M., Seidel, H.P., Theobalt, C.: Real-time body tracking with one depth camera and inertial sensors. In: 2013 IEEE International Conference on Computer Vision (ICCV), pp. 1105–1112. IEEE (2013)
41. Hernández-Vela, A., Bautista, M.A., Perez-Sala, X., Ponce, V., Baró, X., Pujol, O., Angulo, C., Escalera, S.: BoVDW: bag-of-visual-and-depth-words for gesture recognition. In: 2012 21st International Conference on Pattern Recognition (ICPR), pp. 449–452. IEEE (2012)
42. Hondori, H.M., Khademi, M., Lopes, C.V.: Monitoring intake gestures using sensor fusion (Microsoft Kinect and inertial sensors) for smart home tele-rehab setting. In: 2012 1st Annual IEEE Healthcare Innovation Conference (2012)
43. Hongeng, S., Nevatia, R., Bremond, F.: Video-based event recognition: activity representation and probabilistic recognition methods. Comput. Vis. Image Underst. 96(2), 129–162 (2004)
44. Jafari, R., Li, W., Bajcsy, R., Glaser, S., Sastry, S.: Physical activity monitoring for assisted living at home. In: 4th International Workshop on Wearable and Implantable Body Sensor Networks (BSN 2007), pp. 213–219. Springer (2007)
45. Jain, M., Van Gemert, J., Jégou, H., Bouthemy, P., Snoek, C.G.: Action localization with tubelets from motion. In: 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 740–747. IEEE (2014)
46. Jhuang, H., Gall, J., Zuffi, S., Schmid, C., Black, M.J.: Towards understanding action recognition. In: 2013 IEEE International Conference on Computer Vision (ICCV), pp. 3192–3199. IEEE (2013)
48. Karantonis, D.M., Narayanan, M.R., Mathie, M., Lovell, N.H., Celler, B.G.: Implementation of a real-time human movement classifier using a triaxial accelerometer for ambulatory monitoring. IEEE Trans. Inf. Technol. Biomed. 10(1), 156–167 (2006)
49. Karpathy, A., Toderici, G., Shetty, S., Leung, T., Sukthankar, R., Fei-Fei, L.: Large-scale video classification with convolutional neural networks. In: 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1725–1732. IEEE (2014)
50. Ke, Y., Sukthankar, R., Hebert, M.: Volumetric features for video event detection. Int. J. Comput. Vis. 88(3), 339–362 (2010)
51. Kim, J., Yang, S., Gerla, M.: StrokeTrack: wireless inertial motion tracking of human arms for stroke telerehabilitation. In: Proceedings of the First ACM Workshop on Mobile Systems, Applications, and Services for Healthcare, p. 4. ACM (2011)
52. Kim, T.K., Cipolla, R.: Canonical correlation analysis of video volume tensors for action categorization and detection. IEEE Trans. Pattern Anal. Mach. Intell. 31(8), 1415–1428 (2009)
53. Kong, W., Sessa, S., Cosentino, S., Zecca, M., Saito, K., Wang, C., Imtiaz, U., Lin, Z., Bartolomeo, L., Ishii, H., Ikai, T., Takanishi, A.: Development of a real-time IMU-based motion capture system for gait rehabilitation. In: 2013 IEEE International Conference on Robotics and Biomimetics (ROBIO), pp. 2100–2105 (2013)
54. Kratz, S., Back, M.: Towards accurate automatic segmentation of IMU-tracked motion gestures. In: Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems, pp. 1337–1342. ACM (2015)
55. Kwolek, B., Kepski, M.: Improving fall detection by the use of depth sensor and accelerometer. Neurocomputing 168, 637–645 (2015)
56. Laptev, I.: On space-time interest points. Int. J. Comput. Vis. 64(2–3), 107–123 (2005)
57. Laptev, I., Marszałek, M., Schmid, C., Rozenfeld, B.: Learning realistic human actions from movies. In: IEEE Conference on Computer Vision and Pattern Recognition, 2008. CVPR 2008, pp. 1–8. IEEE (2008)
58. Lara, O.D., Labrador, M.A.: A survey on human activity recognition using wearable sensors. IEEE Commun. Surv. Tutor. 15(3), 1192–1209 (2013)
59. Lei, J., Ren, X., Fox, D.: Fine-grained kitchen activity recognition using RGB-D. In: Proceedings of the 2012 ACM Conference on Ubiquitous Computing, pp. 208–211. ACM (2012)
60. Li, B.Y., Mian, A.S., Liu, W., Krishna, A.: Using Kinect for face recognition under varying poses, expressions, illumination and disguise. In: 2013 IEEE Workshop on Applications of Computer Vision (WACV), pp. 186–192. IEEE (2013)
61. Li, W., Zhang, Z., Liu, Z.: Action recognition based on a bag of 3D points. In: 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 9–14. IEEE (2010)
62. Liang, B., Zheng, L.: Spatio-temporal pyramid cuboid matching for action recognition using depth maps. In: International Conference on Image Processing 2015 (ICIP 2015) (2015)
63. Liu, J., Zhong, L., Wickramasuriya, J., Vasudevan, V.: uWave: accelerometer-based personalized gesture recognition and its applications. Pervasive Mobile Comput. 5(6), 657–675 (2009)
64. Liu, K., Chen, C., Jafari, R., Kehtarnavaz, N.: Fusion of inertial and depth sensor data for robust hand gesture recognition. IEEE Sens. J. 14(6), 1898–1903 (2014)
65. Lombriser, C., Bharatula, N.B., Roggen, D., Tröster, G.: On-body activity recognition in a dynamic sensor network. In: Proceedings of the ICST 2nd International Conference on Body Area Networks, p. 17. ICST (Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering) (2007)
66. Luinge, H.J., Veltink, P.H.: Measuring orientation of human body segments using miniature gyroscopes and accelerometers. Med. Biol. Eng. Comput. 43(2), 273–282 (2005)
67. Mace, D., Gao, W., Coskun, A.: Accelerometer-based hand gesture recognition using feature weighted naïve Bayesian classifiers and dynamic time warping. In: Proceedings of the Companion Publication of the 2013 International Conference on Intelligent User Interfaces Companion, pp. 83–84. ACM (2013)
68. Memon, M., Wagner, S.R., Pedersen, C.F., Beevi, F.H.A., Hansen, F.O.: Ambient assisted living healthcare frameworks, platforms, standards, and quality attributes. Sensors 14(3), 4312–4341 (2014)
69. Mogelmose, A., Bahnsen, C., Moeslund, T.B., Clapés, A., Escalera, S.: Tri-modal person re-identification with RGB, depth and thermal features. In: 2013 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 301–307. IEEE (2013)
70. Mubashir, M., Shao, L., Seed, L.: A survey on fall detection: principles and approaches. Neurocomputing 100, 144–152 (2013)
71. Nait-Charif, H., McKenna, S.J.: Activity summarisation and fall detection in a supportive home environment. In: Proceedings of the 17th International Conference on Pattern Recognition, 2004. ICPR 2004, vol. 4, pp. 323–326. IEEE (2004)
72. Natarajan, P., Wu, S., Vitaladevuni, S., Zhuang, X., Tsakalidis, S., Park, U., Prasad, R., Natarajan, P.: Multimodal feature fusion for robust event detection in web videos. In: 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1298–1305 (2012). https://doi.org/10.1109/CVPR.2012.6247814
73. Ni, B., Wang, G., Moulin, P.: RGBD-HuDaAct: a color-depth video database for human daily activity recognition. In: Fossati, A., Gall, J., Grabner, H., Ren, X., Konolige, K. (eds.) Consumer Depth Cameras for Computer Vision, pp. 193–208. Springer, Berlin (2013)
74. Nikisins, O., Nasrollahi, K., Greitans, M., Moeslund, T.B.: RGB-D-T based face recognition. In: 2014 22nd International Conference on Pattern Recognition (ICPR), pp. 1716–1721. IEEE (2014)
75. Oliver, N., Garg, A., Horvitz, E.: Layered representations for learning and inferring office activity from multiple sensory channels. Comput. Vis. Image Underst. 96(2), 163–180 (2004)
76. Pardo, À., Clapés, A., Escalera, S., Pujol, O.: Actions in context: system for people with dementia. In: Nin, J., Villatoro, D. (eds.) Citizen in Sensor Networks, pp. 3–14. Springer, Berlin (2014)
77. Piyathilaka, L., Kodagoda, S.: Gaussian mixture based HMM for human daily activity recognition using 3D skeleton features. In: 2013 8th IEEE Conference on Industrial Electronics and Applications (ICIEA), pp. 567–572. IEEE (2013)
78. Pylvänäinen, T.: Accelerometer based gesture recognition using continuous HMMs. In: Pattern Recognition and Image Analysis, pp. 639–646. Springer (2005)
79. Rashidi, P., Cook, D.J.: Keeping the resident in the loop: adapting the smart home to the user. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 39(5), 949–959 (2009)
80. Reyes, M., Domínguez, G., Escalera, S.: Feature weighting in dynamic time warping for gesture recognition in depth data. In: 2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops), pp. 1182–1188. IEEE (2011)
81. Ribeiro, P.C., Santos-Victor, J.: Human activity recognition from video: modeling, feature selection and classification architecture. In: Proceedings of International Workshop on Human Activity Recognition and Modelling, pp. 61–78. Citeseer (2005)
83. Rodriguez, M.D., Ahmed, J., Shah, M.: Action MACH: a spatio-temporal maximum average correlation height filter for action recognition. In: IEEE Conference on Computer Vision and Pattern Recognition, 2008. CVPR 2008, pp. 1–8. IEEE (2008)
84. Sadanand, S., Corso, J.J.: Action bank: a high-level representation of activity in video. In: 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1234–1241. IEEE (2012)
85. Saha, S., Pal, M., Konar, A., Janarthanan, R.: Neural network based gesture recognition for elderly health care using Kinect sensor. In: Swarm, Evolutionary, and Memetic Computing, pp. 376–386. Springer (2013)
86. Schindler, K., Van Gool, L.: Action snippets: how many frames does human action recognition require? In: IEEE Conference on Computer Vision and Pattern Recognition, 2008. CVPR 2008, pp. 1–8. IEEE (2008)
87. Schüldt, C., Laptev, I., Caputo, B.: Recognizing human actions: a local SVM approach. In: Proceedings of the 17th International Conference on Pattern Recognition, 2004. ICPR 2004, vol. 3, pp. 32–36. IEEE (2004)
88. Shotton, J., Sharp, T., Kipman, A., Fitzgibbon, A., Finocchio, M., Blake, A., Cook, M., Moore, R.: Real-time human pose recognition in parts from single depth images. Commun. ACM 56(1), 116–124 (2013)
89. Snoek, C.G., Worring, M., Smeulders, A.W.: Early versus late fusion in semantic video analysis. In: Proceedings of the 13th Annual ACM International Conference on Multimedia, pp. 399–402. ACM (2005)
90. Sung, J., Ponce, C., Selman, B., Saxena, A.: Unstructured human activity detection from RGBD images. In: 2012 IEEE International Conference on Robotics and Automation (ICRA), pp. 842–849. IEEE (2012)
91. Tang, K., Fei-Fei, L., Koller, D.: Learning latent temporal structure for complex event detection. In: 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1250–1257. IEEE (2012)
92. Tang, S., Wang, X., Lv, X., Han, T.X., Keller, J., He, Z., Skubic, M., Lao, S.: Histogram of oriented normal vectors for object recognition with a depth sensor. In: Computer Vision–ACCV 2012, pp. 525–538. Springer (2012)
93. Ullah, M.M., Parizi, S.N., Laptev, I.: Improving bag-of-features action recognition with non-local cues. In: BMVC, vol. 10, pp. 95–1. Citeseer (2010)
94. Van Hoof, J., Kort, H., Rutten, P., Duijnstee, M.: Ageing-in-place with the use of ambient intelligence technology: perspectives of older users. Int. J. Med. Inform. 80(5), 310–331 (2011)
95. Vedaldi, A., Zisserman, A.: Efficient additive kernels via explicit feature maps. IEEE Trans. Pattern Anal. Mach. Intell. 34(3), 480–492 (2012)
96.
97. Wang, H., Kläser, A., Schmid, C., Liu, C.L.: Action recognition by dense trajectories. In: 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3169–3176. IEEE (2011)
98. Wang, H., Schmid, C.: Action recognition with improved trajectories. In: 2013 IEEE International Conference on Computer Vision (ICCV), pp. 3551–3558. IEEE (2013)
99. Weinzaepfel, P., Harchaoui, Z., Schmid, C.: Learning to track for spatio-temporal action localization (2015). arXiv preprint arXiv:1506.01929
100. World Health Organization, Alzheimer's Disease International: Dementia: a public health priority. World Health Organization, Geneva (2012)
101. Wu, D., Zhu, F., Shao, L.: One shot learning gesture recognition from RGBD images. In: 2012 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 7–12. IEEE (2012)
102. Wu, J., Osuntogun, A., Choudhury, T., Philipose, M., Rehg, J.M.: A scalable approach to activity recognition based on object use. In: IEEE 11th International Conference on Computer Vision, 2007. ICCV 2007, pp. 1–8. IEEE (2007)
103. Xiao, Y., Zhao, G., Yuan, J., Thalmann, D.: Activity recognition in unconstrained RGB-D video using 3D trajectories. In: SIGGRAPH Asia 2014 Autonomous Virtual Humans and Social Robot for Telepresence, SA '14, pp. 4:1–4:4. ACM, New York, NY, USA (2014). https://doi.org/10.1145/2668956.2668961
104. Xu, Z., Yang, Y., Hauptmann, A.G.: A discriminative CNN video representation for event detection (2014). arXiv preprint arXiv:1411.4006
105. Yang, J.Y., Wang, J.S., Chen, Y.P.: Using acceleration measurements for activity recognition: an effective learning algorithm for constructing neural classifiers. Pattern Recognit. Lett. 29(16), 2213–2220 (2008)
106. Zhang, B., Jiang, S., Wei, D., Marschollek, M., Zhang, W.: State of the art in gait analysis using wearable sensors for healthcare applications. In: 2012 IEEE/ACIS 11th International Conference on Computer and Information Science (ICIS), pp. 213–218. IEEE (2012)
107. Zhang, C., Tian, Y.: RGB-D camera-based daily living activity recognition. J. Comput. Vis. Image Process. 2(4), 12 (2012)
108. Zhao, Y., Liu, Z., Yang, L., Cheng, H.: Combing RGB and depth map features for human activity recognition. In: Signal & Information Processing Association Annual Summit and Conference (APSIPA ASC), 2012 Asia-Pacific, pp. 1–4. IEEE (2012)
109. Zhou, F., Jiao, J., Chen, S., Zhang, D.: A case-driven ambient intelligence system for elderly in-home assistance applications. IEEE Trans. Syst. Man Cybern. Part C Appl. Rev. 41(2), 179–189 (2011)
110. Zhu, C., Sheng, W.: Multi-sensor fusion for human daily activity recognition in robot-assisted living. In: Proceedings of the 4th ACM/IEEE International Conference on Human Robot Interaction, pp. 303–304. ACM (2009)
Metadata
Title
Action detection fusing multiple Kinects and a WIMU: an application to in-home assistive technology for the elderly
Authors
Albert Clapés
Àlex Pardo
Oriol Pujol Vila
Sergio Escalera
Publication date
03.05.2018
Publisher
Springer Berlin Heidelberg
Published in
Machine Vision and Applications / Issue 5/2018
Print ISSN: 0932-8092
Electronic ISSN: 1432-1769
DOI
https://doi.org/10.1007/s00138-018-0931-1
