Published in: Machine Vision and Applications 1-2/2017

10.12.2016 | Original Paper

Online human moves recognition through discriminative key poses and speed-aware action graphs

Authors: Thales Vieira, Romain Faugeroux, Dimas Martínez, Thomas Lewiner


Abstract

Recognizing user-defined moves serves many applications, including sport monitoring, virtual reality, and natural user interfaces (NUI). However, most efficient human move recognition methods remain limited to specific situations, such as straightforward NUI gestures or everyday human actions. In particular, most methods depend on a prior segmentation of recordings both to train and to recognize moves. This segmentation step is generally performed manually or based on heuristics such as neutral poses or short pauses, limiting the range of applications. Moreover, speed is generally not considered as a criterion to distinguish moves. We present an approach composed of a simplified move training phase that requires minimal user intervention, together with a novel method to robustly recognize moves online from unsegmented data, without requiring any transitional pauses or neutral poses, and additionally taking human move speed into account. Trained gestures are automatically segmented in real time by a curvature-based method that detects small pauses during a training session. A set of the most discriminative key poses between different moves is also extracted in real time, optimizing the number of key poses. Altogether, this semi-supervised learning approach only requires continuous move performances from the user, separated by small pauses. Key pose transitions and move execution speeds are the input to a novel human move recognition algorithm that recognizes unsegmented moves online, achieving high robustness and very low latency in our experiments, while also distinguishing moves that differ only in speed.
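To illustrate the training-time segmentation idea described in the abstract: the paper detects small pauses with a curvature-based method on joint trajectories, but the same intuition can be sketched with a much simpler velocity-magnitude pause detector. The sketch below is an illustrative assumption, not the authors' algorithm: a frame transition counts as "moving" when the mean joint displacement exceeds a threshold, and sufficiently long moving runs become candidate move segments.

```python
import numpy as np

def segment_by_pauses(frames, speed_threshold=0.05, min_move_frames=5):
    """Split a continuous stream of skeleton poses into moves at pauses.

    frames: array of shape (T, J, 3), joint positions per frame.
    Returns a list of (start, end) half-open ranges over the T-1
    inter-frame intervals whose speed stays above the threshold.
    Note: this is a simplified stand-in for the curvature-based
    pause detection used in the paper.
    """
    # speeds[t] = mean joint displacement between frames t and t+1
    speeds = np.linalg.norm(np.diff(frames, axis=0), axis=2).mean(axis=1)
    active = speeds >= speed_threshold

    segments, start = [], None
    for t, moving in enumerate(active):
        if moving and start is None:
            start = t                      # a move begins
        elif not moving and start is not None:
            if t - start >= min_move_frames:
                segments.append((start, t))  # a pause ends the move
            start = None
    if start is not None and len(active) - start >= min_move_frames:
        segments.append((start, len(active)))
    return segments
```

On a synthetic stream alternating still and moving phases, the detector recovers one segment per moving phase; too-short bursts are discarded by `min_move_frames`, which plays the role of requiring a "small pause" between trained moves.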


Metadata
Title
Online human moves recognition through discriminative key poses and speed-aware action graphs
Authors
Thales Vieira
Romain Faugeroux
Dimas Martínez
Thomas Lewiner
Publication date
10.12.2016
Publisher
Springer Berlin Heidelberg
Published in
Machine Vision and Applications / Issue 1-2/2017
Print ISSN: 0932-8092
Electronic ISSN: 1432-1769
DOI
https://doi.org/10.1007/s00138-016-0818-y
