2018 | OriginalPaper | Chapter

11. Context-Aware Human-Robot Collaborative Assembly

Authors : Lihui Wang, Xi Vincent Wang

Published in: Cloud-Based Cyber-Physical Systems in Manufacturing

Publisher: Springer International Publishing


Abstract

In human-robot collaborative manufacturing, industrial robots work alongside human workers, jointly performing assigned tasks. Recent research has shown that recognised human motions can be used as input for industrial robot control. However, human-robot teams still cannot work symbiotically. In response to this need, this chapter explores the potential of establishing context awareness between a human worker and an industrial robot for human-robot collaborative assembly. The context awareness is established by applying gesture recognition, human motion recognition, and Augmented Reality (AR) based worker instruction technologies. The system operates in a cyber-physical environment and is demonstrated through case studies.
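The idea of using recognised human motions as robot control input can be illustrated with a minimal sketch. This is not the authors' implementation: the gesture labels, command names, and confidence threshold below are illustrative assumptions, standing in for whatever vocabulary a real gesture recogniser and robot controller would expose.

```python
# Minimal sketch (assumptions, not the chapter's system): map recognised
# worker gestures to robot commands, ignoring low-confidence recognitions
# so that an uncertain classification never moves the robot.

# Hypothetical gesture-to-command vocabulary.
ROBOT_COMMANDS = {
    "point_left": "move_left",
    "point_right": "move_right",
    "open_palm": "stop",
    "fist": "grip",
}

def gesture_to_command(gesture: str, confidence: float,
                       threshold: float = 0.8) -> str:
    """Return a robot command for a recognised gesture, or 'idle' when
    the recognition confidence is below the threshold or the gesture
    is not in the vocabulary."""
    if confidence < threshold:
        return "idle"  # safety first: discard uncertain recognitions
    return ROBOT_COMMANDS.get(gesture, "idle")
```

In a real cyber-physical setup, the recogniser (e.g. a depth-camera pipeline) would publish `(gesture, confidence)` pairs continuously, and the confidence gate plus the restricted vocabulary act as a simple safeguard against spurious robot motion.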


Metadata

DOI
https://doi.org/10.1007/978-3-319-67693-7_11