
2018 | Original Paper | Book Chapter

Augmented, Mixed, and Virtual Reality Enabling of Robot Deixis


Abstract

When humans interact with each other, they often make use of deictic gestures such as pointing to help pick out targets of interest to their conversation. In the field of Human-Robot Interaction, research has repeatedly demonstrated the utility of enabling robots to use such gestures as well. Recent work in augmented, mixed, and virtual reality stands to enable enormous advances in robot deixis, both by allowing robots to gesture in ways that were not previously feasible, and by enabling gesture on robotic platforms and in environmental contexts in which gesture was not previously feasible. In this paper, we summarize our own recent work on using augmented-, mixed-, and virtual-reality techniques to advance the state of the art of robot-generated deixis.


Footnotes
1. Excepting, for the purposes of this paper, robots that are distributed across multiple sub-bodies in the environment [38].
 
2. The SVD, while more expensive to compute, provides better accuracy and numerical stability for Cartesian control than the LU decomposition.
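To make the trade-off in this footnote concrete, here is a minimal NumPy sketch of SVD-based Cartesian velocity control: the desired end-effector twist is mapped to joint velocities through a pseudoinverse in which small singular values are truncated. The Jacobian and twist below are made-up stand-ins for illustration, not the kinematics of any robot discussed in the paper.

```python
import numpy as np

def cartesian_velocity_svd(jacobian, twist, tol=1e-6):
    """Map a desired end-effector twist to joint velocities using the
    SVD-based pseudoinverse. Singular values below `tol` are zeroed out,
    which is what gives the SVD its numerical stability near singular
    configurations, where a direct LU-based solve would amplify noise."""
    u, s, vt = np.linalg.svd(jacobian, full_matrices=False)
    s_inv = np.where(s > tol, 1.0 / s, 0.0)  # truncate tiny singular values
    return vt.T @ (s_inv * (u.T @ twist))

# Example: a redundant 7-DoF arm, so the Jacobian is 6x7 (fixed random values).
rng = np.random.default_rng(0)
J = rng.standard_normal((6, 7))
twist = np.array([0.1, 0.0, -0.05, 0.0, 0.0, 0.02])  # desired [v; w]
dq = cartesian_velocity_svd(J, twist)
```

For a full-row-rank Jacobian this returns the minimum-norm joint velocity that exactly reproduces the commanded twist, i.e. it agrees with `np.linalg.pinv(J) @ twist`.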
 
3. While in this work we use the Softbank Pepper robot, our general framework is not necessarily specific to this particular robot.
 
4. In this work, we use a single camera, as Pepper has a single RGB camera rather than stereo cameras. In the future, we hope to use stereoscopic vision as input for a more immersive VR experience. In addition, images could just as easily be streamed to this app from a ROS simulator (e.g., Gazebo [61]) or from some other source.
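The footnote leaves the streaming transport unspecified. As one illustration of the idea (an assumption on our part, not the paper's actual implementation), a camera or simulator source can forward encoded frames to a VR client over a length-prefixed TCP stream; the loopback demo below uses placeholder bytes in place of a real JPEG frame.

```python
import socket
import struct
import threading

def send_frame(conn, frame: bytes):
    # Length-prefixed framing: 4-byte big-endian size, then the frame bytes.
    conn.sendall(struct.pack(">I", len(frame)) + frame)

def _recv_exact(conn, n: int) -> bytes:
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("stream closed mid-frame")
        buf += chunk
    return buf

def recv_frame(conn) -> bytes:
    (size,) = struct.unpack(">I", _recv_exact(conn, 4))
    return _recv_exact(conn, size)

# Loopback demo: a "simulator" thread sends one stand-in frame to the "VR app".
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

fake_jpeg = b"\xff\xd8" + b"\x00" * 64 + b"\xff\xd9"  # placeholder, not a real image

def simulator():
    conn, _ = server.accept()
    with conn:
        send_frame(conn, fake_jpeg)

t = threading.Thread(target=simulator)
t.start()
client = socket.create_connection(("127.0.0.1", port))
received = recv_frame(client)
t.join()
client.close()
server.close()
```

In a ROS setup the same framing could sit behind a subscriber to the camera image topic, decoupling the VR app from whether frames come from hardware or from Gazebo.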
 
References
1. McNeill, D.: Hand and Mind: What Gestures Reveal About Thought. University of Chicago Press, Chicago (1992)
2. Fillmore, C.J.: Towards a descriptive framework for spatial deixis. In: Speech, Place and Action: Studies in Deixis and Related Topics, pp. 31–59 (1982)
3. Salem, M., Kopp, S., Wachsmuth, I., Rohlfing, K., Joublin, F.: Generation and evaluation of communicative robot gesture. Int. J. Soc. Robot. 4(2), 201–217 (2012)
4. Huang, C.M., Mutlu, B.: Modeling and evaluating narrative gestures for humanlike robots. In: Robotics: Science and Systems, pp. 57–64 (2013)
5. Holladay, R.M., Dragan, A.D., Srinivasa, S.S.: Legible robot pointing. In: 2014 RO-MAN: The 23rd IEEE International Symposium on Robot and Human Interactive Communication, pp. 217–223. IEEE (2014)
6. Sauppé, A., Mutlu, B.: Robot deictics: how gesture and context shape referential communication. In: Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction, pp. 342–349. ACM (2014)
7. Gulzar, K., Kyrki, V.: See what I mean: probabilistic optimization of robot pointing gestures. In: Proceedings of the Fifteenth IEEE/RAS International Conference on Humanoid Robots (Humanoids), pp. 953–958. IEEE (2015)
8. Admoni, H., Weng, T., Scassellati, B.: Modeling communicative behaviors for object references in human-robot interaction. In: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pp. 3352–3359. IEEE (2016)
9. Williams, T., Szafir, D., Chakraborti, T., Ben Amor, H.: Virtual, augmented, and mixed reality for human-robot interaction. In: Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, pp. 403–404. ACM (2018)
10. Williams, T.: A consultant framework for natural language processing in integrated robot architectures. IEEE Intell. Inform. Bull. 18(1), 10–14 (2017)
11. Williams, T.: Situated natural language interaction in uncertain and open worlds. Ph.D. thesis, Tufts University (2017)
12. Kunze, L., Williams, T., Hawes, N., Scheutz, M.: Spatial referring expression generation for HRI: algorithms and evaluation framework. In: AAAI Fall Symposium on AI and HRI (2017)
13. Williams, T., Scheutz, M.: Referring expression generation under uncertainty: algorithm and evaluation framework. In: Proceedings of the 10th International Conference on Natural Language Generation (2017)
14. Williams, T., Scheutz, M.: Resolution of referential ambiguity in human-robot dialogue using Dempster-Shafer theoretic pragmatics. In: Proceedings of Robotics: Science and Systems (2017)
15. Briggs, G., Williams, T., Scheutz, M.: Enabling robots to understand indirect speech acts in task-based interactions. J. Hum.-Robot Interact. 26, 64–94 (2017)
16. Williams, T., Briggs, G., Oosterveld, B., Scheutz, M.: Going beyond literal command-based instructions: extending robotic natural language interaction capabilities. In: Proceedings of the 29th AAAI Conference on Artificial Intelligence (2015)
17. Williams, T., Thames, D., Novakoff, J., Scheutz, M.: Thank you for sharing that interesting fact!: effects of capability and context on indirect speech act use in task-based human-robot dialogue. In: Proceedings of the 13th ACM/IEEE International Conference on Human-Robot Interaction (2018)
18. Fussell, S.R., Setlock, L.D., Kraut, R.E.: Effects of head-mounted and scene-oriented video systems on remote collaboration on physical tasks. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 513–520. ACM (2003)
19. Kraut, R.E., Fussell, S.R., Siegel, J.: Visual information as a conversational resource in collaborative physical tasks. Hum.-Comput. Interact. 18(1), 13–49 (2003)
20. Datcu, D., Cidota, M., Lukosch, H., Lukosch, S.: On the usability of augmented reality for information exchange in teams from the security domain. In: 2014 IEEE Joint Intelligence and Security Informatics Conference (JISIC), pp. 160–167. IEEE (2014)
21. White, S., Lister, L., Feiner, S.: Visual hints for tangible gestures in augmented reality. In: 2007 6th IEEE and ACM International Symposium on Mixed and Augmented Reality, ISMAR 2007, pp. 47–50. IEEE (2007)
22. Fussell, S.R., Setlock, L.D., Yang, J., Ou, J., Mauer, E., Kramer, A.D.: Gestures over video streams to support remote collaboration on physical tasks. Hum.-Comput. Interact. 19(3), 273–309 (2004)
23. Wahlster, W., André, E., Graf, W., Rist, T.: Designing illustrated texts: how language production is influenced by graphics generation. In: Proceedings of the Fifth Conference on European Chapter of the Association for Computational Linguistics, pp. 8–14. Association for Computational Linguistics (1991)
24. Wazinski, P.: Generating spatial descriptions for cross-modal references. In: Proceedings of the Third Conference on Applied Natural Language Processing, pp. 56–63. Association for Computational Linguistics (1992)
25. Green, S.A., Billinghurst, M., Chen, X., Chase, J.G.: Human robot collaboration: an augmented reality approach - a literature review and analysis. In: Proceedings of the ASME International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, pp. 117–126. American Society of Mechanical Engineers (2007)
26. Green, S., Billinghurst, M., Chen, X., Chase, G.: Human-robot collaboration: a literature review and augmented reality approach in design. Int. J. Adv. Robot. Syst. 5, 1 (2008)
27. Billinghurst, M., Clark, A., Lee, G.: A survey of augmented reality. Found. Trends Hum.-Comput. Interact. 8(2–3), 73–272 (2015)
28. Sibirtseva, E., Kontogiorgos, D., Nykvist, O., Karaoguz, H., Leite, I., Gustafson, J., Kragic, D.: A comparison of visualisation methods for disambiguating verbal requests in human-robot interaction. arXiv preprint arXiv:1801.08760 (2018)
29. Green, S.A., Chase, J.G., Chen, X., Billinghurst, M.: Evaluating the augmented reality human-robot collaboration system. Int. J. Intell. Syst. Technol. Appl. 8(1–4), 130–143 (2009)
30. Andersen, R.S., Madsen, O., Moeslund, T.B., Amor, H.B.: Projecting robot intentions into human environments. In: 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), pp. 294–301. IEEE (2016)
31. Chadalavada, R.T., Andreasson, H., Krug, R., Lilienthal, A.J.: That's on my mind! Robot-to-human intention communication through on-board projection on shared floor space. In: 2015 European Conference on Mobile Robots (ECMR), pp. 1–6. IEEE (2015)
32. Frank, J.A., Moorhead, M., Kapila, V.: Mobile mixed-reality interfaces that enhance human-robot interaction in shared spaces. Front. Robot. AI 4, 20 (2017)
33. Ganesan, R.K.: Mediating human-robot collaboration through mixed reality cues. Master's thesis, Arizona State University (2017)
34. Katzakis, N., Steinicke, F.: Excuse me! Perception of abrupt direction changes using body cues and paths on mixed reality avatars. arXiv preprint arXiv:1801.05085 (2018)
35. Rosen, E., Whitney, D., Phillips, E., Chien, G., Tompkin, J., Konidaris, G., Tellex, S.: Communicating robot arm motion intent through mixed reality head-mounted displays. arXiv preprint arXiv:1708.03655 (2017)
36. Walker, M., Hedayati, H., Lee, J., Szafir, D.: Communicating robot motion intent with augmented reality. In: Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, pp. 316–324. ACM (2018)
37. Williams, T.: A framework for robot-generated mixed-reality deixis. In: Proceedings of the 1st International Workshop on Virtual, Augmented, and Mixed Reality for HRI (VAM-HRI) (2018)
38. Oosterveld, B., Brusatin, L., Scheutz, M.: Two bots, one brain: component sharing in cognitive robotic architectures. In: Proceedings of the 12th ACM/IEEE International Conference on Human-Robot Interaction Video Contest (2017)
39. Correia, F., Alves-Oliveira, P., Maia, N., Ribeiro, T., Petisca, S., Melo, F.S., Paiva, A.: Just follow the suit! Trust in human-robot interactions during card game playing. In: 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), pp. 507–512. IEEE (2016)
40. Bethel, C.L., Carruth, D., Garrison, T.: Discoveries from integrating robots into SWAT team training exercises. In: 2012 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), pp. 1–8. IEEE (2012)
41. Goldfine, S.: Assessing the prospects of security robots, October 2017
42. Kotchetkov, I.S., Hwang, B.Y., Appelboom, G., Kellner, C.P., Connolly Jr., E.S.: Brain-computer interfaces: military, neurosurgical, and ethical perspective. Neurosurg. Focus 28(5), E25 (2010)
43. Dragan, A.D., Lee, K.C., Srinivasa, S.S.: Legibility and predictability of robot motion. In: 2013 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 301–308. IEEE (2013)
45. Dantam, N.T., Chaudhuri, S., Kavraki, L.E.: The task motion kit. Robot. Autom. Mag. (2018, accepted)
46. Diankov, R.: Automated construction of robotic manipulation programs. Ph.D. thesis, Carnegie Mellon University, Robotics Institute, August 2010
47. Hartenberg, R.S., Denavit, J.: Kinematic Synthesis of Linkages. McGraw-Hill, New York (1964)
50. Tran, N., Rands, J., Williams, T.: A hands-free virtual-reality teleoperation interface for wizard-of-oz control. In: Proceedings of the 1st International Workshop on Virtual, Augmented, and Mixed Reality for HRI (VAM-HRI) (2018)
51. Riek, L.D.: Wizard of Oz studies in HRI: a systematic review and new reporting guidelines. J. Hum.-Robot Interact. 1(1), 119–136 (2012)
52. Bonial, C., Marge, M., Foots, A., Gervits, F., Hayes, C.J., Henry, C., Hill, S.G., Leuski, A., Lukin, S.M., Moolchandani, P., et al.: Laying down the yellow brick road: development of a wizard-of-oz interface for collecting human-robot dialogue. arXiv preprint arXiv:1710.06406 (2017)
53. Chen, J.Y.C., Haas, E.C., Barnes, M.J.: Human performance issues and user interface design for teleoperated robots. IEEE Trans. Syst. Man Cybern. Part C (Appl. Rev.) 37(6), 1231–1245 (2007)
54. Nawab, A., Chintamani, K., Ellis, D., Auner, G., Pandya, A.: Joystick mapped augmented reality cues for end-effector controlled tele-operated robots. In: 2007 IEEE Virtual Reality Conference, pp. 263–266, March 2007
55. Hedayati, H., Walker, M., Szafir, D.: Improving collocated robot teleoperation with augmented reality. In: Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, pp. 78–86. ACM (2018)
56. Miner, N.E., Stansfield, S.A.: An interactive virtual reality simulation system for robot control and operator training. In: Proceedings of the 1994 IEEE International Conference on Robotics and Automation, vol. 2, pp. 1428–1435, May 1994
57. Rakita, D., Mutlu, B., Gleicher, M.: An autonomous dynamic camera method for effective remote teleoperation. In: Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, pp. 325–333. ACM (2018)
58. Pititeeraphab, Y., Choitkunnan, P., Thongpance, N., Kullathum, K., Pintavirooj, C.: Robot-arm control system using Leap Motion controller. In: 2016 International Conference on Biomedical Engineering (BME-HUST), pp. 109–112, October 2016
59. Bennett, M., Williams, T., Thames, D., Scheutz, M.: Differences in interaction patterns and perception for teleoperated and autonomous humanoid robots. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (2017)
60. Quigley, M., Faust, J., Foote, T., Leibs, J.: ROS: an open-source robot operating system. In: ICRA Workshop on Open Source Software (2009)
61. Koenig, N., Howard, A.: Design and use paradigms for Gazebo, an open-source multi-robot simulator. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 2149–2154. IEEE (2004)
62. Gong, L., Gong, C., Ma, Z., Zhao, L., Wang, Z., Li, X., Jing, X., Yang, H., Liu, C.: Real-time human-in-the-loop remote control for a life-size traffic police robot with multiple augmented reality aided display terminals. In: 2017 2nd International Conference on Advanced Robotics and Mechatronics (ICARM), pp. 420–425, August 2017
63. Quintero, C.P., Ramirez, O.A., Jägersand, M.: VIBI: assistive vision-based interface for robot manipulation. In: ICRA, pp. 4458–4463 (2015)
64. Hashimoto, S., Ishida, A., Inami, M., Igarashi, T.: TouchMe: an augmented reality based remote robot manipulation. In: The 21st International Conference on Artificial Reality and Telexistence, Proceedings of ICAT2011, vol. 2 (2011)
65. Pereira, A., Carter, E.J., Leite, I., Mars, J., Lehman, J.F.: Augmented reality dialog interface for multimodal teleoperation. In: 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) (2017)
66. Haring, K.S., Finomore, V., Muramato, D., Tenhundfeld, N.L., Redd, M., Wen, J., Tidball, B.: Analysis of using virtual reality (VR) for command and control applications of multi-robot systems. In: Proceedings of the 1st International Workshop on Virtual, Augmented, and Mixed Reality for HRI (VAM-HRI) (2018)
67. Lipton, J.I., Fay, A.J., Rus, D.: Baxter's homunculus: virtual reality spaces for teleoperation in manufacturing. IEEE Robot. Autom. Lett. 3(1), 179–186 (2018)
68. Oh, Y., Parasuraman, R., McGraw, T., Min, B.C.: 360 VR based robot teleoperation interface for virtual tour. In: Proceedings of the 1st International Workshop on Virtual, Augmented, and Mixed Reality for HRI (VAM-HRI) (2018)
69. Rosen, E., Whitney, D., Phillips, E., Ullman, D., Tellex, S.: Testing robot teleoperation using a virtual reality interface with ROS Reality. In: Proceedings of the 1st International Workshop on Virtual, Augmented, and Mixed Reality for HRI (VAM-HRI) (2018)
70. Gaurav, S., Al-Qurashi, Z., Barapatre, A., Ziebart, B.: Enabling effective robotic teleoperation using virtual reality and correspondence learning via neural network. In: Proceedings of the 1st International Workshop on Virtual, Augmented, and Mixed Reality for HRI (VAM-HRI) (2018)
71. Whitney, D., Rosen, E., Phillips, E., Konidaris, G., Tellex, S.: Comparing robot grasping teleoperation across desktop and virtual reality with ROS Reality. In: Proceedings of the International Symposium on Robotics Research (2017)
72. Allspaw, J., Roche, J., Lemiesz, N., Yannuzzi, M., Yanco, H.A.: Remotely teleoperating a humanoid robot to perform fine motor tasks with virtual reality. In: Proceedings of the 2018 Conference on Waste Management (2018)
73. Allspaw, J., Roche, J., Norton, A., Yanco, H.A.: Remotely teleoperating a humanoid robot to perform fine motor tasks with virtual reality. In: Proceedings of the 1st International Workshop on Virtual, Augmented, and Mixed Reality for HRI (VAM-HRI) (2018)
74. Bassily, D., Georgoulas, C., Guettler, J., Linner, T., Bock, T.: Intuitive and adaptive robotic arm manipulation using the Leap Motion controller. In: 41st International Symposium on Robotics ISR/Robotik 2014, pp. 1–7, June 2014
75. Lin, Y., Song, S., Meng, M.Q.H.: The implementation of augmented reality in a robotic teleoperation system. In: IEEE International Conference on Real-Time Computing and Robotics (RCAR), pp. 134–139. IEEE (2016)
76. Weichert, F., Bachmann, D., Rudak, B., Fisseler, D.: Analysis of the accuracy and robustness of the Leap Motion controller. Sensors 13(5), 6380–6393 (2013)
Metadata
Title
Augmented, Mixed, and Virtual Reality Enabling of Robot Deixis
Authors
Tom Williams
Nhan Tran
Josh Rands
Neil T. Dantam
Copyright Year
2018
DOI
https://doi.org/10.1007/978-3-319-91581-4_19