Published in: International Journal of Social Robotics 5/2016

01.11.2016

Human Visual Attention Model Based on Analysis of Magic for Smooth Human–Robot Interaction

Authors: Yusuke Tamura, Takafumi Akashi, Shiro Yano, Hisashi Osumi



Abstract

In order to interact smoothly with humans, it is desirable that a robot can guide human attention and behavior. In this study, we developed a model of human visual attention for guiding a person's attention, based on an analysis of a magic trick performance. We measured the gaze points of people watching a video of a magic trick and compared them with the area where the magician intended to draw the spectator's attention. The analysis showed that the relationship between the magician's face, hands, and gaze plays an important role in guiding the spectator's attention. On the basis of these preliminary user studies, we developed a novel human attention model by integrating a saliency map with a manipulation map that describes the relationship between gaze and hands. An evaluation using the observed gaze points demonstrated that the proposed model explains human visual attention better than the saliency map alone while people are watching a video of a magic trick performance.
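The abstract describes combining a bottom-up saliency map with a "manipulation map" that captures the relation between the magician's gaze and hands. The sketch below is not the authors' implementation; it only illustrates one plausible way such a combination could be wired up. The Gaussian form of the manipulation map, the parameters `sigma` and `alpha`, and the random stand-in saliency map are all assumptions made purely for illustration.

```python
"""Illustrative sketch (not the authors' code): blend a bottom-up saliency
map with a hand/gaze-based 'manipulation map' into one attention map."""

import numpy as np

def gaussian_map(shape, center, sigma):
    """2-D Gaussian bump with peak value 1, centred on image point (x, y)."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    cx, cy = center
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))

def manipulation_map(shape, hand_pos, gaze_target, sigma=40.0):
    """Assumed model: attention is drawn toward the magician's hand and the
    location his gaze points at; take the maximum of the two bumps."""
    return np.maximum(gaussian_map(shape, hand_pos, sigma),
                      gaussian_map(shape, gaze_target, sigma))

def combined_attention(saliency, manipulation, alpha=0.5):
    """Normalise both maps to [0, 1] and blend them linearly."""
    s = (saliency - saliency.min()) / (np.ptp(saliency) + 1e-9)
    m = (manipulation - manipulation.min()) / (np.ptp(manipulation) + 1e-9)
    return alpha * s + (1.0 - alpha) * m

if __name__ == "__main__":
    h, w = 240, 320
    saliency = np.random.rand(h, w)          # stand-in for an Itti-Koch map
    manip = manipulation_map((h, w), hand_pos=(200, 120), gaze_target=(90, 60))
    attention = combined_attention(saliency, manip, alpha=0.5)
    print("peak attention at (y, x):",
          np.unravel_index(np.argmax(attention), attention.shape))
```

In the paper the form of the manipulation map and its weighting are derived from the measured gaze data; the fixed constants above exist only to keep the sketch self-contained and runnable.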


Metadata
Title
Human Visual Attention Model Based on Analysis of Magic for Smooth Human–Robot Interaction
Authors
Yusuke Tamura
Takafumi Akashi
Shiro Yano
Hisashi Osumi
Publication date
01.11.2016
Publisher
Springer Netherlands
Published in
International Journal of Social Robotics / Issue 5/2016
Print ISSN: 1875-4791
Electronic ISSN: 1875-4805
DOI
https://doi.org/10.1007/s12369-016-0354-y
