
Understandable robots - What, Why, and How

  • Thomas Hellström and Suna Bensch

Abstract

As robots become more and more capable and autonomous, there is an increasing need for humans to understand what robots do and think. In this paper, we investigate what such understanding means and includes, and how robots can be designed to support it. After an in-depth survey of related earlier work, we discuss examples showing that understanding covers not only the intentions of the robot, but also its desires, knowledge, beliefs, emotions, perceptions, capabilities, and limitations. The term understanding is formally defined, and the term communicative actions is defined to denote the various ways in which a robot may support a human’s understanding of the robot. A novel model of interaction for understanding is presented. The model describes how both human and robot may utilize a first- or higher-order theory of mind to understand each other and perform communicative actions in order to support the other’s understanding. It also describes simpler cases in which the robot performs static communicative actions in order to support the human’s understanding of the robot. In general, communicative actions performed by the robot aim at reducing the mismatch between the mind of the robot and the robot’s inferred model of the human’s model of the mind of the robot. Based on the proposed model, a set of questions is formulated to serve as support when developing and implementing the model in real interacting robots.
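The following is a minimal, illustrative sketch (not taken from the paper) of the mismatch-reduction idea summarized above: the robot holds its own mental state, maintains a second-order model of what the human believes that state to be, and greedily selects the communicative action predicted to shrink the gap the most. All class, function, and action names below are hypothetical.

from dataclasses import dataclass


@dataclass
class MentalState:
    """A toy first-order mind: a few labelled items the robot may hold."""
    items: dict  # e.g. {"intention": "fetch cup", "battery": "low"}

    def mismatch(self, other: "MentalState") -> int:
        """Count the items on which the two states disagree or that one of them lacks."""
        keys = set(self.items) | set(other.items)
        return sum(self.items.get(k) != other.items.get(k) for k in keys)


@dataclass
class Robot:
    mind: MentalState                  # the robot's actual mental state
    human_model_of_robot: MentalState  # robot's inferred model of the human's model of the robot

    def select_communicative_action(self, actions: dict) -> str:
        """Greedily pick the action whose predicted effect on the human's model
        of the robot best reduces the mismatch with the robot's own mind."""
        def predicted_mismatch(effect: dict) -> int:
            updated = MentalState({**self.human_model_of_robot.items, **effect})
            return self.mind.mismatch(updated)
        return min(actions, key=lambda name: predicted_mismatch(actions[name]))


robot = Robot(
    mind=MentalState({"intention": "fetch cup", "battery": "low"}),
    human_model_of_robot=MentalState({"intention": "idle"}),
)

# Each candidate communicative action maps to what it would lead the human to believe.
candidates = {
    "do_nothing": {},
    "blink_battery_led": {"battery": "low"},
    "speak_intention_and_status": {"intention": "fetch cup", "battery": "low"},
}
print(robot.select_communicative_action(candidates))  # -> "speak_intention_and_status"

In this toy setup the last action is chosen because it is predicted to remove both discrepancies between the robot's mind and the human's (modelled) view of it; in the paper's terms, it is the communicative action that most reduces the mismatch.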

Received: 2017-12-14
Accepted: 2018-05-17
Published Online: 2018-07-11

© 2018 Thomas Hellström and Suna Bensch

This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License.
