One of the main aims of humanoid robotics is to develop robots that are capable of interacting naturally with people. However, to understand the essence of human interaction, it is crucial to investigate the contributions of behavior and appearance. Our group’s research explores these relationships by developing androids that closely resemble human beings in both aspects. If humanlike appearance causes us to evaluate an android’s behavior by a human standard, we are more likely to notice deviations from human norms. The android’s motions must therefore closely match human performance to avoid looking strange, including such autonomic responses as the shoulder movements involved in breathing. This paper proposes a method to implement humanlike motions by mapping their three-dimensional appearance from a human performer to the android and then evaluating the verisimilitude of the visible motions using a motion capture system. Previous research has focused on copying joint angles from a person to a robot. Our approach has several advantages: (1) for an android with many degrees of freedom and kinematics that differ from those of a human being, it is difficult to calculate which joint angles would make the robot’s posture appear similar to the human performer’s; and (2) the motion that we perceive is at the robot’s surface, not necessarily at its joints, which are often hidden from view.
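The evaluation step described above, comparing visible surface markers on the android with those on the human performer, can be sketched as a least-squares rigid alignment followed by a residual error measure. The following Python sketch is an illustrative reconstruction, not the authors' implementation: the function names are invented, and two marker sets of equal size in point-to-point correspondence are assumed.

```python
import numpy as np

def rigid_align(P, Q):
    """Least-squares rotation R and translation t mapping points P onto Q
    (Kabsch/SVD method). P and Q are (N, 3) arrays of corresponding markers."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)          # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])         # guard against reflections
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

def surface_error(human, android):
    """Mean marker-to-marker distance after removing any global rigid offset,
    a simple proxy for how similar the two postures appear."""
    R, t = rigid_align(android, human)
    aligned = android @ R.T + t
    return np.sqrt(((aligned - human) ** 2).sum(axis=1)).mean()
```

Minimizing such a surface-level error, rather than matching joint angles directly, reflects the paper's point that the perceived motion lives at the robot's visible surface.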
- Generating Natural Motion in an Android by Mapping Human Motion
- Karl F. MacDorman
- Springer Singapore
- Chapter 4