Human-robot interaction (HRI) is one of the emerging areas in robotics. When robots communicate with people, non-verbal communication, including facial expressions, gestures, and gaze, plays an important role in expressing their emotions and intentions effectively. Thus, many studies have been carried out to generate proper non-verbal communication that would allow robots to be regarded as social agents. This paper proposes a method for generating facial expressions and gaze directions simultaneously. When the external environment is perceived, the robot’s emotion changes either instantly or gradually. This emotion is used to generate facial expressions by means of fuzzy measures and a fuzzy integral. At the same time, a fuzzifier is applied to the perceived information to produce useful human information, namely the number of faces and their sizes, from which the distances between the robot and the faces can be approximated. The human information is then used to select a gaze behavior among four candidate behaviors. Through the proposed method, robots can generate proper facial expressions and gaze behaviors at the same time. The effectiveness of the proposed method is demonstrated through simulation and experiments with a robotic head developed in the RIT Laboratory, KAIST.
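As a rough illustration of the pipeline sketched in the abstract, the following is a minimal, hypothetical sketch: it fuses emotion intensities with a Choquet-type fuzzy integral (one common fuzzy integral; the paper does not specify which form it uses) and selects a gaze behavior from face count and size. All function names, behavior labels, thresholds, and measure values are illustrative assumptions, not the authors' actual parameters.

```python
# Hypothetical sketch of the abstract's two stages. The fuzzy measure,
# behavior names, and thresholds below are invented for illustration.

def choquet_integral(values, measure):
    """Choquet integral of `values` (criterion -> score in [0, 1])
    with respect to a fuzzy measure `measure` (frozenset -> weight)."""
    # Sort criteria by ascending score, then accumulate the weighted
    # increments over the shrinking set of remaining criteria.
    items = sorted(values.items(), key=lambda kv: kv[1])
    total, prev = 0.0, 0.0
    remaining = set(values)
    for name, score in items:
        total += (score - prev) * measure[frozenset(remaining)]
        prev = score
        remaining.discard(name)
    return total

def select_gaze_behavior(num_faces, max_face_size):
    """Pick one of four illustrative gaze behaviors from fuzzified
    human information (face count, largest normalized face size)."""
    if num_faces == 0:
        return "scan"          # no one visible: look around
    if num_faces == 1:
        # Larger face ~ closer person, so attend directly.
        return "attend" if max_face_size > 0.2 else "approach-gaze"
    return "distribute"        # several people: share gaze among them

# Example: fuse two emotion intensities under an assumed fuzzy measure.
emotions = {"joy": 0.8, "surprise": 0.4}
measure = {
    frozenset({"joy", "surprise"}): 1.0,
    frozenset({"joy"}): 0.6,
    frozenset({"surprise"}): 0.5,
    frozenset(): 0.0,
}
intensity = choquet_integral(emotions, measure)  # 0.4*1.0 + 0.4*0.6 = 0.64
```

The real system would map the fused intensity to actuator commands for the robotic head and run both stages on each perception update; this sketch only shows the selection logic.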
- A Simultaneous Generation Method for Gaze Behaviors and Facial Expressions of a Robotic Head
- Springer Berlin Heidelberg