2013 | Original Paper | Book Chapter
Intent Capturing through Multimodal Inputs
Authors: Weimin Guo, Cheng Cheng, Mingkai Cheng, Yonghan Jiang, Honglin Tang
Published in: Human-Computer Interaction. Interaction Modalities and Techniques
Publisher: Springer Berlin Heidelberg
Virtual manufacturing environments require complex and accurate 3D human-computer interaction. A main problem of current virtual environments (VEs) is the heavy cognitive and motor load they place on users. This paper investigates multimodal intent delivery and intent inference in virtual environments. An eye-gaze modality is added to a virtual assembly system, and typical intents expressed through the combined dual-hand and eye-gaze modalities are designed. The reliability and accuracy of the eye-gaze modality are examined through experiments, which show that eye-gaze and hand multimodal cooperation has great potential to enhance the naturalness and efficiency of human-computer interaction (HCI).