2013 | Original Paper | Book Chapter
Using the Transferable Belief Model for Multimodal Input Fusion in Companion Systems
Authors: Felix Schüssel, Frank Honold, Michael Weber
Published in: Multimodal Pattern Recognition of Social Signals in Human-Computer-Interaction
Publisher: Springer Berlin Heidelberg
Systems with multimodal interaction capabilities have gained a lot of attention in recent years. Especially so-called companion systems, which offer an adaptive, multimodal user interface, show great promise for natural human-computer interaction. While more and more sophisticated sensors become available, current systems capable of accepting multimodal inputs (e.g. speech and gesture) still lack the robustness of input interpretation needed for companion systems. We demonstrate how evidential reasoning can be applied in the domain of graphical user interfaces in order to provide the reliability and robustness expected by users. For this purpose, an existing approach using the Transferable Belief Model from the robotics domain is adapted and extended.
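To give a sense of the kind of evidential reasoning the Transferable Belief Model provides, the following minimal Python sketch shows the TBM's standard unnormalized conjunctive combination rule applied to two hypothetical modality readings (speech and gesture). The frame of discernment, the mass values, and all names are illustrative assumptions and do not reflect the chapter's actual fusion architecture; the sketch only illustrates how conflicting evidence is retained as mass on the empty set rather than normalized away, as in Dempster's rule.

```python
# Minimal sketch of the TBM's unnormalized conjunctive combination rule,
# applied to two hypothetical input modalities. Hypotheses and mass values
# are made up for illustration and are not taken from the chapter.
from itertools import product

def combine_conjunctive(m1, m2):
    """Combine two mass functions whose focal elements are frozensets.

    Unlike Dempster's rule, the TBM does not normalize: mass assigned to
    the empty set is kept as an explicit measure of conflict between sources.
    """
    combined = {}
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        focal = a & b  # set intersection; may be empty (conflict)
        combined[focal] = combined.get(focal, 0.0) + ma * mb
    return combined

# Hypothetical evidence: speech mostly supports "delete", gesture mostly
# leaves the action open between "delete" and "move".
speech  = {frozenset({"delete"}): 0.7, frozenset({"delete", "move"}): 0.3}
gesture = {frozenset({"move"}): 0.4,  frozenset({"delete", "move"}): 0.6}

fused = combine_conjunctive(speech, gesture)
for focal, mass in fused.items():
    print(set(focal) or "{} (conflict)", round(mass, 2))
```

In this toy example, 0.28 of the combined mass lands on the empty set, signalling partial conflict between the two modalities, which a fusion component can use as a cue for clarification rather than forcing a decision.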