2006 | Original Paper | Book Chapter
A Multi-Modal Interface for Road Planning Tasks Using Vision, Haptics and Sound
Authors: Matt Newcomb, Chris Harding
Published in: Advances in Visual Computing
Publisher: Springer Berlin Heidelberg
Planning transportation infrastructure requires analyzing combinations of many different types of geo-spatial information (maps). Displaying all of these maps together in a traditional Geographic Information System (GIS) limits its effectiveness through visual clutter and information overload. Multi-modal interfaces (MMIs) aim to improve the efficiency of human-computer interaction by combining several types of sensory modalities. We present a prototype virtual environment that uses vision, haptics and sonification for multi-modal GIS scenarios such as road planning. We use a point-haptic device (Phantom) for various haptic effects, and sonification to present additional non-visual data while the user draws on a virtual canvas. We conducted a user study to gather experience with this multi-modal system and to learn how users interact with geospatial data via various combinations of sensory modalities. The results indicate that certain forms of haptics and audio were preferentially used to present certain types of spatial data.
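To illustrate the kind of mapping such a system relies on, the sketch below shows one plausible way to translate a normalized map-layer value (e.g. terrain slope under the stylus) into a sonification pitch and a haptic stiffness coefficient. The function names, value ranges, and the exponential pitch mapping are illustrative assumptions, not the paper's actual implementation.

```python
def value_to_pitch(value, f_min=220.0, f_max=880.0):
    """Map a normalized data value in [0, 1] to a frequency in Hz.

    Uses an exponential interpolation between f_min and f_max so that
    equal steps in the data sound like roughly equal pitch intervals,
    a common sonification choice (assumption, not from the paper).
    """
    v = min(max(value, 0.0), 1.0)  # clamp to [0, 1]
    return f_min * (f_max / f_min) ** v


def value_to_stiffness(value, k_min=0.1, k_max=1.0):
    """Map a normalized data value to a haptic stiffness coefficient,
    expressed as a fraction of the device's maximum stiffness."""
    v = min(max(value, 0.0), 1.0)  # clamp to [0, 1]
    return k_min + (k_max - k_min) * v
```

For example, a mid-range terrain value of 0.5 maps to 440 Hz (one octave above 220 Hz) and a stiffness fraction of 0.55, while out-of-range inputs are clamped rather than extrapolated.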