2013 | Original Paper | Book Chapter
A Test-Bed for Text-to-Speech-Based Pedestrian Navigation Systems
Authors: Michael Minock, Johan Mollevik, Mattias Åsander, Marcus Karlsson
Published in: Natural Language Processing and Information Systems
Publisher: Springer Berlin Heidelberg
This paper presents an Android system to support eyes-free, hands-free navigation through a city. The system operates in two distinct modes: manual and automatic. In manual mode, a human operator sends text messages which are realized via TTS into the subject's earpiece. The operator sees the subject's GPS position on a map, hears the subject's speech, and sees a 1 fps movie taken from the subject's phone, worn as a necklace. In automatic mode, a programmed controller attempts to achieve the same guidance task as the human operator.
We have fully built our manual system and have verified that it can be used to successfully guide pedestrians through a city. All activity is logged by the system into a single, large database state. We are building a series of automatic controllers, which requires us to confront a set of research challenges, some of which we briefly discuss in this paper. We plan to demonstrate our work live at NLDB.
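The paper does not give the controllers' internals, but the core loop of such a guidance controller can be sketched: compare the subject's latest GPS fix against a planned route of waypoints, drop waypoints already reached, and phrase a short message for the TTS channel. The following is a hypothetical minimal sketch, not the authors' implementation; the function names (`haversine_m`, `next_instruction`) and the 15 m "reached" threshold are illustrative assumptions.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in metres between two GPS fixes
    # (spherical Earth, radius 6371 km).
    r = 6_371_000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def next_instruction(pos, route, reached_m=15.0):
    """Return (tts_text, remaining_route).

    pos is a (lat, lon) fix; route is a list of (lat, lon) waypoints.
    Waypoints within reached_m metres are considered passed; the
    instruction text would be sent to the subject's earpiece via TTS.
    """
    while route and haversine_m(pos[0], pos[1],
                                route[0][0], route[0][1]) < reached_m:
        route = route[1:]
    if not route:
        return "You have arrived.", route
    d = haversine_m(pos[0], pos[1], route[0][0], route[0][1])
    # Round to the nearest 10 m so the spoken message stays simple.
    return (f"Continue about {int(round(d / 10) * 10)} metres "
            f"to the next waypoint.", route)
```

A real controller would add turn directions (from bearings between consecutive waypoints) and timing logic for when to speak, but the reached-waypoint/next-message cycle above is the essential structure.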