2014 | Original Paper | Book Chapter
Narrative Map Augmentation with Automated Landmark Extraction and Path Inference
Authors: Vladimir Kulyukin, Thimma Reddy
Published in: Computers Helping People with Special Needs
Publisher: Springer International Publishing
Various technologies, including GPS, Wi-Fi localization, and infrared beacons, have been proposed to increase travel independence for visually impaired (VI) and blind travelers. Such systems take readings from sensors, localize those readings on a map, and instruct VI travelers where to move next. Unfortunately, sensor readings can be noisy or absent, which decreases the traveler’s situational awareness. However, localization technologies can be augmented with solutions that put the traveler’s cognition to use. One such solution is narrative maps, i.e., verbal descriptions of environments produced by orientation and mobility (O&M) professionals for blind travelers. The production of narrative maps is costly, because O&M professionals must travel to designated environments and describe large numbers of routes. Complete narrative coverage may not be feasible due to the sheer size of many environments. But the quality of produced narrative maps can be improved by automated landmark extraction and path inference. In this paper, an algorithm is proposed that uses scalable natural language processing (NLP) techniques to extract landmarks and their connectivity from verbal route descriptions. Extracted landmarks can subsequently be annotated with sensor readings, used to find new routes, or used to track the traveler’s progress on different routes.
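The paper itself describes the extraction algorithm; as a rough illustration of the general idea, the sketch below extracts landmark phrases from route descriptions with a simple preposition-cue pattern, links consecutively mentioned landmarks into a connectivity graph, and infers a new path by breadth-first search. The cue words, regular expression, and example sentences are all assumptions for illustration, not the authors' actual method.

```python
import re
from collections import defaultdict, deque

# Hypothetical preposition cues that often precede landmarks in route text
# (an assumption for this sketch, not the paper's extraction rules).
CUE_WORDS = r"past|at|to|through|toward|from|near|by"
PATTERN = rf"\b(?:{CUE_WORDS})\s+the\s+([a-z ]+?)(?=[,.]|\s+(?:{CUE_WORDS})\b|$)"

def extract_landmarks(description):
    """Return landmark phrases in order of mention in one route description."""
    return re.findall(PATTERN, description.lower())

def build_graph(descriptions):
    """Link consecutively mentioned landmarks into an undirected graph."""
    graph = defaultdict(set)
    for desc in descriptions:
        landmarks = extract_landmarks(desc)
        for a, b in zip(landmarks, landmarks[1:]):
            graph[a].add(b)
            graph[b].add(a)  # assume segments are traversable both ways
    return graph

def infer_path(graph, start, goal):
    """BFS over the landmark graph: find a route not described verbatim."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no connecting route found

routes = [
    "Walk past the entrance, then turn left at the elevator.",
    "From the elevator, go through the double doors to the cafeteria.",
]
graph = build_graph(routes)
print(infer_path(graph, "entrance", "cafeteria"))
# -> ['entrance', 'elevator', 'double doors', 'cafeteria']
```

Note that the inferred entrance-to-cafeteria route spans two separate descriptions: landmark connectivity lets new routes be composed without an O&M professional narrating every pairwise route.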